My name is Diego Melgar from the University of Oregon. Good morning, good afternoon, or good evening wherever you might be, and welcome to our meeting of the Committee on Solid Earth Geophysics on Time-Dependent Earthquake Hazard. I'm going to give a few introductory remarks about our committee and its mission, and then about the subject and our speakers for today. Our committee is part of the National Academies of Sciences. The mission of the NAS, as the nation's preeminent source of expert, evidence-based, and objective advice on science, engineering, and health matters, is to inform policy with evidence, spark progress and innovation, and confront challenging issues for the benefit of society. The NAS importantly serves as a neutral convening body to provide guidance on program direction and priorities, help resolve scientific or science-policy controversies, provide technical analyses and independent peer review, and inform science-policy debates. It builds and maintains scientific networks while summarizing the state of the science for audiences of varying technical knowledge and increasing the visibility of emerging scientific fields and policy issues. We are the Committee on Solid Earth Geophysics, or COSEG, and we support community discussion and community-agency interactions, and we encourage understanding, review, and exchange regarding Earth structure, dynamics, and evolution. The committee fosters long-term efforts to collect, store, and disseminate related data and to monitor geodynamic events and nuclear testing treaties. All of our meeting resources, reports, and webinars are available online. Our membership and staff are listed here: Thorsten Becker from UT Austin is the COSEG chair, Mark Behn from Boston College, Jeff Freymueller from Michigan State University, Rengin Gök from Lawrence Livermore National Lab, myself from the University of Oregon, Steve Nerem from the University of Colorado Boulder, Donna Shillington from Northern Arizona University, and Jessica Warren from the University of Delaware. Our staff member is Deb Glickson, whose name should probably be in an even bigger font than all of ours given her outsized importance. Our sponsors are the National Aeronautics and Space Administration (NASA), the National Science Foundation (NSF), the U.S. Department of Energy (DOE), and the U.S. Geological Survey (USGS). We meet frequently throughout the year on a wide variety of topics. Our most recent meeting was about how plates are made and preserved; before that we discussed novel geophysical datasets for environmental applications; solid earth science and sea-level change; tracking environmental changes due to COVID through remote sensing; enhancing quantitative capacities for geoscience programs; and "beyond the black box," the future of machine learning and data-intensive applications for solid earth geosciences. Now I'd like to talk briefly about our motivation for today's meeting before I turn it over to our experts, who are going to lead us through the discussion. The very basic equation outlined here can in a way be thought of as encoding the relationship between earthquakes and society. It says that the risk, the exposure to harm, danger, and loss that we are susceptible to, is a combination of the hazard that we're going to focus on today.
The hazard can be understood, pretty basically, as how frequently the hazardous phenomena, shaking or tsunamis, occur. But that needs to be measured against the exposure, which is a measure of the people or assets in the impacted area, and the vulnerability, which is the propensity for loss of life or damage to those assets. Perhaps an example helps to illustrate what we're talking about. The 2010 Haiti earthquake illuminates this: the hazard in the region around Port-au-Prince, shown here in this figure, I've conceptually quantified as medium, because even though the city is right next to the Enriquillo-Plantain Garden fault, the slip rates on that fault are relatively moderate, about 7 millimeters per year, not nearly as fast as other systems. But the exposure is quite high, with Port-au-Prince being a very densely populated city. And more importantly, the vulnerability is extremely high: as you can see from some of these pictures, there are a lot of masonry structures, which we know, through concepts like the fragility curves shown here, are extremely susceptible to collapse even under moderate levels of shaking. As a result, the risk was very high and the losses were very high during that event, because of those other two variables, the exposure and the vulnerability, even if the hazard itself might not have been perceived as high. So, tragic as that event was, with 200,000 lives lost, it was not altogether unexpected. Here today we're going to focus on one part of this equation, the hazard, and I want to talk a little about that by first introducing one of the common data products that are produced when people talk about hazard. Shown here is the 2018 USGS national seismic hazard map, the latest one; the colors encode the likelihood of damaging shaking in the next 100 years, with red being very high, yellow a little less so, and blue being low. These hazard maps are difficult to build. They require a mathematical formalism, in this case a probabilistic framework, to put together, and they represent a scientific synthesis of what is known for a region. To make them, we need to know where the faults are. We need to know their slip rates, so that we know the expected rates of occurrence of earthquakes. We need to know about wave-propagation phenomena, waves being ducted through a slab or through a fault damage zone, for example. And we need to know about site effects that might amplify or attenuate the damaging shaking, such as basin or topographic effects. It's a very big effort to put a map like this together, and because of that, and because of its significant uncertainties, one common simplifying assumption is to make the calculation time-independent, in essence to say that the hazard does not change with time. Of course, we know that the relationship between earthquakes and society, not just the hazard, must change through time: the hazard changes through time, the exposure changes through time as cities are built out, and the vulnerability changes through time as building codes change. Here today we're going to talk specifically about how to make hazard calculations time-dependent. There are many reasons why we would expect seismic hazard to change with time. One of them is simply tectonics: century- to millennia-scale processes affect where earthquakes are likely.
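To make the risk relationship concrete, here is a minimal sketch, with entirely invented numbers, of how a hazard curve, an exposure count, and fragility-style vulnerability functions combine into a risk estimate, in the spirit of the Haiti example: the same hazard and exposure produce very different losses depending on vulnerability.

```python
import numpy as np

# Hypothetical annual rates of exceeding each shaking level (the "hazard").
pga = np.array([0.1, 0.2, 0.4, 0.8])               # peak ground acceleration, g
annual_rate = np.array([0.05, 0.02, 0.005, 0.001])  # exceedance rate, 1/yr

# Convert exceedance rates to occurrence rates of each shaking bin.
occ_rate = annual_rate - np.append(annual_rate[1:], 0.0)

# Invented fragilities: probability of collapse at each shaking level
# (the "vulnerability"). Unreinforced masonry is far more fragile.
p_collapse_masonry = np.array([0.05, 0.25, 0.7, 0.95])
p_collapse_engineered = np.array([0.0, 0.01, 0.1, 0.4])

exposure = 10_000  # number of buildings in the impacted area (invented)

for label, frag in [("masonry", p_collapse_masonry),
                    ("engineered", p_collapse_engineered)]:
    # Expected collapses per year = sum over shaking levels of
    # (rate of that shaking) x (exposure) x (fragility).
    risk = np.sum(occ_rate * exposure * frag)
    print(f"{label}: ~{risk:.1f} expected collapses per year")
```

With these made-up inputs the masonry stock suffers roughly ten times the expected annual collapses of the engineered stock, which is the Port-au-Prince lesson in miniature.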
Shown here in this figure is simply how long it has been for each fault since its most recent earthquake. The interpretation is that the longer the time that has elapsed since the previous damaging earthquake, the closer that fault perhaps is to failure, and the more hazardous it should be considered. Another reason hazard necessarily changes is shorter-term stress changes. Shown here is a map of the likelihood of magnitude 3.5 and larger events following the 2019 Ridgecrest magnitude 7.1 mainshock. We expect that the occurrence of that earthquake loads or unloads neighboring faults, so the likelihood of other events should change as a result of that earthquake. There's also human influence: we know very well that fracking, and particularly wastewater injection, can change the rates of seismicity in certain regions. Shown here, for example, is the one-year probability of damaging shaking for the state of Oklahoma, which is far from a plate boundary but whose hazard was perceived to be much higher for that particular year given human interaction with a critically stressed crust. So that's what we're going to talk about today: how can new scientific advances help us understand the nature of this time dependence, and what are the mathematical formalisms we can use to quantify it? It's a very thorny question, and to try to address it we've convened a series of experts. Today's meeting will be conducted in two separate sessions. Here's the schedule for our first session; we will have a short panel discussion at the end of the session with our speakers and then take a short break. Following that, we will have a second session looking more closely at losses and impacts and connections to risk, and in the same way we will have a panel discussion with the session-two speakers at the end, followed by a longer panel discussion with speakers from both sessions one and two. For online attendees, I'll remind you that you can ask questions through the Q&A feature of the webinar: navigate to the Q&A button at the bottom right of your screen, type in your question there, and the moderators of each session will pose the questions to the speakers. With that in mind, I'd like to introduce our first set of speakers, Dr. Marco Pagani and Dr. Richard Styron from the Global Earthquake Model (GEM) Foundation. Marco Pagani is the coordinator of the seismic hazard team at GEM; he has more than 20 years of experience in probabilistic seismic hazard analysis and seismic microzonation. Dr. Richard Styron is an active-fault specialist for GEM, and he also runs Earth Analysis, a research and consulting firm; his research mostly focuses on faulting, stress, and lithospheric deformation. Marco?

Actually, I will be giving the talk. Let me share my screen here. Can everyone see that? Great. Hi everyone, thanks for coming. My name is Richard Styron and I'm a hazard scientist at the Global Earthquake Model Foundation, or the GEM Foundation. Today I'm going to provide an overview of both the different phenomena and the different modeling approaches involved in time-dependent seismic hazard modeling. First, I'll introduce seismic hazard modeling. Seismic hazard analysis is a quantitative method to compute the values of ground shaking expected at some site during a given time interval or from a given event.
At regional scales and larger, where a broad range of earthquakes must be considered, the state of practice is probabilistic seismic hazard analysis, or PSHA. PSHA calculates the probability of exceeding some intensity of ground shaking in a given time interval. It integrates over all potential sources of engineering concern and incorporates uncertainties in parameters as well as the randomness, the natural variation, in the rate of earthquake production from those seismic sources. There are two components to a PSHA model. The first is the seismic source characterization, which defines the locations, geometries, rates, and magnitudes of all potential earthquakes in the study area. A variety of source types are used, from very general spatial polygons or grids that produce distributed seismicity, to specific faults, to more complex sources that may represent subduction-zone megathrusts or subducting slabs. The second component is the ground motion characterization, which defines how the seismic energy released by each earthquake propagates and attenuates before reaching the sites where we compute the hazard. Because the impacts of time dependence on ground-motion attenuation are minor, we're going to concern ourselves primarily with the source characterization. One thing to note here, which was not intuitive to a fault geologist like myself but is probably more obvious to engineers and seismologists, is that hazard calculations are always performed at a site or a collection of sites: geographical locations where someone may want to build a building or whatever, accounting for seismic sources within tens or hundreds of kilometers. The calculations are not typically performed for individual faults or earthquakes without regard for where the ground shaking would be felt or measured; the ground-motion component has to be included in the hazard calculations. Here's a picture of a hazard curve, which is the fundamental result of a PSHA at a single site. The intensity of ground shaking, for example the peak ground acceleration, is on the x-axis, and the rate, or roughly the probability, of exceeding that ground-motion intensity is on the y-axis. The reciprocals of the exceedance rates on the y-axis are called return periods, which are roughly analogous to a hundred-year or thousand-year flood. Hazard curves always show these monotonic decreases in probability with increasing levels of ground shaking, demonstrating that stronger ground shaking is less frequent than weaker ground shaking; stronger shaking can come either from larger earthquakes or from closer earthquakes. One other important thing to note is that different user groups are interested in different return periods in order to properly manage their risk. Typical buildings will only stand for a few decades and can't absorb huge engineering costs, so they're designed around fairly frequent, lower levels of ground shaking. Larger, important community buildings, such as hospitals, schools, and other civic infrastructure, should withstand greater shaking, from stronger or closer events with thousand-year return periods, while very critical facilities, such as power plants, may need to be engineered for rarer, even stronger events. So there are three broad classes of phenomena in earthquake occurrence and time dependence. The first is what's called quasi-periodic occurrence of large earthquakes on a single source, typically a single well-known fault.
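A minimal sketch of those ideas, with a made-up source and a placeholder attenuation relation (not a published GMPE, and ignoring aleatory ground-motion variability), shows how a hazard curve falls off monotonically and how the return period is just the reciprocal of the exceedance rate:

```python
import numpy as np

# Toy earthquake source: Gutenberg-Richter rates for magnitude bins (invented).
mags = np.arange(5.0, 8.1, 0.5)
a, b = 4.0, 1.0                       # invented G-R parameters
rate = 10 ** (a - b * mags)           # annual rate of M >= each magnitude
rate -= np.append(rate[1:], 0.0)      # convert to per-bin occurrence rates

dist_km = 30.0  # site-to-source distance

def median_pga(m, r):
    """Placeholder attenuation relation (shape only, not a real GMPE)."""
    return np.exp(-1.0 + 0.9 * m - 1.3 * np.log(r + 10.0))

# Hazard curve: annual rate of exceeding each ground-motion level, summing
# contributions from every magnitude bin whose median motion exceeds it.
pga_levels = np.array([0.05, 0.1, 0.2, 0.4, 0.8])
exceed_rate = np.array([
    rate[median_pga(mags, dist_km) >= level].sum() for level in pga_levels
])

for level, lam in zip(pga_levels, exceed_rate):
    rp = 1.0 / lam if lam > 0 else float("inf")
    print(f"PGA >= {level:4.2f} g: rate {lam:.2e}/yr, return period ~{rp:,.0f} yr")
```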
The second is aftershocks and foreshocks, which are very familiar concepts even if they can be quite tricky to define precisely, and the third class is clustered or triggered mainshocks on nearby fault sources. From a geophysical perspective these phenomena are likely linked, but we don't fully understand all the science here yet. Major earthquakes often occur on faults somewhat regularly spaced in time, particularly on large plate-boundary faults. Most well known are the big plate-boundary strike-slip faults, such as the San Andreas or the Alpine fault in New Zealand; on these faults we see a fairly regular occurrence of major earthquakes of relatively similar magnitude. The plot on the upper right has the Alpine fault on top, with earthquake occurrences shown as diamonds through time, and we can see that this thing ruptures almost like clockwork. On other faults, such as the San Andreas and the Dead Sea transform, there's a little more variation in the recurrence intervals, but looked at statistically they're still quasi-periodic. From that quasi-periodic model we can look at the rate of earthquake occurrence on a fault as a function of time since the last event, which is shown in the plot on the bottom: time since the last event, normalized by the mean recurrence interval, is on the x-axis, and the rate of earthquake occurrence, given that an event has not yet occurred, is on the y-axis. The most important thing is that following a major event, the likelihood of earthquake occurrence drops to pretty much zero and then starts climbing until you get to about the mean recurrence interval. What happens after that is really dependent on the statistical model used; there's a lot of variation, and it does affect the hazard pretty dramatically, but we don't really have good ways of discriminating among models with the kind of limited paleoseismic datasets that we have. Something important about quasi-periodic earthquakes is that they're tied to what we call characteristic earthquakes, or characteristic faults, where a single fault or fault segment ruptures in large events of relatively consistent magnitude that release all of the accumulated stress or strain on the fault, and then there must be some period of tectonic reloading before the fault can rupture again. For this to be something we can use in modeling, these faults must have persistent segmentation: ruptures must be relatively contained within each fault segment, and those segments must act somewhat independently. And let me say that that's somewhat contentious behavior. Next we can move on to aftershocks, which are the best-observed time-dependent earthquake behavior. Aftershock rates decrease over the minutes to decades following a mainshock; this is usually described with what we call Omori's law. Furthermore, aftershock magnitudes tend to be smaller than the mainshock magnitude; in general the largest aftershock is about one magnitude unit smaller than the mainshock, although there's variation here too. Because the magnitudes of aftershocks tend to be so much smaller than the mainshock, and they are so close to it, aftershocks are often ignored in long-term PSHA, but they can be treated in short-term PSHA following large events.
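The conditional-rate behavior just described for quasi-periodic faults can be sketched with a renewal model. Here is a minimal example using a Brownian passage time (inverse Gaussian) distribution, with an invented mean recurrence interval and aperiodicity; the hazard rate sits near zero right after an event and climbs toward and past the mean recurrence:

```python
import numpy as np
from scipy.stats import invgauss

# Brownian passage time renewal model for one fault (parameters invented).
mean_ri = 300.0   # mean recurrence interval, years
alpha = 0.5       # aperiodicity (coefficient of variation)

# scipy's invgauss has mean mu*scale and CV**2 = mu, so mu = alpha**2
# and scale = mean_ri / alpha**2 reproduce the desired mean and CV.
dist = invgauss(mu=alpha**2, scale=mean_ri / alpha**2)

for t in [10, 100, 300, 600, 1200]:  # years since the last earthquake
    hazard_rate = dist.pdf(t) / dist.sf(t)  # instantaneous rate given no event yet
    print(f"t = {t:5d} yr: conditional rate = {hazard_rate:.5f}/yr "
          f"(Poisson rate would be {1/mean_ri:.5f}/yr)")
```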
Finally, the last phenomenon, which I think is the most poorly understood, is clustered mainshocks. Here we have nearby faults that rupture relatively close in time with respect to the mean inter-event times of these earthquakes. These are well described in instrumental and paleoseismic catalogs; statistically they're unlikely to be random, and they're probably evidence of earthquakes triggering each other through stress interaction. But there are multiple suspected physical mechanisms of stress interaction, and importantly these mechanisms are non-exclusive, it doesn't have to be one or the other, so a proper model must account for all of them in their proper balance. Now we're going to talk about the different modeling strategies involved in large-scale PSHA. One thing I want to note first is that these approaches are mostly at the research level; they're not yet common components of national-scale seismic hazard analysis. My employer, the GEM Foundation, is a nonprofit that collects and produces seismic hazard models worldwide, and we have a global hazard mosaic that represents the state of the art in regional- to national-scale models produced by the top public institutions globally. Of the 31 models making up the global hazard mosaic, only two have time-dependent components: Japan and the conterminous U.S. There's a third major model, the UCERF3 time-dependent model for California, made by the U.S. Geological Survey; we have implemented the time-independent, not the time-dependent, version within the GEM mosaic, although this will likely change in the future. Within time-dependent PSHA there are multiple modeling strategies for different phenomena. The first is aftershock PSHA; then we can deal with clustered mainshocks as well as periodic earthquakes and characteristic sources; and finally there are evolutionary or interactive models that describe how faults and earthquakes interact with each other and how the system evolves with time. Aftershock PSHA is typically done conditional on the occurrence of a mainshock, immediately following the earthquake; this is often called operational earthquake forecasting. We see elevated levels of seismic hazard in the days to decades after the event, and this hazard decreases with time, commensurate with the Omori decay of earthquake rates, eventually settling back to the mean rate. There are statistical methods in development to incorporate this into national-scale PSHA, for example the epidemic-type aftershock sequence models that Morgan Page, who's the next speaker, and others have been developing for the U.S. GEM also has some experimental efforts at incorporating this into classical PSHA, but I'm not going to talk in detail about that.
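The Omori-style decay behind operational earthquake forecasting is easy to sketch. With invented modified-Omori parameters, the expected aftershock count in any forward window, and the Poisson probability of at least one event, fall off as the sequence ages:

```python
import numpy as np

# Modified Omori law: aftershock rate n(t) = K / (t + c)**p (parameters invented;
# K is events/day above some magnitude of interest).
K, c, p = 200.0, 0.05, 1.1

def expected_count(t1, t2, steps=100_000):
    """Expected number of aftershocks between days t1 and t2 (Riemann sum)."""
    t = np.linspace(t1, t2, steps)
    dt = t[1] - t[0]
    return np.sum(K / (t + c) ** p) * dt

for window in [(0, 7), (7, 30), (30, 365), (365, 3650)]:
    n = expected_count(*window)
    p_one_or_more = 1.0 - np.exp(-n)  # Poisson probability of at least one event
    print(f"days {window[0]:4d}-{window[1]:4d}: "
          f"E[N] = {n:7.1f}, P(>=1) = {p_one_or_more:.3f}")
```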
The next type of time-dependent model is what are called cluster models, where a group of several earthquakes happens in quick succession, relatively closely spaced in time. The cluster as a whole can happen at any time, but all of the individual earthquakes within it will occur together. This is implemented for the New Madrid earthquakes in the central Mississippi River valley, following the 1811-1812 events, and there's also room for adding pretty complex clustering behavior, including multiple mutually exclusive rupture combinations: for example, you can have a subduction zone that can rupture in either one magnitude 9 event or multiple closely spaced magnitude 8 events, but not both, within that framework. Cluster models are interesting because, compared to time-independent hazard, the ground motion varies relative to the end-member models at different return periods. This plot is from Boyd (2012): the red curve represents the three New Madrid earthquakes treated independently, the black curve represents one single event from that cluster, and the blue curve represents the cluster model. At higher frequencies of exceedance, or lower ground motions, the clustered model is very similar to a single event: the likelihood of ground shaking is not very different from one event. But as you get to less frequent events and stronger ground shaking, the cluster model becomes asymptotic toward treating all of the events independently. This just highlights that different use cases will see different effects of time dependence from the same set of sources and the same model. The Japan model has implemented time-dependent characteristic ruptures for much of its seismic sources: there are 233 fault sources in Japan that are time-independent and 123 that are time-dependent, following a Brownian passage time model, and these have characteristic earthquakes, meaning the entire fault source ruptures at once in large-magnitude events. The subduction source is also segmented, and there are time-dependent, mutually exclusive ruptures for the largest events, but not for the smaller events, the magnitude fives and sixes that happen very regularly in Japan. If we look at a single characteristic source and the hazard predicted from it using the Brownian passage time model, which is the most common formulation for time-dependent characteristic sources, we can compare it to a time-independent, or Poisson, model. The time-independent model is shown here as a dashed black line, and the time-dependent hazard curves are shown at various times since the last earthquake, the most recent after the last mainshock shown here in blue. We see that the hazard immediately following the last earthquake is very low, and the Poisson, or time-independent, model greatly overestimates it; but with increasing time since the last event, the expected hazard eventually exceeds the time-independent model, so then we're underestimating it by assuming time independence. These effects are magnified at the lower ground-shaking intensities, the higher frequencies of exceedance, relative to the high ground-shaking side. However, this is what you get for a single source; there's been some recent research looking at collections of time-dependent sources that interact, and that's not what we see there. There's a lot of variation in the expected behavior.
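That crossover, where a renewal source sits below the Poisson forecast right after an event and above it long afterward, can be reproduced with a few lines. This sketch uses an invented Brownian passage time source and compares the conditional probability of an event in the next 50 years against the Poisson value:

```python
import numpy as np
from scipy.stats import invgauss

mean_ri, alpha = 1000.0, 0.4           # invented recurrence parameters
dist = invgauss(mu=alpha**2, scale=mean_ri / alpha**2)

window = 50.0                          # forecast window, years
p_poisson = 1.0 - np.exp(-window / mean_ri)

for elapsed in [50, 250, 500, 1000, 2000]:
    # P(event in next 50 yr | no event in `elapsed` yr since the last one)
    p_cond = (dist.cdf(elapsed + window) - dist.cdf(elapsed)) / dist.sf(elapsed)
    print(f"elapsed {elapsed:5.0f} yr: BPT P = {p_cond:.4f}  "
          f"vs Poisson P = {p_poisson:.4f}")
```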
There are also a lot of phenomenological problems with characteristic ruptures, as well as modeling problems. Modern and paleoseismic observations support varying rupture behavior on different segments. The top plot is from DuRoss (2016) and shows the Wasatch fault: when we look at the paleoseismic record, we can see that sometimes you have sub-segment rupture behavior, and sometimes you have ruptures that go from one sub-segment to the next, crossing a segment boundary, but don't rupture any entire segment. We've also seen plenty of multi-segment ruptures and even multi-fault ruptures, such as the Kaikōura earthquake in New Zealand, shown here, which had an enormous array of different faults that all ruptured in the same earthquake. So we know that characteristic ruptures and quasi-periodic faults can't be the most accurate account of seismicity on crustal faults. There are also modeling issues. If you have a small or moderate-magnitude sub-segment earthquake, does this reset the clock fully, or maybe just partially, for that segment? How do you consider that? What happens if you have multi-segment ruptures, with a lot more slip than you would expect during a sub-segment rupture? Should we be using a different variable than time to track the state of the system? Shear stress is nice from a fault-mechanics standpoint but incredibly difficult to measure; accumulated elastic strain, or potential strain, is a bit easier to measure but a little harder to relate to fault mechanics. So there's a lot of work to be done on this front, figuring out the best approach. If we're considering sub-segment ruptures and floating ruptures, where a smaller earthquake can occur anywhere on a larger fault surface, we're basically forced to discretize the fault surface into a lot of small units, and therefore we need at least one state variable for every one of those units in order to describe the time dependence of the system. When you're dealing with a national model that has hundreds or a thousand faults, and you break them up into a lot of little pieces, you start to really increase the size and computational expense of the model. Finally, if we want to get into earthquake triggering and rupture interaction, the model complexity really starts to explode. Rupture interaction means rupture dependence, so we can no longer treat all of the sources as independently operating sources and ruptures; we have to consider dependence. In terms of the basic variables used to represent the rates or states, we would go from a single vector of independent variables, one per rupture, to a matrix, where we have more or less squared that, to account for all of the interactions. If you have an earthquake model with a million ruptures, which is pretty common, all of a sudden you can easily be dealing with a million squared terms, and that becomes really complex, so we tend not to track all of the states at once as separate variables. Furthermore, the physics and statistics of this interaction are still unknown, and the interactions can change over time; they should change over time even in the absence of continued seismicity. The plot on top shows the evolution of stresses from viscoelasticity on the North Anatolian fault following the İzmit and Düzce earthquakes: even though there are only two earthquakes here, the system continues to evolve through time.
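A quick back-of-the-envelope, with representative but invented numbers, shows why that squared bookkeeping is prohibitive:

```python
# Cost of tracking pairwise rupture interaction versus independent sources.
n_ruptures = 1_000_000          # a typical count for a large national model
bytes_per_term = 4              # one float32 coefficient per term

independent = n_ruptures * bytes_per_term       # one state per rupture
pairwise = n_ruptures ** 2 * bytes_per_term     # full interaction matrix

print(f"independent state vector: {independent / 1e6:,.0f} MB")
print(f"dense interaction matrix: {pairwise / 1e12:,.0f} TB")
# ~4 MB versus ~4 TB: why dense pairwise interaction is intractable and
# simulation or sampling approaches are used instead.
```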
So because we can't easily integrate over all of these sources and all of these states numerically, we have to use simulation models. Simulations can account for model evolution, but they simply draw samples from a large distribution of possible evolution paths. The epidemic-type aftershock sequence (ETAS) model is the most commonly used model here, and Morgan will talk about it in detail, I assume. This is a recursive model: each earthquake produces a sequence of aftershocks, and those aftershocks, or at least the larger ones, are capable of producing their own aftershocks, so we have this potential spread of seismicity, and I think after the past few years we all know why we call this an epidemic-type model. Rupture interaction and triggering within a typical ETAS model are defined statistically, not physically, although the UCERF3-ETAS model for California couples ETAS triggering with elastic rebound, which simulates the re-accumulation of tectonic stresses and quasi-periodic behavior for large faults. Physics-based models are also a research topic, but they're not widely used in PSHA. They can incorporate physical concepts such as rate-and-state friction, where fault strength varies as a function of time, as well as elastic Coulomb stress effects and even viscoelastic and afterslip effects. These are very computationally demanding, especially when dealing with viscoelasticity and afterslip, where you more or less have to use finite-element models representing a huge volume of crust and upper mantle instead of simple elastic techniques, so they really require huge resources to operate at large scale. There's lots of room for different combinations of physics and statistics that hit different realism-versus-efficiency targets. To summarize: there's a broad range of time-dependent earthquake behaviors that we see phenomenologically, and we have multiple modeling strategies to deal with these different phenomena, but it's important to note that the physics isn't well incorporated into the models and also isn't fully understood yet, so there's a lot of room for research here. And the few examples we've seen, I think, indicate that until we have time-dependent PSHA that's widely implemented and vetted against all the observations we can bear, we may not really understand how seismic hazard is impacted by time dependence. Okay, thank you.

Thank you, Richard. We're going to move on to the next talk and circle back during the panel discussion for questions. Now I'd like to introduce Dr. Morgan Page. She is a research geophysicist at the U.S. Geological Survey, where she is interested in, not surprisingly, probabilistic hazard analysis and, more broadly, inverse problems in seismology, with a research focus on statistical issues, including rigorously incorporating models and their uncertainties into hazard analysis, as well as kinematic inversions and analyzing non-stationarities in earthquake catalogs. Morgan, please.

Thank you. All right, so I'm going to be talking about seismic hazard from the perspective of a developer of the UCERF model in California. This model, which was alluded to in the previous talk, uses a variety of different data to come up with a composite forecast for California.
The model is currently being expanded to cover the entire western U.S. and will form the western U.S. component of the national seismic hazard maps in the future. It combines geodetic and geologic information to determine the rate at which different faults in California are moving, and it also uses paleoseismic data, places where we've dug up, say, a thousand years of earthquake history at certain spots in California, to determine the rates of prehistoric earthquakes. It also uses seismicity, where recent earthquakes have occurred, all combined to make a composite forecast that is used to set building codes and insurance rates in California. We have an inversion approach that solves for the rate of ruptures on this complex fault system that I'm showing here. I'm not going to go into the details of the inversion, but the main thing is that it takes all the varieties of data I just described and comes up with how often each of the different earthquakes on this fault system occurs, in a way that's consistent with that data. It's a simulated annealing inversion that comes up with multiple models that can fit the data, because it is an underdetermined problem. We have on this fault network about 250,000 possible earthquake ruptures above about magnitude 6.5, with many different ways of linking up different faults, many different magnitudes, and nucleation in different locations. This is a huge change from the way seismic hazard was done in the past in California. On the left I'm showing a 20-year-old model for the Bay Area, with faults very strictly segmented; by comparison, the UCERF3 model for California is shown on the right. The top panel on the right shows all the different faults that can link up with one section of the Hayward fault in the Bay Area. It's most likely, of course, to link up with the faults close by, but every hundred thousand years or so, according to our model, it can link up all the way to the southeast, with the San Jacinto fault in southern California. There are thousands of different earthquakes that can potentially include rupture on this section of the fault; the bottom plot shows just a handful of them, colored by their rate. So it's a much more complex model, but it's our hope that it better approximates the true complexity of the earth, given the fault network we have in California, which can link up in many different ways. In the previous talk the Japanese hazard maps were mentioned. I'm showing here a rather old hazard map with very strict segmentation along the subduction zones, developed before the 2011 Tōhoku earthquake, which unfortunately ignored the segment boundaries that had been drawn and linked up many segments to make a larger and more devastating earthquake than could have happened if those sections had ruptured independently. This is the kind of thing we want to avoid by including multifault ruptures in our model. We know that multifault ruptures happen commonly; for bigger earthquakes, it seems, perhaps even more commonly than single-fault ruptures, given the recent earthquakes that have happened in California and elsewhere in the world. In southern California in the 90s we had the Landers earthquake, which linked up four different faults to form a larger-magnitude earthquake than could have happened if they had ruptured independently, so we include earthquakes like the Landers earthquake in our model.
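As a cartoon of that "grand inversion" idea: UCERF3 itself uses simulated annealing over roughly 250,000 ruptures, but the same logic can be shown at toy scale with non-negative least squares, a deliberate swap for brevity. Every number here is invented; the point is solving for rupture rates that simultaneously satisfy slip-rate and paleoseismic constraints.

```python
import numpy as np
from scipy.optimize import nnls

# Toy "grand inversion": solve for annual rates of 3 candidate ruptures on
# 2 fault sections so that slip-rate and paleoseismic data are matched.
# Columns = ruptures:
#   1: section A alone (1 m slip on A)
#   2: section B alone (1 m slip on B)
#   3: multi-fault A+B (2 m slip on each section)
G = np.array([
    [1.0, 0.0, 2.0],   # long-term slip rate on section A (m/yr)
    [0.0, 1.0, 2.0],   # long-term slip rate on section B (m/yr)
    [1.0, 0.0, 1.0],   # paleoseismic event rate at a trench on A (events/yr)
])
d = np.array([0.003, 0.003, 0.002])  # invented observations

rates, residual = nnls(G, d)         # rupture rates must be non-negative
for i, r in enumerate(rates, 1):
    print(f"rupture {i}: {r:.5f} events/yr (~1 per {1/r:,.0f} yr)" if r > 0
          else f"rupture {i}: rate 0")
print(f"misfit: {residual:.2e}")
```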
This connectivity, this linking up of faults, really affects the final hazard that our model gives. Here, for example, I'm showing the Cucamonga fault section, inside the circle on the right. This is a fault section of the Sierra Madre system north of San Bernardino in southern California. In the earlier, more segmented version of the UCERF model, this fault could only rupture by itself, which meant you couldn't get an earthquake bigger than about magnitude 6.9 on that fault, as you see on the left. In the newer version of UCERF it can rupture with all the colored faults shown on the left; I'm not showing an earthquake there, just the different faults that can rupture in tandem with this fault in one earthquake or another, colored by the rate at which they rupture together. And if you look at the UCERF3 magnitude distribution on the left, it rolls off at a much higher magnitude: you can have earthquakes up to magnitude 8, although they're not very common. The important thing is that both of these models are slip-rate balanced, meaning that if you integrate over the slip in each rupture and how often each occurs, the sum has to be consistent with the overall long-term slip rate of this fault section. So if you allow the Cucamonga fault to rupture together with more faults, you can essentially take care of a lot of that slip rate in larger earthquakes, and you don't need as many moderate-sized, magnitude-six-and-a-half-ish earthquakes. Counterintuitively, allowing larger earthquakes in this case lowers the hazard near San Bernardino, because even though there are bigger earthquakes, they happen less often, and on the whole, for most structures, this is a lower level of hazard. So connectivity is very important for the end results of the hazard model.
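The slip-rate-balance logic behind that counterintuitive result fits in a few lines. The average slips per event below are invented, but the arithmetic is the point: with a fixed slip budget, more slip per event means fewer events.

```python
# With a fixed long-term slip rate, event rate = slip_rate / slip_per_event.
slip_rate = 0.005  # m/yr (invented)
for mw, slip_per_event in [(6.9, 1.0), (8.0, 6.0)]:  # invented average slips, m
    rate = slip_rate / slip_per_event
    print(f"Mw {mw}: slip/event {slip_per_event} m -> 1 per {1/rate:,.0f} yr")
```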
In the more recent versions of UCERF we've expanded connectivity at the higher end as well. We'll actually allow some ruptures to go through the creeping section of the San Andreas, at a fairly low rate, even though there is a lot of creep in this area; this links up northern and southern California, so much bigger earthquakes are possible in the new version of the model than in the previous version. In addition, the Garlock fault can link up with the San Andreas, the different faults throughout the Ventura and LA basins can link up, and the San Andreas can link up with the San Jacinto faults. This led to an approximate doubling of the rate of magnitude 8 and larger earthquakes, which certainly generated a lot of attention, and it really affects the tail end of the hazard distribution and what the worst events predicted by the model might look like; they would affect a much larger area. We're now working toward improving this for the next version of UCERF, which is currently under development. Here on the left is the UCERF3 fault system and the maximum-length rupture that each point can rupture with; you can see the San Andreas system is the most well-connected fault in this network. Kevin Milner and others have been working to make better rules for what ruptures can occur on this fault system. In the previous model many of these rules were ad hoc: we allowed ruptures to jump about five kilometers or so, but we didn't consider how the ruptures themselves were oriented, which matters for how one fault might stress another, either favorably or unfavorably. With the new rupture model we're using insight derived from a physics-based simulator that uses Coulomb stress interactions to determine which types of rupture jumps are favored and which are disfavored, and we're hoping this new fault-rupture model will have more physics in it and be more consistent with what might actually happen in nature. So that's how we develop the fault-based portion of the model, shown here in the middle: we combine all of those different types of data to determine the rates, how often different earthquakes on this fault system occur. But of course not all earthquakes will occur on the known, mapped faults included in our model; there are a lot of small, poorly characterized structures, or structures we simply don't know about prior to the earthquake happening. So we also include what we call background earthquakes, derived from seismicity, and we combine the two to get a total model that includes earthquakes both on and off the known model faults. Now I want to add a note of caution here. The first point is that this model for California really partitions what we call on-fault and off-fault, but off-fault really just means off the known faults; those earthquakes are also on faults. This dichotomy between the two parts of the model is artificial, and it's certainly not something we think nature honors; we fully expect that in the future there will be earthquakes like the Kaikōura earthquake in New Zealand that occur partially on faults we know about and partially on faults we didn't know about or weren't modeling directly. My other point of caution is that what we're using to fill in the off-fault portion of the model is derived from seismicity, from historical and instrumentally recorded earthquakes. For the largest earthquakes, seismicity goes back to, say, the 1800s, and for the smaller earthquakes it goes back to the instrumental era, since the 1930s or so, when we had seismometers detecting these earthquakes. This is a much shorter-term measurement than the fault-based measurements, the slip rates from geodetic and geologic data that go into the mapped faults of the inversion. So we're potentially combining two data sets that are relevant at different time scales, and I'm going to come back to this in the second half of the talk, because I think it's potentially an issue. So those are the basic building blocks of the time-independent version of the model.
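A minimal sketch of that on-fault/off-fault combination, with invented rates, might look like this: a Gutenberg-Richter background from smoothed seismicity, truncated at a moderate magnitude, added to inversion-derived fault rates that dominate at the large-magnitude end.

```python
import numpy as np

# Combine "on-fault" rates (from the inversion) with "off-fault" background
# rates (from smoothed seismicity) into one total magnitude-frequency
# distribution. All numbers invented.
mags = np.arange(5.0, 8.5, 0.5)       # Mw 5.0 through 8.0 in half-unit bins

# Background: Gutenberg-Richter from observed seismicity, truncated at Mw 7.
bg = 10 ** (3.5 - 1.0 * mags)
bg[mags > 7.0] = 0.0

# On-fault: from the rupture-rate inversion; dominates at large magnitudes.
on_fault = np.array([0.001, 0.002, 0.004, 0.006, 0.004, 0.001, 0.0002])

total = bg + on_fault
for m, r in zip(mags, total):
    print(f"Mw {m:3.1f}: total rate {r:.5f}/yr")
```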
From there we add two layers of time dependence to make the final, fully time-dependent version of the model, which is UCERF3-ETAS. The first thing we add, which was alluded to in the previous talk, is elastic rebound along the faults. Even though our model is an unsegmented model with many different styles of linking up faults, we developed a new method during the UCERF3 process that allows us to apply elastic rebound, meaning that after a big earthquake on a fault, that fault is less likely to re-rupture immediately, and the probability then increases with time. Even if earthquakes are only partially overlapping, in our methodology the probability is slightly decreased after a big earthquake, depending on the amount of overlap. The next layer is aftershock clustering, which we add using the epidemic-type aftershock sequence (ETAS) model. This type of modeling uses the four main statistical scaling laws in seismology. The first is Gutenberg-Richter size scaling: most earthquakes are small, fewer are big, and this follows an exponential distribution; it applies to aftershocks as well as all earthquakes. The next is Omori decay, which describes how the rate of aftershocks decays in time. We also use the fact that big mainshocks trigger more aftershocks than smaller ones, and the way aftershock rates decay in space: most aftershocks occur close to their mainshock, with a power-law decay. In this epidemic-type modeling we treat aftershocks like a contagion, where each one can trigger aftershocks of its own, and those aftershocks can go on to trigger further aftershocks in a cascading process; many generations of aftershocks are possible. It's very similar to epidemic modeling of, say, COVID-19. We have a parameter, the branching ratio, that's much like the epidemiological R0: if it's less than one, the aftershock sequence eventually dies out, which is typically what happens. There's a small chance that any given earthquake will trigger an aftershock bigger than itself; in this case we rename the initial earthquake a foreshock and call the aftershock the mainshock, but it's the same process, following the same statistical scaling laws as all the other triggering in the model. A foreshock is really just a mainshock whose aftershock is bigger than itself. We use this in our model, and this is the only model I'm aware of in the world that combines an ETAS model with elastic rebound and with faults; ETAS modeling has been common for several decades, but we're putting it on faults, so we can apply it to fault-based sources. What that means is that after an earthquake like the Ridgecrest earthquakes, on these two cross faults that ruptured a few years ago, here are the primary aftershocks that are triggered (I'm just showing the epicenters), and here are all generations, not just the primary aftershock epicenters but the aftershocks those aftershocks triggered, and further generations down. Here you can see that we can really light up the faults: some of these aftershocks occur on the Garlock fault, and then aftershocks of those large triggered earthquakes produce smaller aftershocks of their own, which is why that spot lights up; less likely, but possible, it can even in some cases trigger the San Andreas or the Panamint Valley system. Using ETAS modeling (this is the average of 100,000 simulations I'm showing here), we can mine those simulations for statistics on how likely certain outcomes would be. For example, here's a 50th-percentile catalog, where we sort the catalogs by the total number of aftershocks triggered; this is a typical scenario you might expect after the Ridgecrest sequence, where approximately one magnitude 6 earthquake is triggered. The 90th-percentile catalog has three magnitude 6s triggered, and you can continue mining: the very worst of the 100,000 simulations is a doomsday scenario where the Ridgecrest sequence sets off the Garlock fault, which then triggers the San Andreas and various faults in the LA and Ventura basins.
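A minimal ETAS-style branching cascade along these lines can be sketched as follows. The productivity and magnitude-distribution parameters are invented, and there's no spatial kernel, time kernel, or elastic rebound here (unlike UCERF3-ETAS); it's only meant to show the recursive, generation-by-generation triggering, including the possibility of a triggered event outgrowing its parent.

```python
import numpy as np

rng = np.random.default_rng(42)

b = 1.0                 # Gutenberg-Richter b-value
alpha = 1.0             # productivity scaling with parent magnitude
k = 0.008               # productivity constant (invented)
m_min, m_max = 3.0, 8.0

def n_expected(m):
    """Expected direct aftershocks above m_min for a magnitude-m parent."""
    return k * 10 ** (alpha * (m - m_min))

def sample_magnitude():
    """Draw from a truncated Gutenberg-Richter distribution (inverse CDF)."""
    u = rng.random()
    return m_min - np.log10(1 - u * (1 - 10 ** (-b * (m_max - m_min)))) / b

def cascade(parent_mag, generation=0, max_gen=50):
    """Recursively spawn aftershocks; each child can trigger its own."""
    events = []
    if generation >= max_gen:
        return events
    for _ in range(rng.poisson(n_expected(parent_mag))):
        m = sample_magnitude()
        events.append((generation + 1, m))
        events.extend(cascade(m, generation + 1, max_gen))
    return events

main = 7.1  # Ridgecrest-like mainshock
seq = cascade(main)
print(f"{len(seq)} triggered events above Mw {m_min}")
if seq:
    gmax, mmax = max(seq, key=lambda e: e[1])
    print(f"largest triggered event: Mw {mmax:.1f} (generation {gmax})")
    # If mmax exceeded 7.1, we would relabel the 7.1 a "foreshock".
```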
That's not very likely (it's the very worst catalog of all of them), but we can get an idea of not just the typical situation following a Ridgecrest-like earthquake but also what the tails of that distribution might look like statistically. One worry of mine is that in these types of models, as I mentioned earlier, we're combining data at different time scales, and that really influences how the ETAS model in UCERF behaves. For example, here I'm showing various faults in southern California compared to epicenters of recent seismicity from the last few decades. This fault here is the San Andreas fault, the fastest-moving fault in this system; based on that, you might expect it to have the most earthquakes, but there are parts of the San Andreas, which I've highlighted in these ovals, that actually have very few earthquakes currently. You can interpret that in one of two ways: either the rate on the San Andreas is currently low at all magnitudes, small and large, or the San Andreas is a different type of fault that only hosts large earthquakes rather than small ones. The UCERF model assumes the second, and because of that, the ETAS triggering behaves very differently than a vanilla, basic ETAS model would. Here I'm showing the distributions of aftershock sizes that would be triggered by two magnitude 4 earthquakes, one on the San Andreas fault in the relatively quiet area, and the other on the San Jacinto fault, which has much more activity. These faults have similar slip rates over the long term, but because the seismicity rates are so different, the model produces very different size distributions, and different predictions of what aftershocks these different mainshocks might trigger. You might think a typical ETAS model would say that any magnitude 4 earthquake, no matter where it is, has a similar propensity to trigger a given number of aftershocks; UCERF says no, if you have a potential foreshock somewhere that's very quiet but has a very large rate of, say, magnitude 7 events, that's a case where you should be more worried. This comes back to the characteristic earthquake hypothesis, which again points to a mismatch between the recent rate of earthquakes defined from seismicity and the longer-term rate of earthquakes from paleoseismology. Karen Felzer and I have argued in our paper that this could also simply be a change in rate; so it's either a break in scaling or a change in rate, and the community is really undecided as to which it is. But it makes a huge difference, say an order of magnitude difference in this case, for our estimate of the triggering potential of a moderate-sized earthquake, so it's very important to figure out to what extent this is a break in scaling or simply a change in rate. So again, our goal is to produce these fault-based forecasts that we currently have for California, and for the entire western U.S. for all the faults shown, and we want to go all the way to loss estimates.
As I showed with the ETAS simulations following Ridgecrest, we can get more than just a typical or mean scenario following a mainshock; we can give a whole distribution of possibilities. Here I'm showing what a typical year of insured losses in California might look like. The way to read this is that in a typical year there's a one percent chance of 30 billion dollars of insured losses, but if a magnitude 7 earthquake has just happened on the San Andreas fault, there's a one-in-ten chance of 30 billion dollars of losses. So these tail risks get much higher, and they're really important to quantify precisely for things like insurance companies, which have to buy reinsurance to protect themselves from these kinds of losses. I'm sure in the next talk Matt will have more examples of ways in which these probability gains are actionable, but they can be, and quantifying the time-dependent hazard accurately is very important for applications like this.

Thanks, Morgan, that was fantastic. We're going to move on to our next speaker, Matt Gerstenberger. He's a seismologist and principal scientist at GNS Science in New Zealand, where he focuses on earthquake forecasting and seismic hazard modeling, with a particular interest in understanding and quantifying uncertainties, developing testable models and actually testing them, and developing methods for propagating uncertainties. He has actively worked in seismology in the U.S., Japan, Europe, and Australia, and is now leading the development of the New Zealand National Seismic Hazard Model. Matt?

Thanks very much, Diego. Let me get my screen up here. Good morning, everyone. My talk will be a little different from the previous two: I'm not going to get into the details of the models themselves, but I'll talk about some of the modeling we've done in the last 10 or 15 years and how the results have been used in various decision making around New Zealand. First, the last 20 years in New Zealand have been pretty busy in terms of earthquake activity. This is what we've seen in the last 20 years; contrast it with roughly the previous 50 years, when there was one moderately damaging earthquake. Most of these events caused at least significant shaking, and quite a few were actually damaging. When I was putting these slides together, I wanted to identify which earthquakes had occurred in clusters, and I put those in red, and I quickly realized that almost all of them were in some sort of cluster; just two, shown in black, weren't part of a cluster, and this black one is actually part of a longer-term cluster that started earlier. This clustering underlines a lot of what I want to talk about today: in New Zealand we see a lot of long-term clustering, with big interactions of larger earthquakes on yearly and multi-decadal scales. The type of modeling we've done in New Zealand falls into four basic types, some of which have already been introduced, though with a slightly different take. First, we start with statistical models of earthquake occurrence; these just give the magnitude, the rate, and the location, and say nothing about shaking. The next layer of information we can put on top of that is the hazard and ground-motion forecast, where we add in the shaking. The next layer on top of that is when we start to add in impacts, what happens when these earthquakes occur. And the final one is scenarios.
Scenarios are more subjective information: examples of what these events might look like, what we might expect to occur should one of the particular earthquakes we've looked at happen. What I want to talk about today is mostly in this space, really focused on the ground-shaking forecasts, while getting a little bit into the risk and also into the statistical models. We've been putting out this type of information for the last 15 or 20 years, and we've done it in short-term, medium-term, and long-term senses; I'll talk about this a little more, but it underlies everything we're doing. This is just a list of some of the main decision making that has been informed at different scales, starting with building inspections and so on in the short term, immediately following a big event, then various other decisions as you get into longer-term forecasts, out to some very substantial long-term decisions affecting buildings around New Zealand. I'll give a couple of examples of those related to the Kaikōura earthquake, which has been mentioned, and the Canterbury earthquake sequence. This is all I'll say about the modeling, but as I've mentioned, we talk about three different timescales. First, the aftershocks: this is an image of Fusakichi Omori, who in the 1890s first described this characteristic behavior of aftershock sequences. Then there's the medium-term clustering, which is what I showed in that first slide; this is behavior first identified by Frank Evison and David Rhoades back in the 1980s, and we still use a lot of the basic scaling ideas they came up with; they've been developed a little since then, but that's mainly where things stand. And then we bring all of that together with long-term, time-independent hazard forecasts. I'll note that "long-term" and "time-independent" often get used interchangeably, and that can be a bit confusing; it's not necessarily what we mean. An important aspect is that in these time-independent forecasts the earthquakes are always assumed to be independent of one another in space and time, so very different from the other two components, and certainly in New Zealand, as I showed in that first slide, we have strong evidence that the rate variability we get is much greater than what you would expect from the simple Poisson models used for these time-independent forecasts; I'll talk a bit about that. So first, starting with the simple models at the top, the statistical models: these are simple forecasts that we can put out, based on the aftershock modeling or the combination of all the models, immediately following a mainshock. This is one that we actually put out pretty soon after the 2016 magnitude 7.8 Kaikōura earthquake. It gives basic information: say, for magnitude 5.0 to 5.9, in the next month there's a 73 percent probability that we'll have one or more events, and a reasonable range would be zero to seven, so quite broad ranges on that. And it goes all the way up to magnitude 7 or greater, and we can do this for different time periods.
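Forecast probabilities like that 73 percent relate to an expected number of events through Poisson statistics. In this sketch the expected count is invented and chosen to reproduce the quoted probability; the actual published numbers, including the broader quoted ranges, come from the full aftershock and clustering models.

```python
import numpy as np
from scipy.stats import poisson

# Relating an expected number of aftershocks to a forecast-table probability.
n_expected = 1.3   # invented: expected M5.0-5.9 events in the next 30 days

p_one_or_more = 1 - np.exp(-n_expected)            # P(at least one event)
lo, hi = poisson.ppf([0.025, 0.975], n_expected)   # central range on the count
print(f"P(>=1 event) = {p_one_or_more:.0%}")       # ~73%
print(f"95% range on the count: {int(lo)} to {int(hi)}")
```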
The next level of information we add on top of this is the shaking, to support other types of decision making. This is just one example of the shaking maps we might produce, again for that same forecast period, in this case a 30-day forecast of shaking. This figure shows more or less the entire aftershock region; the mainshock was down here, but it ruptured all the way up to here, and aftershocks occurred all over the place. To put it in context: Wellington, where I'm sitting right now, the capital city of New Zealand, is right here. You can see it's quite far from where the mainshock occurred, but there was actually significant damage in Wellington, and quite a few buildings had to be taken down as a result. The context of these figures was to show the probability of experiencing, again in the next 30 days, the same level of shaking that Wellington had during the mainshock, and we did this for different time periods. In Wellington that probability was three to five percent, which might seem pretty low, but that's three to five percent in 30 days, significantly higher than it was prior to the Kaikōura earthquake, and you can see that closer to where the mainshock actually occurred, the probabilities of having that Wellington level of shaking again were very high, at more than 50 percent. At the time, lots of decisions were being made: many buildings had come down or needed to come down, lots of buildings needed to be inspected, and in New Zealand there have been big efforts in recent years to retrofit buildings determined to need it. Just prior to the Kaikōura earthquake, new legislation had come in, the earthquake-prone building act, as the language goes here. Under it, older buildings are compared to the new building standard, and if the shaking they were determined to be able to withstand was roughly 33 percent or less of what a new building should withstand, they were considered unsafe and needed retrofitting; in Wellington, owners were given about 10 years for that retrofitting to be done. So we produced plots showing the probability, in Wellington, of a building experiencing shaking greater than this 33 percent level that those particular buildings were determined to be able to withstand, looking at a three-month time period for this one. It shows that in those three months there was a five percent probability of shaking likely to collapse such a building. That might sound like a low probability, but it's actually quite high when you consider that it's the probability of collapsing the building. And this shows the relative risk compared to before the Kaikōura earthquake: it was eight times higher for Wellington at this particular point in time.
The next step was to take that from Wellington to the entire region. This is actually a year later now, and it shows, over the entire Kaikōura aftershock region, the relative increase in the probability of exceeding that 33 percent shaking threshold. You can see that in Wellington it's down to one percent, fairly low at that point, but it's up to 10 to 15 times in the main aftershock region; again, this is the probability of shaking exceeding what might bring these buildings, particularly unreinforced masonry buildings, down. Based on this information, the government decided to take that 10-year mandatory retrofit window and shorten it to one year, so all these buildings suddenly had only one year to get the retrofitting done. This applied over the entire region, the government kicked in money to do it, and they actually had one hundred percent success in getting those retrofits completed. Okay, so I've talked about some short-term forecasts and a medium-term shaking forecast; now I'll talk a little bit about the long-term forecasts. This follows the Canterbury earthquake sequence, a sequence that is still going today but that started in 2010; between 2010 and 2012 there were multiple strong shaking events in Christchurch that collapsed buildings, 180 people died, and there was an expectation of that shaking continuing. So, using the PSHA methods that have been talked about already this morning, we created long-term hazard maps that included the short-term, medium-term, and long-term clustering ideas. This is just an example of one of the maps: it shows the peak ground acceleration with a 10 percent probability of being exceeded in 50 years, which is a fairly typical product used for building-code-related decisions in New Zealand. There's not a lot of detail in the figure, but it's more or less showing the aftershock zone, and if we had the same figure from prior to the initiation of the sequence, this whole region would have been similar to this background color here, which was considered a moderate or low hazard area in New Zealand, so it wasn't a total surprise that these events happened here. To use that for decision making: one of the first places it was used was in the Christchurch central business district, where so many buildings came down or had to come down because of the multiple events that there was a huge amount of construction going on, and the question was whether the new buildings needed to be built to a higher standard than before the Canterbury earthquake sequence. They used the shaking information as the basis for that. As I mentioned, it's this 10-percent-in-50-years probability of exceedance of a certain level of ground shaking that is used, and this figure shows how that particular value changes through time in the time-dependent model: in the first year after the model was run, the probability of that shaking level was extremely high, and then it decreases through time until it gets down to a more static background level after a number of years. It's this information that was then used to revise the shaking requirements for new buildings in Christchurch.
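As an aside, the "10 percent in 50 years" metric maps to an annual rate and return period under the usual Poisson assumption; this small sketch shows the standard conversion, which yields the familiar roughly 475-year return period.

```python
import numpy as np

# "10% probability of exceedance in 50 years" under a Poisson model:
# P = 1 - exp(-T / Tr)  =>  Tr = -T / ln(1 - P)
T, P = 50.0, 0.10
annual_rate = -np.log(1.0 - P) / T      # ~0.0021 per year
return_period = 1.0 / annual_rate       # ~475 years
print(f"rate = {annual_rate:.5f}/yr, return period = {return_period:.0f} yr")
```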
This is what the requirement was prior to the sequence, and it was revised to this level, so that's the relative amount of shaking the new buildings needed to be able to withstand: about a 35 percent increase, which is quite significant, and it still holds today. Some other things this modeling was used for: in this ongoing sequence there was a significant amount of liquefaction in the Christchurch area, and that liquefaction caused a lot of land damage and a lot of structural damage, so decisions needed to be made about whether buildings should be in certain locations. These red zone areas here are where significant liquefaction occurred, and we actually knew that in the 1800s all of these areas had experienced significant liquefaction that would have done exactly the same thing, so there were questions about whether they should really be rebuilding in these particular zones. Based on the shaking information, plus other things, the decision was made that these areas, which were fairly heavily populated residential areas, should no longer be lived in; the risk of having to redo all the building was just too high. So they were essentially vacated, and I think they've mostly become parks at this point. The second part was rockfall. This is an old extinct volcano right next to Christchurch, and there were significant rockfall events during multiple shaking episodes in this earthquake sequence; we had boulders the size of sofas going through people's houses. The decision making here, which I don't have time to go into, was a little different: it got into much more risk-based decision making, looking at fatality risk, the probability of loss of life, and if that exceeded a particular threshold, based on commonly used thresholds which I think were on the order of 10 to the minus 6 probability of loss of life, then it was determined that these red zone areas should also be vacated due to rockfall, landslides, and so on. Okay, now I'll switch gears a little, as I mentioned at the beginning, and talk about another form of time dependence, thinking about the long-term rate. The PSHA methods that have been introduced so far depend on the following assumptions. The first is that the earthquake rate is Poissonian, meaning the rate is constant in time: it's the same over these 300 years as it was over the 300 years before that, and so on, and it doesn't vary much during that time, or only within some very constrained variability. Another key one is that the longest earthquake catalog, or whatever dataset we have available to constrain the hazard, is representative of the long-term rate. I won't go into the second one as much, but it is actually pretty critical in terms of understanding the uncertainty; it's a pretty big assumption. Okay, this slide is pretty complicated, but there are just a few key messages that I hope I can communicate well. The key question we wanted to look at is how variable the rate is from one time period to the next. We've looked at a number of places around the world; I'll show New Zealand and Japan.
The way this works, say if we look at 50 here: we grabbed 50 events greater than magnitude 4 from the catalog and looked at how long they took to occur, say 18 months. We then looked at the next 18 months, the same length of time, counted the number of events that occurred in that period, and plotted that as one point here. If we had 50 in the first time period and 50 in the second, that falls right here in the center. Sorry, this is a log scale; the axis shows the log ratio, so zero means the two periods were the same, plus one means you had 10 times more events in the second time period, and minus one means you had 10 times fewer. So a point out here would mean you had 50 in the first period and 500 in the second, or only five in the second. It shows there is quite significant variability, and the important part is that these are the 99 percent confidence bounds on our observations. So there's a significant amount of variability, and the Poisson assumption, which is the basis of pretty much all hazard calculations, is down here: we have almost an order of magnitude greater variability in the observations than we use in typical PSHA calculations. This becomes particularly important when we're looking at low-seismicity areas, and it's also not the same everywhere around the world: in California and Italy we don't see quite the strong variation that we see in New Zealand and Japan. So we're currently looking at ways to get this into the hazard modeling and considering alternatives to the Poisson assumption.
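A rough sketch of the counting experiment just described, under my own assumptions about the window bookkeeping: split the catalog into blocks of a reference number of events, count events in an equal-duration follow-on window, and look at the spread of log ratios. A simulated Poisson catalog gives the tight baseline the observations are compared against.

```python
import numpy as np

def log_count_ratios(event_times, n_ref=50):
    """Split a catalog into consecutive blocks of n_ref events, then count
    how many events fall in the immediately following window of equal
    duration; return log10(count_next / n_ref) for each block."""
    t = np.sort(np.asarray(event_times, dtype=float))
    ratios = []
    i = 0
    while i + n_ref <= len(t) - 1:
        t0, t1 = t[i], t[i + n_ref - 1]
        t2 = t1 + (t1 - t0)            # equal-length follow-on window
        if t2 > t[-1]:
            break                      # catalog does not cover the window
        n_next = np.searchsorted(t, t2, side="right") - (i + n_ref)
        if n_next > 0:
            ratios.append(np.log10(n_next / n_ref))
        i += n_ref
    return np.array(ratios)

# Under a stationary Poisson process the ratios cluster tightly around 0;
# a real catalog, with clustering, shows a much wider spread.
rng = np.random.default_rng(0)
poisson = np.cumsum(rng.exponential(scale=1.0, size=5000))
print(np.percentile(log_count_ratios(poisson), [0.5, 50, 99.5]))
```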
This is my last slide, and I'll just sum up the top two points. The key thing is that we can do these time-dependent hazard calculations and put them out in lots of different ways, but for them to be used, the users need to be ready to use them. A couple of challenges we've run into, particularly with the long-term forecasts we've done for Christchurch and for Kaikōura: the end users may use models that say, if we get this level of shaking, what's going to happen to a particular building, and those models assume they're only going to get one earthquake, which may or may not collapse the building. But in these time-dependent models we might actually get multiple earthquakes capable of collapsing a building, so in a particular loss model of financial damage you might get a particular building collapsing multiple times, which obviously doesn't make sense if you're talking about a short time period; you're inflating your losses, so your loss estimates won't be very good, they'll be too high. So there's work that needs to be done not just on the development of time-dependent hazard models but on how they're used. I think I will stop there; thanks very much.

Thanks Matt, that was great. I think it's time to move on to the panel session with Morgan and Richard and Marco. I'll ask the committee first if they have any questions, but maybe I'll use the power of the microphone to ask the first question, which is also one of the questions in the Q&A. You couldn't see it, Matt, but everybody's eyebrows in the room went up when you talked about how the time for mandated retrofits went down from 10 years to one year because of the forecast. For us in the US this is a problem, especially in places like the Pacific Northwest where a lot of unreinforced masonry structures persist. So one of the questions from the audience was: what advice do you have for compelling the government to help out in some meaningful way with issues like this one? Well, that's a hard question; I'm not sure I really have a good answer. I think we were lucky in New Zealand that our government was open to these ideas. Probably where that came from is that they had seen the significant damage to these types of buildings in Christchurch just five years before, so they were aware of the consequences and the significant damage that can happen to unreinforced masonry buildings under that type of shaking. One of the challenges we did have, which isn't really answering the question: Wellington does have a significant number of these unreinforced masonry buildings, but the frequency content of the shaking in Wellington from the Kaikōura earthquake was actually quite long period, that's where the strong shaking was, so the unreinforced masonry buildings came through with no problems whatsoever. We had to do a lot of work educating the decision makers, and I think our building authority did a lot of work on this, explaining that different earthquakes affect buildings in different ways: the reason we didn't see damage to these unreinforced masonry buildings in the Kaikōura earthquake is the difference in the type of shaking, and we have the examples from Christchurch where shorter-period, high-frequency shaking does bring them down; we know that happens, and we have good catalogs of these buildings around the city. Right, that's not very helpful, but thank you. By the way, if somebody else on the panel wants to fill in on any of these questions, please just raise your hand and I'll call on you. Rangan, you have questions? Thanks; can you hear me? I unmuted myself now, okay, sorry about that. So Morgan, actually my question is to you: when you showed the western U.S.
and California, and all the earthquake distribution, how does this affect the end results? When you were showing the map, I started thinking about what other seismological findings can help you, such as source parameters like energy, because that's a big problem we're trying to understand: we have the magnitude, and then there's energy. If you put those together, can that help identify the problem, and what are the other research needs that could help in formulating this? Well, certainly the whole model is moment balanced, so we do have a target amount of moment that all the earthquakes in the model have to basically add up to within a large region. That's why, if you do something like remove large events on enough faults and don't allow them to link up, you need a lot more moderate-size events to match that moment. I do think the issue, though, is that even though we are trying to constrain moment, it has huge uncertainties. One way to get the moment is just from the geodetic models, the total amount of strain in the system, but that could easily be off by a factor of two or three depending on things like what the coupling is and how much of it is aseismic; so even that, probably our most direct measure, can have really big uncertainties. The other thing we try to look at is historical seismicity: if you add up all of that, what's the total moment of the earthquakes we've observed? But as Matt pointed out in his talk, that can also be dramatically off, probably not as much as an order of magnitude like he was saying, but it could just be that the last 150 years or so on the west coast have been anomalously quiet. There's some evidence of that if you look at all the paleoseismic trenches where our model wants to put earthquakes and compare that to the observed rate, which is that essentially no earthquakes have hit those trenches in the last century; it does seem like there's something anomalously low about the last century or so in California, and also in New Zealand, strangely enough. So I think there's a lot of work to be done on understanding exactly how we can better constrain that energy balance in the models.
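To illustrate the moment bookkeeping Morgan describes, here is a minimal sketch comparing a geodetically implied moment budget against the summed moment of a catalog via the Hanks-Kanamori relation. The shear modulus, fault dimensions, slip rate, and stand-in catalog are all hypothetical, and actual model balancing is far more involved.

```python
import numpy as np

MU = 3e10  # shear modulus, Pa (typical crustal value)

def geodetic_moment_rate(fault_area_m2, slip_rate_m_per_yr, coupling=1.0):
    """Target seismic moment rate (N*m/yr) implied by geodesy:
    M0_rate = mu * A * s * coupling; coupling < 1 allows aseismic slip."""
    return MU * fault_area_m2 * slip_rate_m_per_yr * coupling

def catalog_moment(magnitudes):
    """Total seismic moment of a catalog via Hanks-Kanamori:
    M0 = 10**(1.5*Mw + 9.05) in N*m."""
    return np.sum(10.0 ** (1.5 * np.asarray(magnitudes) + 9.05))

# Hypothetical check: a 200 km x 15 km fault slipping 20 mm/yr,
# compared against 1,000 years of a synthetic stand-in catalog.
target = geodetic_moment_rate(200e3 * 15e3, 0.02) * 1000.0
rng = np.random.default_rng(1)
mags = rng.uniform(5.0, 7.5, size=300)
print(f"target/catalog moment ratio: {target / catalog_moment(mags):.2f}")
```

A ratio far from one signals the kind of imbalance Morgan mentions, which a model resolves by adjusting event rates, or which may reflect the factor-of-two-or-three geodetic uncertainty itself.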
Torsten, you have a question? Another question for Morgan: I wonder if you could explain a little more how you go from the ETAS-type statistics, which don't have geometrical information, to the triggering scenarios where all of a sudden a fault lights up; how do you go from a blob to a fault-like scenario of, say, a California-wide rupture? Yeah, so the way it works is that the ETAS model has a certain number of aftershocks drawn from a random seed, first-generation aftershocks based on the size of the main shock, and then it picks locations with a distance decay, so most of them will be close in. So it basically picks the location, and for that location we have a set of sources: some of them will be background sources that are off the modeled faults, and some of them will include long ruptures. What we assume in UCERF3 is that for every rupture along a fault, which can rupture in many different ways, the hypocenters are basically uniformly distributed; we don't assume it's more likely for a given section rupture to nucleate at one end or the other, we just assume the simplest thing, which is that hypocenters are uniform. Those are all UCERF3 ruptures, and we pick one in the ETAS model. It means those aftershocks are going to be preferentially located in places where the model has ruptures, unlike a very basic vanilla ETAS model where, if you have a main shock, the aftershocks are just spread out in a spatially isotropic way. In UCERF3 that isotropy is broken based on where the background model puts the earthquakes, which tends to be on faults, and that's how you get those beautiful pictures of the faults lighting up: it's that isotropy breaking. I think, Jeff, do you have a question? Yeah, so I'll give this one to Richard and Marco, but the others might want to chime in as well. In terms of moving from time-independent to time-dependent models, limiting factors could be data availability, the physical understanding of interactions, or computational limits. Which of these are actually the biggest problems might vary from place to place, but is it lack of data, lack of physical understanding, or lack of computation? Marco, do you want to start? I can share my thoughts, but you go first, Richard, and then I can add to that. Okay, so I will say that I think it's the lack of a decent computational framework that's widely applicable. ETAS is a great start, and the work that Morgan and Ned and Kevin have been doing in California is amazing, but everyone more or less has to go through and implement that from scratch if they're going to do it, and they may want to do something else. California, of course, has about as much data as you can reasonably expect on Earth, similar to Japan; in the absence of data you can use priors, you can use other ways of bounding things or filling in data gaps. Computational complexity is important, but that goes with the computational framework you use. I think really just developing the computational models and seeing what happens once you start doing that, then you can look at where your pain points are and adjust things; but just getting started with it all is probably the biggest roadblock, at least for us. We'll probably be doing it in the next couple of years, so I might have a better idea of what I'm really banging my head into in two years. Marco, do you want to add to that? Yeah, well, the first limiting factor, I would say, is unfortunately time, because we are definitely extremely busy. But of the things you mentioned, Jeff, I would say that data, in my opinion, is the most limiting one. When you move away from California, Japan, New Zealand, and a few other places, the amount and the quality of information available is an order of magnitude lower, and I think we already have problems in developing, let's say, more standard time-independent models. When you go to time-dependent models, the lack of information clearly poses big problems in the calibration of the earthquake occurrence models, and the more we learn in well-studied regions like New Zealand, as Matt was saying about, for example, the stability of the rates through time, the more this causes problems in areas where the information is far scarcer than in those regions, for the calibration of the models.
Mark, do you have a question? Yeah, well, Jeff somewhat asked mine, but this is for Richard. You mentioned these process-based models and were talking about the computational challenges, and my question spins off Jeff's a little bit: where do you see the next steps? Are we limited by the physics we're trying to put into those models, or is it really a computational challenge, having to go to these complex finite-element-based models for viscoelasticity and rate-and-state friction? I'm wondering where you see the breakthroughs in that area coming and what the limitations are. Where I think the optimal next path lies is in how we use these models. I don't think that taking a straight physics model and letting it run for a zillion years is how we want to do PSHA. What we want is to be able to use physical models and transfer them into statistical models that we can apply. So, like Morgan mentioned with RSQSim, it's getting better ways of understanding stress transfer that can tell you, from a statistical standpoint, if fault A ruptures, whether fault B is going to be positively or negatively stressed by that individual rupture, and understanding those conditional dependencies better. Being able to develop Green's functions, or different statistical approximations based on physics; a better understanding of fault reloading from tectonics versus viscoelasticity versus afterslip versus instantaneous elastic Coulomb stresses. Understanding the relative combinations of all of those lets us implement reloading in a statistical manner that should give a better approximation of the different physical mechanisms. I want to ask a question before I let Torsten ask his, and it's that nobody spoke about geodesy. I'm thinking specifically about slow slip events; there's compelling evidence that they play a role in moderating the earthquake cycle. Maybe I'll point this to either Morgan or Matt: if you want to build the seismic hazard map for Cascadia, you have to account for that, and in Kaikōura there were very big slow slip events following the main shock. How do we deal with this extra phenomenon? I think Matt should answer that, because they dealt with it at Wellington. Yeah, so, as you mentioned, post-Kaikōura. We have regular ongoing slow slip around the Hikurangi interface, with three or four patches that go regularly at intervals from six months to some years. A key part of this is that there's a locked patch, right below my feet right now, that in the 25 years we've had geodetic data for the Hikurangi has never moved, while everything around it is moving. Post-Kaikōura we saw something we'd never seen before: all of those slow slip patches started moving together, so there were questions being asked about what that implied for the locked patch beneath me. There aren't any existing methods for this; it isn't a problem that anybody had really considered before.
So we put together some fairly simple physical and statistical models and looked at evidence from the catalogs we had, combining slow slip catalogs with triggering potential and so on. But there's clearly a lot of work that needs to be done there, and I would hope some of the dynamic rupture models can get a bit more into that space; I think that's where we need to head for that sort of work. Thanks Matt. Torsten? I'm somewhat asking the same question, about our ability, or the absence thereof, to link the statistical behavior of the fault system to its present-day state. For example, Morgan mentioned there are these well-known patches along the San Andreas that show way less seismicity than others, and there are physical models saying this could mean something about where the fault is in its cycle, say in the rate-and-state framework. I was wondering, given the uncertainties, where you think the specific opportunities are to go from what the system tells us right now to inferring, within uncertainties, what that means about the state. There are different ways: one is the history-dependent viscoelastic loading, another is maybe a fault constitutive law. Where are the opportunities to advance beyond saying, the system should do that, right now it doesn't look like it, what does that mean? This is to everybody, but in particular to Morgan, following up on your last couple of slides where you had already expressed some ideas. Yeah, I mean, basically everything we're doing in the model is statistical currently, although we are using the simulators for insight. There has been a lot of talk about, in the future, having RSQSim be a branch of the logic tree of the model, if we can get to the point where we trust it enough to use it for that; and there has been work taking RSQSim all the way to making a hazard map from it, which actually looks pretty similar to UCERF3, in part because the attenuation relations smear everything out. But I don't know that anyone has dived into using it in a time-dependent way; that would just be using RSQSim for the time-independent portion of the catalog, running it for millions of years and taking some average after you spin it up. There has not been much work on actually putting RSQSim, or any simulator, into the state we see now, or running it in such a way that it has huge amounts of variability and then mining that variability to pick out the parts that look like today. Maybe that is the way you could go about it, but how do you get the whole state to look like it does today? Maybe one part of one catalog looks right, say the Carrizo looks quiet, but then there's a whole other part of the catalog that doesn't. And you don't really get elastic rebound; is there a way to put in historic earthquakes and get the kind of shutdown that looks like what we see today? I don't have any way to force the model to do that, or to mine it in a way where, over the whole large region you're considering, it would look like it does today. Yeah, but I guess, turned around: does the physical model that underlies the statistical model that underlies all of the hazard assessment show the sorts of states we're seeing right now?
It is very hard, yes. It is very hard to get the level of rate change you would need to explain why, 150 years later, those areas on the San Andreas are still that deficient. You can easily get a factor of three rate change from just an ETAS model with a Poissonian background; for a factor of 10 you're going to have to throw out the Poissonian background. You need something else, some physics in there, or maybe a time-varying background rate, which is not really a background in the strict sense, or the kind of thing that was alluded to earlier. We don't have that model yet; with aftershock triggering alone you need something else. As I showed in my paper, with a lower b-value in aftershock triggering you could barely explain it, but it's really kind of a just-so story: you have to lower b near the San Andreas and put in the maximum variability from aftershock triggering. It's close, but it's hard to explain without something else, which is why we don't assume it's a rate change; we assume it's a break in scaling. If I can add to that briefly: the one thing that I, and I think many of us, would like to see is the physics-based models run in ways that can make generalized predictions about the sorts of states we observe, and comparing those to observations, so we can understand exactly what physics we would want to add to replicate things like faults looking quiet over long periods of time, or plate boundary faults that don't show much seismicity. Some physical mechanisms may predict that and some may not, but they may require a certain amount of complexity, dealing with complex rheologies, lots of fault interaction, and so on, in order to be realistic and useful. So I think that, from the geodynamical modeling community, some hypothesis testing and prediction making could really help us understand the most useful avenues, and which paths may not be as useful, in the future. I want to make sure I ask one question from the audience before we go on to the break. Luciana asks whether, at subduction zones, any of the models consider stress transfer from other phenomena such as intermediate-depth earthquakes, or only the effects of rupture or non-rupture of adjacent segments. I don't know if Morgan or Matt wants to tackle that one. We actually don't consider that in UCERF3; subduction there is handled entirely separately, unfortunately; it's just a deficiency of the model. And in New Zealand we can't ignore it, unfortunately; it's one of the largest contributors to our hazard. But for the basic PSHA work we're doing, the ruptures are essentially treated independently, so what you're asking about is not handled. In the slow slip work that I talked about, trying to understand the triggering potential, we did consider the effects you're talking about, with fairly simplistic models; that was a key part of what we were looking at. Okay, thank you. Well, I want to thank all the speakers, Marco, Morgan, Matt, and Richard. We're going to take a short break until 2:50 p.m. Eastern time, and we will reconvene then. Thank you.

It's my pleasure to welcome you to the second session this afternoon. We are now connecting the time-dependent hazard to loss, impacts, and risks. We have great speakers and really exciting topics coming up. Our first talk is by Dr. David Wald and Dr. Kishor Jaiswal.
Dave is a research geophysicist at the U.S. Geological Survey. He's involved in the research, development, and operations of several real-time earthquake information systems at the USGS National Earthquake Information Center; he developed and manages ShakeMap and Did You Feel It?, and is responsible for developing other systems for post-earthquake response and pre-earthquake mitigation, including ShakeCast. Kishor is a research structural engineer at the U.S. Geological Survey. He leads the development of the PAGER system's earthquake casualty and economic loss estimation models and the development of earthquake-risk-related products for buildings and critical infrastructure; he has also contributed to GEM development efforts by participating in a number of GEM's earthquake-risk-related projects. Dave, Kishor, over to you. Thank you; let me share my screen. How does that sound, how does that look? Okay, good. All right, thanks; this is a great opportunity and we really appreciate the chance to do this. I'm a seismologist, Kishor is a structural engineer, and we're going to help take this from the realm of the hazard into the potential impacts, and to some of the uses and some of the users; Matt Gerstenberger already started down that path, and it really is great to see the presentations on the hazard front. I should say this is not exactly our area of specialty, so rather than focus on the systems we work on directly and connect those to time-dependent hazard, we're going to take a slightly different approach: we're going to take what we know about losses and risks and try to put together a small book called Time-Dependent Hazard, Loss, and Risk for Dummies. The reason is really our own self-edification, to understand the relationships of time-dependent hazard and the uses that are out there; I haven't seen a lot of collected information about this, so we're going to try to put it together ourselves. Before I start, the usual disclaimer: we're going to mention some firms and companies working on this, which doesn't represent an endorsement by the US government, and our opinions are our own and don't necessarily represent the USGS. The table of contents: a brief run through financial decision making, then some examples and time frames of time-dependent loss. Everyone who has talked about this has talked about different time frames, and I'm looking at it from an earthquake cycle perspective, so decades, centuries, and longer; induced earthquakes, which can last for months and years depending on the injection rate and other things; earthquake sequences, as we've discussed; and then aftershock sequences. I think that's an easy way to break up the time scales in terms of how they're used. I'm not going to go through these acronyms, but a couple of important philosophy entries. One is that we've seen loss equals hazard times exposure times vulnerability, but I want to make sure we're clear that risk involves a probabilistic hazard. Kishor and I do a lot of loss estimation where we have the hazard at hand: it's a ShakeMap, an uncertain hazard, and we use that with the exposure and vulnerability to get an uncertain impact or loss. That's challenging enough; when you put a probabilistic hazard in front of it, you're adding an additional uncertainty term. We have to recognize that the loss component itself is very challenging, so we need to think about these things.
I should also mention that reinsurance is insurance for insurers, and the important point is that insurance is based on the statistics of large numbers and more regular occurrence, while the reinsurers are really interested in the tail end of the loss curve; they're very interested in these low-probability events, and getting that right is most important to the financial sector. A lot of the technology and analysis in this realm is actually done in the financial sector. We've handed our ShakeMap and PAGER tools off to a number of different financial entities, and I just like this slide because it gives us a sense of who's using them for what: disaster aid and response and media presence are clear, but a lot of our users are in this risk and financial risk management realm, and that's again a lot of where this research is being done. So the insurers, the reinsurers, and all the risk modelers involved with getting rates out of these time-dependent or time-independent models are really thinking about this problem a lot and applying it in a real-world sense. One thing on top of the usual insurance and reinsurance realm is cat bonds; you may or may not have heard of them. Catastrophe bonds are now a large part, at least in the earthquake realm, of the possible insurance products available to different entities. Effectively, they're insurance products sold by an entity, like a government, the city of Tokyo, or the country of Mexico; a risk modeling firm calculates the odds of the disaster occurring, just as you would for a long-term insurance product, and the investors get paid a fairly high rate of return, but they can lose their principal if the disaster hits. Whether the bond is paid out, and what the bond costs, are all determined by these nice hazard maps, and now we're turning toward more time-dependent components. The USGS is also involved because we are the independent entity that determines whether a particular earthquake triggered a particular bond, and, as I should mention, there's a huge amount of financial investment in these things, so they're pretty important to know about. The difference between cat bonds and regular insurance or reinsurance is that cat bonds are triggered parametrically and can be paid out immediately after an earthquake. A lot of different groups use ShakeMap, or they use the NEIC, National Earthquake Information Center, magnitude and location, to trigger the bond and its payout, rather than waiting, as with what is typically known as indemnity insurance, where you actually look at the damage, add up the claims over time, and then provide coverage. So this is a much faster way to provide resources after a major disaster, and it's become very popular. And these things do get triggered: this is an NEIC magnitude and location that triggered a $200 million payout for Peru; Ecuador has had a similar payout, and other countries, Mexico among others, over the years. These are important to understand: money makes the world go around, and a lot of the downstream use of this information is in the insurance, reinsurance, and other financial sectors. I did a little summary of this, but there's a really good report by the OECD on financial risk management, and it's an excellent way to get up to speed on this particular topic.
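As a toy illustration of the parametric-trigger idea, and emphatically not how any actual bond contract is written, here is a sketch where the payout depends only on the reported magnitude and on the epicenter falling inside a predefined box; all thresholds, coordinates, and amounts are made up.

```python
from dataclasses import dataclass

@dataclass
class ParametricTrigger:
    """Toy parametric trigger: pay out if the reported magnitude meets the
    threshold and the epicenter falls inside a predefined box. Real cat
    bond terms are far more detailed; this only illustrates that payout
    depends on reported source parameters, not on surveyed damage."""
    min_magnitude: float
    lat_range: tuple   # (south, north), degrees
    lon_range: tuple   # (west, east), degrees
    payout_usd: float

    def evaluate(self, magnitude: float, lat: float, lon: float) -> float:
        in_box = (self.lat_range[0] <= lat <= self.lat_range[1]
                  and self.lon_range[0] <= lon <= self.lon_range[1])
        return self.payout_usd if in_box and magnitude >= self.min_magnitude else 0.0

# Hypothetical bond covering a coastal zone:
bond = ParametricTrigger(8.0, (-12.0, -4.0), (-82.0, -75.0), 200e6)
print(bond.evaluate(magnitude=8.0, lat=-5.8, lon=-81.0))  # 200000000.0
```

The design point is speed: a trigger like this can be evaluated minutes after a reported magnitude and location, whereas indemnity insurance waits on claims.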
But the main focus here is really the uses of these different-timeframe, potentially time-dependent hazards. Let me just move this; Matt covered some of the users and uses and types of decisions, but let's look at them in terms of timeframes. We have the long-term PSHA, which is of course used for insurance and reinsurance planning, mitigation, and building codes, but also for these cat bonds and a lot of risk analysis; nothing surprising there. Induced earthquakes bring things down to a much shorter timeframe, and there too insurance is relatively responsive, yet not a lot changes in the longer-term picture for most products, including building codes, on the induced earthquake front; industry regulations, though, certainly depend on these kinds of observations, so that's a different timeframe. When we get to earthquake sequences, as Matt mentioned, there's a lot that can be done as long as there's confidence in the decision making and confidence in the science, which is our biggest challenge: evacuation preparedness, even staying outside under certain circumstances in vulnerable places, supplying relief supplies and getting them ready, other forms of mitigation, and then looking at building degradation, as Matt mentioned and as we'll talk about a little more. Also, in the course of an earthquake sequence, people ask what happens if something bigger hits, or what's next, and the use of scenarios in that case is extremely important to give people some information about what might be coming. Lastly, in the aftershock sequence, we have very important needs for knowing what's next and what may happen: urban search and rescue and other people on the ground, building tagging and other people in the field; this could also include shoring up buildings, looking at changes in building vulnerability, potentially evacuations, and reconnaissance plans. I mentioned equities market decisions and ILS, insurance-linked securities; there may also be financial decisions being made rapidly, on the fly, that we're not aware of. Cat bonds are publicly traded, so we know about those, but there are probably also financial decisions being made on the fly as we put out aftershock warnings and statistics. I'll also mention, and I think this is a personal view and fodder for the discussion, that these go from easier to harder as you go back across the time scales. The reason I say that is that for an aftershock sequence or an earthquake sequence, the main location, the main region, and the time frame have already been set; something has already happened. With induced earthquakes you have a regional footprint that's likely to be affected. But with earthquake cycles, both the time and the space are a complete question mark. There's a lot more to that story, though. So, earthquake cycles: we've actually been using time-dependent hazard assessments for risk and loss calculation for some time now. This is a 2008 paper for the world conference, and again it came out of the RMS group, so the risk modeling community has been on top of this for a while. This shows the ratio of time-dependent to time-independent residential losses in California.
Of course, things like the Hayward Fault and the southern San Andreas, all the way down to the Salton Sea, light up and are substantially higher than the background rate, so these things have been appreciated in the industry and modeled for some time; the challenge is always how good the input models are, but the engines for computing these are certainly there. When it comes to induced earthquakes, there too there's been some great work on looking at risk by taking the hazard, which is very time dependent. The map on the lower right shows the hazard map for a particular time period, at 10 percent annual exceedance, for 2015, versus what's happened more recently as the seismicity has gone down. This is an enormous change in the background level of seismicity, and it turns into very different loss estimates; these are AAL, average annualized losses. In 2015, obviously, people were very worried, and things have come back down. Interestingly, when you look at a map of the annualized loss over that year, the thing that shows up is not the areas of really high induced seismicity but the exposure in Oklahoma City relative to those events: in areas where there's very little exposure, the seismicity is going up but the risk is not, because the exposure is so low, while here is where the exposure is really concentrated. And compared to a return to close to normal, the annualized rates at the 2015 peak are very different in terms of loss estimates. We can also use these opportunities to help people plan for earthquakes. We have a new application called consequence-driven scenarios: in this case, Oklahoma's Department of Transportation wants to know what-if situations, and they want in particular to exercise several of their divisions after an earthquake, so they want to know what kind of earthquake could cause that type of problem. We can actually solve for the most likely earthquake, which is the lowest magnitude occurring at these division boundaries that would constitute damage to several of their bridges. They're very concerned about the time-dependent hazard they've seen, and now they're interested in doing something in the form of mitigation and planning, so we can help them with these types of scenarios when we see these elevated areas of seismicity.
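The annualized-loss bookkeeping behind maps like these reduces, for a stochastic event set, to rate-weighted losses. Here is a minimal sketch with hypothetical rates and losses, where a time-dependent model simply swaps in elevated annual rates, for example during an induced-seismicity peak.

```python
import numpy as np

def average_annual_loss(annual_rates, losses):
    """AAL for a stochastic event set: sum over events of
    (annual occurrence rate) x (loss if the event occurs)."""
    return float(np.dot(annual_rates, losses))

# Hypothetical three-event set (USD per event) and illustrative rates;
# the elevated case scales each event's rate up, as a peak year might.
losses = np.array([5e8, 2e9, 1e10])
rates_background = np.array([1e-2, 2e-3, 2e-4])
rates_peak_year = rates_background * np.array([8.0, 6.0, 3.0])
print(average_annual_loss(rates_background, losses))  # ~1.1e7
print(average_annual_loss(rates_peak_year, losses))   # elevated AAL
```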
When we look at sequences, Matt did a great job covering some of this, but there are other places in the world that are highly sequence-driven, and central Italy is certainly one of them; a lot of work has been done in Italy on the combination of time-dependent and time-independent hazard. The upper plot simply shows the difference between annualized losses in the active period of a sequence in central Italy versus what happens with just a Poisson background, comparing time-dependent and time-independent calculations for a random year. So these things are being done fairly routinely, and I think it's a great example of recognizing the time dependence in rate calculations. Also in central Italy, and I won't spend much time on this, when you start looking at ETAS and adding that to the background rate, you get a much higher potential for losses, and in this kind of calculation people are now looking not only at the change in rate from the time-dependent perspective but also at what happens if an earlier event damaged the structures: when you go from an intact structure to a damaged structure, the chances of exceeding collapse probabilities go up dramatically when you start with a building that's already damaged to different degrees, as is done in this calculation. Also in Italy there's operational earthquake forecasting being done in the background, fully functional though again dependent on the input model; our colleague Iunio Iervolino has been working on this, and it's something the Civil Protection has been aware of and looking at for some time. What you find, and what's always challenging about these calculations, is that when you run the model end to end, you get a probability of collapse over some time period, in this case a week, and that probability is very time dependent, depending on what's going on with the background seismicity. But the challenge is always this: the rate changes with time, yet you don't see an increase until something substantial happens. So the difference between operational earthquake forecasting and aftershock forecasting comes to the front here: at this point you can tell there's a big change, but that's also because the largest event in the sequence has already happened. And lastly, the aftershock sequence: work has been done for some time on risk changes during aftershock sequences, the largest example being the Tohoku event, where obviously a magnitude nine is going to have potentially very damaging magnitude sevens and eights over a long period of time, and that was certainly the case; these calculations can be computed and used for AAL and changes in rates for customers. Kishor, do you want to take over and talk about this very important topic of structures damaged by the main shock and what happens subsequently? Okay, thank you so much; that's a great background, Dave, and I'm excited to join here. Another disclaimer from my side: although I'm an engineer, I'm not representing the entire spectrum of engineering concerns that bear on these problems. As a student of this problem, I have a great opportunity to learn, working closely with USGS scientists, to think about these long and complex earthquake sequences and how engineers should think about modeling these sequences through to damage, loss, and risk.
The example shown here on your right is two earthquakes happening back to back, magnitude 5.9 and 5.8, in central Italy, and as you can imagine, depending on where your building is located, the amount of shaking will be very different. Essentially, a building located close to the second shock may experience much stronger shaking from it than from the first shock, even if the magnitude of the first shock is higher. The conundrum here is that seismologists typically say the aftershocks will be smaller and will decay over time; that is very true, but where those aftershocks happen, and how intense the ground shaking is at a given location, matters: if assets are sitting directly on top of those locations, the structural demands could be significantly higher. The right-hand panel is a composite ShakeMap of the two earthquakes, which highlights that in any typical earthquake sequence there can be multiple such cycles of shaking to which structures are exposed, and a lot of complexity in how the structure behaves: thinking about the left-hand panel, motion one may be weaker but induce some level of damage to the structure, and that creates additional problems if the second motion happens to be even stronger than the first. So, just as seismologists are studying time-dependent earthquake forecasts, engineers have not been left behind: many of my colleagues in the academic and research community started looking at this problem over 10 years ago, and the process of developing these models has evolved over time, which I will discuss on the next slide. Essentially, what engineers have done is take a structural system, develop a numerical model of it, and expose it to sets of records as mainshock and aftershock combinations. Before I explain this damage-dependent fragility modeling, let me make a quick remark. The building code design philosophy has relied heavily on the probabilistic hazard models that come out of the USGS hazard mapping team, and those are time-independent hazard models: the potential for multiple rounds of strong shaking in a short amount of time is not considered in the hazard modeling. Thus the traditional engineering way of thinking is that if you are designing for a very strong earthquake, something like an MCE-level earthquake with a roughly 2,500-year return period, or designing for a particular set of target ground motions that gives you the collapse performance you want, then the impact of a sequence of earthquakes and aftershocks is not directly considered in the design. However, the academic community has been looking at the problem carefully. The example here shows Meera Raghunandan's work on reinforced concrete frames, using mainshock and aftershock combination time histories as input to structural models to evaluate building performance, which depends on how the first level of damage influences how the second motion induces further damage or potential collapse in the structure.
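A minimal sketch of the state-dependent fragility idea being discussed: a lognormal fragility curve whose collapse median is lowered once the first shock has left the structure damaged. The median and dispersion values are illustrative placeholders, not values from any of the cited studies.

```python
from math import log
from statistics import NormalDist

def collapse_probability(im: float, median: float, beta: float) -> float:
    """Lognormal fragility: P(collapse | IM) = Phi(ln(im/median)/beta)."""
    return NormalDist().cdf(log(im / median) / beta)

# Hypothetical state-dependent fragilities: the collapse median drops
# once the first shock has left the frame moderately damaged.
MEDIANS = {"intact": 1.2, "moderate_damage": 0.8}  # Sa in g, illustrative
BETA = 0.4                                          # dispersion, illustrative

for state, med in MEDIANS.items():
    p = collapse_probability(0.7, med, BETA)
    print(f"{state}: P(collapse | Sa=0.7g) = {p:.1%}")
```

With these placeholder numbers, the same aftershock intensity takes the collapse probability from under 10 percent for the intact frame to well over 30 percent for the damaged one, which is the effect the studies above quantify rigorously.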
Next slide, please. Again, Professor Henry Burton and his colleagues have looked at this problem specifically for code-conforming reinforced concrete structures: they developed different archetype structural models for reinforced concrete buildings and looked at different thresholds of damage by inputting combinations of mainshock and aftershock time histories to evaluate the structural performance. The authors consistently find that if the first shock didn't produce a significant level of damage, then it doesn't matter too much; but if it introduced some level of damage, maybe slight or moderate, then the collapse performance under subsequent motions is significantly altered. Essentially, these things need to be considered systematically in future building codes. My next slide highlights that carrying this kind of hazard modeling into loss and risk has already been discussed, and this research shows there are ways to incorporate these complex hazard calculations end to end into the risk calculation, provided you make sure that both the fragility and the exposure are incorporated properly. In this particular study the authors haven't considered damage-dependent fragility, but there is definitely a possibility to enhance these results further. Thank you, Kishor; let me just finish up very quickly. For any collection of earthquakes, an aftershock sequence or a sequence of earthquakes, we can make a composite ShakeMap that tells us, at a particular location, in this case a bridge, the number of times it was shaken above some particular level, and that can be used for this kind of damage degradation over time, starting with a damaged structure rather than an intact one. I'm going to skip over these last things and just get to some of the important questions that I hope you can talk about.
One is to follow up on Kishor's point: research is needed on the post-mainshock behavior of structures. We only know how damaged structures behave from limited models, and it's very difficult to get information about what happened during the first shaking and then to have that already-damaged building be damaged again in a subsequent event; it's very unusual to have that well documented. And as some have mentioned, current loss models like PAGER do not cull the inventory or adjust the mainshock losses for subsequent events. Correcting for damage, changing the inventory for what is already damaged or collapsed from prior shaking, is really challenging; we don't know the inventories well enough, nor how they behave, nor how strongly they were shaken, to understand the changes in losses at this point. It's a great topic we should be spending time on. In fact, occupancy is a real challenge: people will obviously sleep outside, they'll leave buildings, and red-tagged buildings won't be occupied, so trying to predict losses from aftershocks presents serious challenges. Lastly, the handoff from time-dependent hazard to time-dependent risk depends on the quality and confidence of the hazard model. The question to me is not whether we can do this, because I think the models are there; the question is how well constrained the hazard models are, and we know from experience that the loss models are particularly uncertain, so we have to be concerned when we convolve them with an uncertain probabilistic hazard that's time dependent. Thanks; I'll leave it at that. Thank you, David, that's a really important part you covered, and I think we will save the questions for the end of the session. Our next speaker is Dr. Katsu Goda. Katsu is an associate professor and Canada Research Chair in multi-hazard risk assessment at Western University, Canada. His research focuses on catastrophic earthquake-related multi-hazard risk management from economic and societal viewpoints, and his interests are broad and multidisciplinary, covering a wide range of academic fields including engineering seismology and earthquake engineering. So, Katsu, take it away.
Okay, thank you very much for the introduction. I am going to step a bit away from shaking hazard and shaking risk and touch on an equally important topic related to tsunami, which is also affected by time-dependent hazard; that is my focus. Because I'm Japanese and witnessed some of the damage after the 2011 Tohoku earthquake in Japan, I'm going to start with this picture, taken, I think, two days after the earthquake. You see the ocean here, and the massive tsunami that attacked the coastline; the black areas are the inundated areas, and there were fully developed towns and communities around here, all washed away, so you can imagine how powerful this earthquake and tsunami were. The 2011 Tohoku earthquake and tsunami caused close to 20,000 deaths, which is very significant for a country like Japan, about a half-trillion dollars in economic loss, and the still-ongoing Fukushima Daiichi nuclear power plant crisis, which has also cost at least another half trillion dollars. This is a picture I took of how it looked on the ground: it really smelled bad, and it was just massive destruction. This is the damage survey done by the government after the event, color coded for the severity of damage: collapse happened almost entirely in the nearest community, which is about two kilometers from the coast. In this community about 50 percent of 3,000 buildings were washed away completely, and a lot of people were killed. I had been an earthquake engineer and also a seismologist, but after this event I transformed myself to work on tsunami, and I have been applying the so-called catastrophe risk modeling approach that Diego introduced at the beginning of this session: risk is equal to hazard times exposure times vulnerability. I'm focusing on the hazard part, but I also want to point out a key aspect that Kishor and David touched on in the previous presentation: an important step in transforming hazard into risk, which requires exposure and vulnerability, is the so-called fragility model. This is the actual damage survey result for 250,000 buildings, and if we plot the extent of damage as a function of inundation depth, how high the water was, we see this kind of trend: we can immediately see that if the inundation depth is higher, the proportion of severe damage becomes larger. If we turn this graph 90 degrees, we get the standard fragility model, which usually takes the tsunami hazard parameter on the x axis and the proportion of damage on the y axis. That's how we can develop the tsunami fragility curve, which feeds into the exposure and vulnerability components so we can analyze the losses.
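As a sketch of how a fragility curve like this might be derived from binned survey data, here is a least-squares fit of a lognormal curve to hypothetical depth-versus-damage-proportion points; actual practice typically uses maximum likelihood on building-level data, and scipy is assumed to be available.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

def lognormal_fragility(depth, median, beta):
    """P(damage state reached | inundation depth), lognormal form."""
    return norm.cdf(np.log(depth / median) / beta)

# Hypothetical binned survey data: inundation depth (m) versus observed
# proportion of buildings reaching the damage state.
depths = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0])
p_obs = np.array([0.02, 0.10, 0.45, 0.70, 0.85, 0.97])

(median, beta), _ = curve_fit(lognormal_fragility, depths, p_obs, p0=[2.0, 0.5])
print(f"fitted median depth = {median:.2f} m, beta = {beta:.2f}")
```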
The catastrophe risk modeling framework I am applying, in the context of time-dependent hazard, can be divided into two major components. The first component is the earthquake occurrence model, which combines a renewal model with a magnitude model; I drew those as separate bubbles, but in fact they need to be treated as interdependent. Once we have the renewal model, the time-dependency model, and the magnitude model, we can generate the stochastic event catalog. On the other hand, we can simulate the earthquake slip model, then perform the ground motion simulation and also the tsunami simulation, and combining those with the exposure and vulnerability models we obtain the loss distribution. Once we have the event catalog and the loss distribution, we can come up with the time-dependent multi-hazard loss estimation; that is the framework I have been applying in my research. To give a little more information about the time-dependent aspect, because that is the focus of this seminar: the renewal model I am applying is the standard one. The inter-arrival time distribution can be specified; it could be lognormal, it could be a BPT model or a Weibull model. An important aspect of this renewal model in the risk and, of course, hazard calculation is to treat the first event separately, because at the present time we have already observed that no event has occurred since the last event. We need to take that information into account, so we need to re-scale and shift the distribution, while subsequent events can be treated with the ordinary renewal model. This renewal model needs to be coupled with the magnitude model, and which magnitude model is applicable depends on the specific region we are interested in; the seismicity and the tectonics influence that choice. In my research I use the Gutenberg-Richter model as well as the characteristic model, but those have to be seismic-moment matched so that the same amount of energy is released. By combining the renewal model and the magnitude model, we can come up with the stochastic event catalog. For the loss distribution part, we can generate a range of slip distributions: across different magnitude ranges the slip distribution can change, and the geometry of the source model can change; there are many scaling laws available in the literature, so we can implement those to come up with the stochastic source models. For each stochastic source model we run ground motion models with spatial correlation to simulate the seismic intensity at each building location, and we can also solve the shallow water equations to simulate tsunami inundation. Then we place the exposure model, the building distribution and the cost model, which need to be specified, along with the fragility curves for shaking and for tsunami. Once we have the hazard information from the footprint simulation and the fragility model, we can combine them probabilistically to obtain the loss distribution. After this probabilistic calculation, what we get is, as David introduced, an exceedance probability curve in terms of losses.
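As an illustration of the occurrence side of this framework, below is a minimal sketch of generating one stochastic catalog of event times from a BPT renewal model, with the first event conditioned on the time elapsed since the last event, as Katsu describes. All parameter values are hypothetical, and attaching magnitudes from a moment-matched Gutenberg-Richter or characteristic model is omitted for brevity.

```python
import numpy as np
from scipy.stats import invgauss

def bpt(mean_yr, alpha):
    """BPT (inverse-Gaussian) inter-event time distribution with the given
    mean recurrence and aperiodicity (coefficient of variation) alpha."""
    # scipy's invgauss(mu, scale=s) has mean mu*s; choosing s = mean/alpha^2
    # and mu = alpha^2 reproduces mean = mean_yr and CoV = alpha.
    return invgauss(mu=alpha**2, scale=mean_yr / alpha**2)

def simulate_catalog(mean_yr=100.0, alpha=0.5, elapsed_yr=10.0,
                     horizon_yr=500.0, rng=np.random.default_rng(1)):
    """One realization of event times (years from now) over the horizon."""
    dist = bpt(mean_yr, alpha)
    times = []
    # First event: draw from the inter-event distribution conditioned on the
    # elapsed time since the last event (rejection sampling of T > elapsed).
    t = dist.rvs(random_state=rng)
    while t <= elapsed_yr:
        t = dist.rvs(random_state=rng)
    t -= elapsed_yr                      # shift to "years from now"
    while t < horizon_yr:
        times.append(t)
        t += dist.rvs(random_state=rng)  # subsequent events: ordinary renewal
    return np.array(times)

print(simulate_catalog())
```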
In my case, given an exposure dataset, I can produce, say, the blue curve, which is only for the shaking loss or the tsunami loss; but I can also combine, event by event and building by building, the earthquake and tsunami losses and sum them up to come up with the multi-hazard loss curve. I am going to talk mainly about this green curve rather than the individual earthquake and tsunami curves. For risk management purposes we can define some sort of critical scenario. The critical scenario can be defined in different ways, but in my research I tend to use the typical return periods used for risk management, such as the 100-year, 500-year, or 1,000-year return period level in terms of total losses. If I take three points from the loss exceedance curve, then a particular scenario can be extracted; that could be magnitude 8.2, 8.8, or 9.0, depending on the probability level we look at. Given that source, we can pull out the corresponding shake map and also the tsunami inundation map, so this can be considered a multi-hazard loss and joint hazard map. I have found this kind of presentation useful for risk communication purposes, because it shows in one figure what kind of scenario we are discussing, what kind of shaking damage could happen, and what kind of tsunami damage could happen. Now I am going to talk about the renewal, time-dependent model, which I implemented as part of my recurrence model. Just as an illustrative primer, the inter-arrival time distribution can be specified in the renewal model framework. The baseline would be the exponential distribution, which corresponds to the time-independent Poisson model and is shown in blue, but we can also specify popular models such as the lognormal, the Brownian passage time (BPT) model, or the Weibull model; those are shown in different colors but with the same mean occurrence rate, a 100-year mean recurrence period. You can see that depending on which distribution type we adopt, we get very different temporal behavior of earthquake recurrence. The renewal model is flexible in the sense that we can capture different degrees of confidence about the recurrence period: if we are not that confident, we might assign a coefficient of variation, which is essentially the standard deviation divided by the mean, of 0.5 or an even larger value. We can also take into account the time elapsed since the last major event: if it is zero, starting right after the major event happened, then all the distributions start from zero; but if we are in the middle of the cycle and some time has already passed, we need to shift and re-scale those probability distributions. So the renewal model is flexible enough to take these important aspects into account for hazard and risk modeling. Now I will show a brief example illustrating the sensitivity of the time-dependent model in the risk calculation. This figure shows the exceedance probability curves for the combined earthquake and tsunami loss for Iwanuma in the Tohoku region, which I showed in the picture earlier.
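The shift-and-rescale step Katsu describes is just the conditional probability of rupture given survival: P(event in (T, T+Δt] | no event by T) = (F(T+Δt) − F(T)) / (1 − F(T)). A minimal sketch comparing that quantity across the distribution families he names, all with the same hypothetical 100-year mean recurrence and a coefficient of variation of 0.5, might look like this:

```python
import numpy as np
from scipy.optimize import brentq
from scipy.special import gamma as G
from scipy.stats import expon, invgauss, lognorm, weibull_min

MEAN, COV = 100.0, 0.5   # hypothetical mean recurrence (yr) and aperiodicity

# Build each inter-event time distribution with the same mean and CoV.
sigma_ln = np.sqrt(np.log(1 + COV**2))                       # lognormal shape
ln_dist  = lognorm(s=sigma_ln, scale=MEAN * np.exp(-sigma_ln**2 / 2))
bpt_dist = invgauss(mu=COV**2, scale=MEAN / COV**2)          # BPT ~ inverse Gaussian
k = brentq(lambda k: G(1 + 2/k) / G(1 + 1/k)**2 - (1 + COV**2), 0.5, 20.0)
wb_dist  = weibull_min(c=k, scale=MEAN / G(1 + 1/k))         # Weibull shape from CoV
ex_dist  = expon(scale=MEAN)                                 # Poisson baseline

def cond_prob(dist, elapsed, window):
    """P(event within `window` yr | no event in the last `elapsed` yr)."""
    F = dist.cdf
    return (F(elapsed + window) - F(elapsed)) / (1.0 - F(elapsed))

for name, d in [("exponential", ex_dist), ("lognormal", ln_dist),
                ("BPT", bpt_dist), ("Weibull", wb_dist)]:
    early, late = cond_prob(d, 10.0, 50.0), cond_prob(d, 100.0, 50.0)
    print(f"{name:12s} 50-yr prob: {early:.2f} early in cycle, {late:.2f} when due")
```

The exponential case prints the same value in both columns, its memoryless property, while the renewal distributions give low probabilities early in the cycle and elevated ones when the event is due, which is exactly the sensitivity shown next.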
If we adopt the time-dependent models, we obtain these curves. Considering that ten years have passed since 2011, and that the typical mean recurrence period for a magnitude 8.3 event offshore Tohoku is about 100 years, only about 10 percent of the mean recurrence period has elapsed; depending on which model is adopted, the time-dependent models produce lower risk estimates than the time-independent model, which is in blue. But if the recurrence is due, say 100 years have already passed and the mean recurrence period is 100 years, which is somewhat similar to the Nankai situation in Japan, then the time-dependent models, as you would expect, produce higher risk estimates, and the ordering of the risk estimates is essentially reversed; it really depends on which model is chosen. We can also do a similar sensitivity analysis by fixing the probability distribution, here to the Weibull distribution, and changing the confidence about the recurrence, represented by different coefficients of variation, say 0.3, 0.5, and 0.7; a larger value corresponds to a flatter distribution and gets closer and closer to the time-independent Poisson model. Again you can see that for the current situation in Tohoku, higher confidence in the recurrence process produces a much lower risk estimate, but if the event is due, say 100 years have already passed, the order of the risk curves is reversed. So it is important to consider a range of values, but in reality it is not easy, in my opinion, to specify which parameter or which probability distribution is the correct one. The practical approach adopted in PSHA and PTHA is to embed that information in a logic tree, so we can consider a range of combinations of those models. Here I considered three different time-dependent distribution models, three different coefficient-of-variation values, and two different magnitude models, which I did not discuss much today. From the 18 different curves of the logic tree, a mean estimate can be obtained, shown in red, and that can be compared against the time-independent model, which is in purple. If the event is due, say 100 years have already passed for a process with a 100-year recurrence period, then the risk estimate becomes much more significant, so in that case we really need to consider a wide range of possibilities as part of this logic tree approach. This is my last slide, just to summarize: time-dependent hazard and risk models for seismic and tsunami hazard are important, especially for long-term infrastructure risk management and for risk financing, so the insurance industry is an important application. I did not talk much about segmentation, but the other presenters have pointed out that segmentation of the fault plane is an important aspect, and that is also applicable to subduction zones such as Tohoku and Nankai.
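A logic tree of the kind Katsu describes reduces, computationally, to a weighted average of exceedance curves over the branch combinations. This sketch shows the bookkeeping for the 3 × 3 × 2 = 18 branches he mentions; the branch weights and the per-branch curve function are placeholders, not his actual model.

```python
import itertools
import numpy as np

# Placeholder branch sets and weights (weights sum to 1 within each set).
distributions = {"lognormal": 0.3, "BPT": 0.4, "Weibull": 0.3}
covs          = {0.3: 0.3, 0.5: 0.4, 0.7: 0.3}
mag_models    = {"Gutenberg-Richter": 0.5, "characteristic": 0.5}

loss_levels = np.logspace(6, 10, 50)   # loss axis for the EP curves ($)

def branch_ep_curve(dist, cov, mag_model):
    """Placeholder: in a real model this would return the exceedance probability
    at each loss level from a full hazard-to-loss simulation for this branch."""
    rng = np.random.default_rng(hash((dist, cov, mag_model)) % 2**32)
    return np.sort(rng.uniform(0, 0.01, size=loss_levels.size))[::-1]

mean_curve = np.zeros_like(loss_levels)
for (d, wd), (c, wc), (m, wm) in itertools.product(
        distributions.items(), covs.items(), mag_models.items()):
    mean_curve += wd * wc * wm * branch_ep_curve(d, c, m)   # 18 branches total

print(mean_curve[:5])   # weighted-mean exceedance probabilities (the red curve)
```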
There are several applicable models in the literature that can consider multi-segment rupture in a time-dependent way, and in all cases I would emphasize that a logic tree approach needs to be taken. The aftershock model can also fit nicely into this time-dependent catastrophe model framework, although I did not talk about that today. The last thing I think is important to point out is that the time-dependent model is really useful when we want to expand the horizon of risk modeling, for example to consider climate risk and sea level rise; an integrated multi-hazard, cascading, compounding risk framework requires this time-dependent model. Thank you very much. Thank you, Katsu, great presentation. I guess we are going to follow the format of the first session, jump to the last talk, and then gather again for the Q&A at the end of this session. Our last speaker today is Dr. Marleen Nyst. She is a senior director at Risk Management Solutions, where she has worked for over 16 years; she leads the earthquake source modeling team for the RMS earthquake models, and for the RMS North America earthquake model release Marleen oversaw the hazard component development and its validation. Marleen, over to you. Thank you so much. Can you hear me? My name is Marleen Nyst, as I was just introduced, and I work for Risk Management Solutions in the earthquake source group. RMS has been developing catastrophe risk models mostly, but not only, for the insurance industry, and to name a few, we have windstorm, flood, and hurricane models; it is not just earthquake. For this talk I would like to focus a little bit on the components of our risk model, with just one slide on what the risk framework looks like, and then mostly on examples of the time-dependent impact on risk metrics. First I will outline a few standard risk metrics that have already been discussed, so I have been set up really well here and do not need to spend too much time on them. Then I will show the influence of time dependence on average annual loss and on the EP, the exceedance probability curve, with a few examples from the San Francisco Bay Area and from Istanbul, Turkey, and I will also compare the impact on risk of different time-dependent models, because it really matters which type of time dependence you adopt. In this presentation I will mostly talk about time-dependent renewal models, which we call short term and which I think Matt would call medium term, and their implications for risk compared to the time-independent, long-term models. We have some way of treating aftershocks, and we definitely incorporate earthquake sequences, but I will mostly be talking about these time-dependent renewal models applied to year-to-year and multi-year insurance contracts, where the time dependence is modeled as a probability for the fault sources in a region, not as real-time or short-term aftershock-activity earthquake forecasts, although that is definitely a challenge we need to tackle in the next five years or so.
This is the earthquake risk modeling framework as I see it. We have the stochastic event set, or simulated event periods, which is what we are moving towards, and I will say a little bit about that towards the end. We define a set or series of earthquake events by their source geometry, their magnitude, and their probability or frequency. Then, if we look at the impact of a hazard model on an insurance portfolio, we calculate the severity of shaking at each location in the portfolio due to this stochastic event set. The severity of shaking is then adjusted by taking the geotechnical component into account, which consists of soil amplification, landslide, and liquefaction at that particular location. Then comes what we call the vulnerability or building response component, where we look at the value of the structures, lives, or infrastructure at each location and the mean damage due to that severity of shaking. Finally, in the risk quantification, we calculate the loss impact, so we move into the financial part of the equation. This has come up a few times before: the exceedance probability curve is an important metric that we use all the time when we look at risk. It plots the probability of exceeding a particular loss level in a year: on the x-axis we have loss, and on the y-axis we have the annual probability of exceedance, here expressed as a percentage. At zero percent there is no probability of exceeding that particular loss level in a year, and at 100 percent it is certain that you will exceed that level of loss. To build an EP curve for an insurance portfolio, we calculate the loss at each location for each event, and then we build the curve by starting with the event with the highest loss, moving to the second-highest loss, combining the probabilities, and continuing all the way down to the lowest loss. Return period losses are really important parameters for solvency and the management of portfolios, and oftentimes they are also tied to insurance requirements; it depends a little bit on the country and the region, but for instance in Canada there is a capacity requirement of a one-in-500-year loss, so if you want to do business in Canada you have to make sure that you have that money in the bank. Average annual loss has also come up a few times already, so I do not need to say too much about it, but basically, for a portfolio, you take the loss of each event multiplied by its annual rate and sum over all the events to get the average annual loss on your portfolio; looking at the exceedance probability curve, the average annual loss is the integral of that curve. What we usually use it for is to determine the premium: insurance premiums depend very much on the average annual loss because it is, in a way, an identifiable risk driver, and in most regions the average annual loss is mostly controlled by the more moderate-magnitude, more frequent earthquakes.
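A minimal sketch of that bookkeeping, building an exceedance probability curve from an event loss table and integrating it into an average annual loss, might look like the following; the event losses and rates are made up, and it assumes Poisson event occurrence when converting annual rates to annual exceedance probabilities.

```python
import numpy as np

# Hypothetical event loss table: total portfolio loss ($) and annual rate (1/yr).
losses = np.array([5e9, 2e9, 9e8, 4e8, 1e8, 5e7])
rates  = np.array([2e-4, 1e-3, 4e-3, 1e-2, 5e-2, 2e-1])

# Sort events from highest to lowest loss, accumulate their rates, and
# convert the cumulative rate to an annual exceedance probability (Poisson).
order = np.argsort(losses)[::-1]
loss_sorted = losses[order]
cum_rate = np.cumsum(rates[order])
ep = 1.0 - np.exp(-cum_rate)     # P(at least one event with loss >= L, in 1 yr)

# Average annual loss: sum of each event's loss times its annual rate.
aal = np.sum(losses * rates)

for L, p in zip(loss_sorted, ep):
    print(f"loss >= ${L:,.0f}: annual exceedance prob {p:.4f}")
print(f"AAL = ${aal:,.0f}")
# A one-in-500-year loss is read off where ep crosses 1/500 = 0.002.
```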
What we see is that the largest contributions indeed come from the magnitude range between about 6.5 and 8. Now I would like to move on to a few examples of the impact of time dependence on AAL and EP. Here is the example of average annual loss in California: in the front, in dark blue, we see the time-independent contribution by magnitude bin, and behind it, in light blue, the time-dependent average annual loss contribution. For this we adopted the BPT time-dependent implementation of Field and others in 2015 that Morgan highlighted; we did not adopt the ETAS component of that model, so that is lacking. What we see in general is that statewide, and it depends a little bit on what line of business or what region you look at, the time-dependent average annual loss is about 10 percent higher than the time-independent average annual loss, and there is a shift in the contribution to average annual loss from the magnitude 7.5-to-8 bin towards the 6.5-to-7 bin. How that works can be seen in the map on the right, which shows loss cost, that is, normalized average annual loss, for the time-dependent model compared to the time-independent model. Dark orange shows regions where the time-dependent loss is higher than the time-independent loss, light orange is where they are similar, and the very light, yellowish orange is where the time-dependent model generates loss that is lower than the time-independent model. What you can actually see in the Bay Area is the contrast between higher time dependence tied to the Hayward and Calaveras fault systems, which are considered to be due, and lower time dependence tied to the San Andreas fault system, which had its large-magnitude event in 1906. Premiums are set based on this high level of granularity, so it is important to take this into account. A completely different example is the effect of time dependence on average annual loss and exceedance probability in Turkey, where the North Anatolian Fault Zone, a strike-slip fault zone in the northern part of Turkey, is a classic example of a series of events following each other in a short amount of time. This figure is from a U.S. Geological Survey paper from a while back, and there have been more updates since then, but it highlights really well what is going on there. In general, and this has been identified by lots of other research projects as well, there is a gap under Istanbul where the North Anatolian Fault Zone has not ruptured, whereas almost everywhere else along the zone it has. On the left side we see the exceedance probability curve for a countrywide portfolio in Turkey.
Because the exposure in Istanbul is so high and the risk is so high, you can basically say that even with a countrywide portfolio, your risk is controlled by what is happening in Istanbul. If we look at the exceedance probability curve, with return period on this axis, the events in the tail are megathrust earthquakes on the Hellenic subduction zone and very-low-probability, large-magnitude background sources under Istanbul, with return periods in the thousands of years, so very unlikely. Then we have a series of events that are multi-segment ruptures on the North Anatolian Fault Zone, rupturing more than just single segments, fairly large-magnitude events that also have fairly low probability. But when we get into return periods in the hundreds of years, we see that North Anatolian Fault events within the Sea of Marmara seismic gap start to play a big role. In blue we see the time-independent exceedance probability curve, and in red the time-dependent exceedance probability curve, where we adopt a very extreme form of time dependence that says the event can happen within the next few years. We see, first of all, that the time-dependent average annual loss can be about 40 percent higher, just because Istanbul is such an important place, with significant increases at return periods between 10 and 200 years. That turns out to be a really important factor, because if you would like to do business in Europe you have a reserve requirement of the one-in-200-year loss, so what kind of time-dependent model you adopt for the North Anatolian Fault Zone is very important. Cascadia is another example where we should take another look at how different time-dependent models impact risk. A time-dependent model just for the megathrust can be built in several ways. If we look at the 10,000-year turbidite history that was presented and published extensively by Goldfinger in 2012, with a few more studies after that, we see that averaged over the 10,000-year turbidite history the megathrust recurs about every 525 years, and the time since the last event is 322 years; that means we are a little past the middle of its recurrence. If we model time dependence with a simple BPT based on that 10,000-year average, it is not very impactful: in terms of return period losses and AAL, we get maybe a few percent higher. However, in the same study, Chris Goldfinger also looked at the possibility that the Cascadia subduction zone ruptures in clusters, and what they concluded was that the probability that the Cascadia subduction zone is currently within a temporal cluster is significantly higher than the probability of it being in a gap. The last cluster has an average recurrence of about 330 years, with recurrence intervals varying between about 180 and 500 years.
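To see why the clustered interpretation is so much more impactful, one can plug both mean recurrences into the same conditional BPT calculation sketched earlier. A rough illustration, with an assumed aperiodicity of 0.5 (a placeholder, not a value from the talk):

```python
from scipy.stats import invgauss

def bpt_conditional(mean_yr, elapsed_yr, window_yr, alpha=0.5):
    """P(rupture within window | quiet for elapsed) under a BPT renewal model."""
    dist = invgauss(mu=alpha**2, scale=mean_yr / alpha**2)  # mean=mean_yr, CoV=alpha
    F = dist.cdf
    return (F(elapsed_yr + window_yr) - F(elapsed_yr)) / (1.0 - F(elapsed_yr))

# 10,000-yr average recurrence (~525 yr) vs. within-cluster recurrence (~330 yr),
# both conditioned on 322 years elapsed since the 1700 event, over the next 50 years.
print(f"10,000-yr average: {bpt_conditional(525.0, 322.0, 50.0):.2f}")
print(f"within cluster:    {bpt_conditional(330.0, 322.0, 50.0):.2f}")
```

With the 330-year cluster recurrence, the elapsed time already nearly equals the mean, so the conditional 50-year probability, and hence the modeled loss, jumps dramatically relative to the 525-year interpretation.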
That is about the same as the time since the last event, so if we incorporate that extreme version of time dependence, the average annual loss suddenly goes up by almost a factor of 10, and the 500-year return period loss is about three times higher than in the time-independent model or the time-dependent model built on the 10,000-year average. That is important for Canada because, as I mentioned before, there is a 500-year return period capacity requirement there. I have one more quick example, which I hope I can discuss quickly; it is a nice follow-up to Katsu's presentation, where we also looked at different models for time dependence in New Zealand, especially for the Wellington and Ōhariu faults. We took into account the inter-event time distribution of events on the Wellington Fault to come up with a better model than just the BPT. What we did there was basically try to see if we could come up with a weighted combination of BPT, lognormal, and Weibull that fits the inter-event distribution better than the BPT alone. What we found, especially for the Wellington and Ōhariu faults, for which there is a good set of data available, though not an enormous amount, is that a weighted approach matches the data better; we find about a thousand-year recurrence on the Wellington Fault and 2,300 years on the Ōhariu. Compared to the BPT, those numbers are very different, and so they will definitely result in a very different risk profile for the city of Wellington. And here is my last slide, where I will focus on the opportunities; I do not necessarily need to discuss the challenges at this point, maybe we can talk about those later. I think one of the more exciting things we are working on these days is coming up with a better description of the temporal behavior of earthquakes: we really are moving towards simulated event periods rather than just static event sets, which would allow the implementation of more complex time-dependent behavior like clustering and damaging aftershocks. Thank you very much. Thank you so much, great talk. We are jumping to the panel discussion for the second session, and I would like to hear from the panel. There are some questions answered already in the Q&A, and I want to revisit those; we could use some of them for the general discussion, but there is one that is relevant to buildings and might be an interesting one to cover. If I may, though, I want to start with a question that I have been wondering about. The large 1999 earthquake in Turkey, the magnitude 7.4 in August, a few months later triggered, quote-unquote, which is a time-dependent effect, another 7.2, with huge damage and loss, and at that time we were not really talking about time dependence. Can you elaborate on that? Coulomb stress was calculated; how did it do, was it enough, and what can we do for large earthquakes like these in very vulnerable places like Istanbul or other major cities? Yeah, definitely, there are a few Coulomb stress models around.
Part of that high probability on the segment of the North Anatolian Fault Zone just south of Istanbul comes not just from its being a gap in the record, but also from some of those Coulomb stress transfer models. There is an interesting counter-argument, though: a large earthquake on the segment just west of the seismic gap, which ruptured in the early 20th century, may have ruptured much farther towards the east, and that would have taken some of the stress away again. So there is some debate about how long the rupture would be on the segment that has not yet ruptured under Istanbul, but it is definitely a major concern. Yeah, thank you. Torsten, I think you are next in the order of questions. This one is for Dr. Jaiswal and Dr. Wald, about the hysteresis in built structures, which I found very interesting. Thinking about the degree of reversible versus permanent deformation, I wonder if you could clarify to what extent you have to be near the design specs of the building to have that hysteresis effect. I would have expected that you need to have nearly collapsed the structure to move into the ductile, plastic regime, but that is probably not the case; how far do you need to go, and what are the design goals that engineers aim for in terms of this hysteresis? That is a great question; let me attempt to shed some more light on the issue. The examples that I provided are essentially code-conforming structures, designed by engineers to certain thresholds of code criteria, and the research community has taken those structures to evaluate what happens in a situation where a significant aftershock creates an inelastic demand on the structure. In essence, researchers have found that if a well-designed structure has not seen even a yield-level acceleration demand, there really is no significant alteration of its behavior in a post-main-shock situation. Ten or twelve years ago, a group including Professor Katsu here looked at this problem carefully, passing main shock and aftershock sequences through a prototype structure and trying to understand at what level you leave the elastic phase of the structure and reach into the inelastic phase, and, once you go into the inelastic phase, how many cycles the structure goes through before the strength and stiffness deteriorate significantly, enough to introduce an alteration in the dynamic behavior of the structure. That could be anything from period elongation to such a significant loss of stiffness that the structure cannot go back to its previous state, and it really depends on the complexity of the ground motion that you are exposing the structure to. This research is still evolving, and the conundrum here is that seismologists usually say an aftershock is a smaller event, so you would expect smaller shaking. That is generally true from the seismological and ground-motion-records perspective, but you never know where you are on the map; even with the best ETAS models, given the way we forecast those points on the map for aftershocks, any one of them can produce a significant inelastic demand on a structure, specifically on structures that are not ductile designs, and you have a large number of them in the rest of the world.
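As a toy illustration of the yield threshold Kishor describes, here is a minimal elastic-perfectly-plastic single-degree-of-freedom sketch: below the yield strength the response is fully reversible, while a pulse strong enough to yield the system leaves a permanent offset that a subsequent, smaller shock then acts upon. Every parameter here is hypothetical, and this is a crude explicit integrator, not a production structural-analysis tool.

```python
import numpy as np

def epp_sdof(ag, dt=0.005, period_s=0.5, zeta=0.05, fy_over_m=1.5):
    """Displacement response of an elastic-perfectly-plastic SDOF oscillator.
    ag: ground acceleration history (m/s^2); fy_over_m: yield strength / mass."""
    omega = 2.0 * np.pi / period_s
    k, c = omega**2, 2.0 * zeta * omega        # stiffness, damping (per unit mass)
    u = np.zeros(len(ag)); v = 0.0; fs = 0.0   # displacement, velocity, restoring force
    for i in range(len(ag) - 1):
        a = -ag[i] - c * v - fs                # equation of motion (per unit mass)
        v += a * dt
        u[i + 1] = u[i] + v * dt
        # Elastic trial force increment, capped at yield: this cap is what
        # produces the hysteresis and any permanent (plastic) offset.
        fs = np.clip(fs + k * (u[i + 1] - u[i]), -fy_over_m, fy_over_m)
    return u

t = np.arange(0, 20, 0.005)
main  = 6.0 * np.sin(2 * np.pi * 2.0 * t) * (t < 2)                 # strong main-shock pulse
after = 2.0 * np.sin(2 * np.pi * 2.0 * t) * ((t > 10) & (t < 11))   # weaker aftershock
u = epp_sdof(main + after)
print(f"residual drift after main shock: {u[t.searchsorted(9.0)]:.3f} m")
print(f"peak response during aftershock: {np.abs(u[t > 10]).max():.3f} m")
```

Halving the main-shock amplitude so the system never reaches yield makes the residual drift vanish, which is the "no yield demand, no altered behavior" finding in a nutshell.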
In those places you cannot expect cyclic behavior, with structures coming back to normalcy; they simply do not have the capacity to come back, and those structures could be a significant problem in terms of performance. I hope that gives some of the answer. Let me add one thing, Torsten, and that is that the engineered structures are typically elastic and have this kind of behavior; the world, though, has buildings that collapse because they are brittle, and they typically do so based simply on the strongest shaking level. They can be precarious after they are damaged, but brittle structures are the real culprit around the world, not the engineered structures. Just to follow up on that: a long time ago I saw some studies using GPS in buildings for response or behavior monitoring. I know buildings, especially important ones, are now heavily instrumented, but is GPS still being used or relied on? I can tackle that. There is a lot of structural health monitoring, typically of expensive high-rise structures; a lot of them are in Japan, some in California, and they are typically accelerometer-based. The well-monitored structures have accelerometers on enough floors that you can look at the displacement between the floors, which is referred to as the interstory drift, and that is the most important measurement of the damage to a structure and its potential for collapse. You can do that with GPS on the roof, but one of the tricks in Tokyo and other big cities is having GPS at a variety of floors to get the displacements at the various floor levels. You can also get those by double integrating acceleration, but it is hard to get GPS in all places where you may have lots of signal shadows in a dense urban environment. Okay, great. Donna? Yes, thanks to all the speakers for your excellent talks. I have a very general question, and that is that I would be interested in your assessment of the largest source of uncertainty in these loss estimations. We heard this morning about advances and challenges in understanding time-dependent earthquake behavior, but then there are all the other factors we have heard about this afternoon. I was just wondering where within that space you see the largest uncertainty, and how we move forward in trying to tackle it. I think all of the above, but researchers have looked at this problem carefully from the loss modeling perspective, and they generally tend to agree that the ground motion uncertainties are the biggest contributor to the fluctuation in the losses that we calculate from any analysis. Having said that, that problem, in my mind, is very well studied by seismologists; thanks to them we have a better handle on the uncertainty of the ground motion part. The same level of rigor and effort is still not happening in the exposure and loss modeling world, and many of the loss modeling professionals are basically working like Marleen, behind the scenes, so the research is happening behind closed doors and not so much in the open environment. Those things will probably change because of societal demands, and that level of interest definitely exists in terms of what those uncertainties mean for the bottom line. The research by Ned Field and others highlights the issues and asks the pertinent questions
about this kind of modeling framework, and again, we need to do more before we can really answer such a question. Let me add to that, because it is really vital: if you have a ShakeMap and you know the shaking right next to your structure, you know at least the peak ground motions, and that reduces the probabilistic part of the problem, so having a good ground motion measurement is really important. You can use either the peak motion or the time series, which is even more useful for doing the structural analysis. It really depends on whether your question is for one building or for the whole loss for a given earthquake. When you look at a single building, it is of course about how well you know the design of the building, how well you know the response of the building, and how well you recorded the motion really close by. For a whole city, though, it is much more challenging, because the ground motions vary, but also because you typically do not know the entire inventory: you do not know what the buildings are made of or how they are designed, or the even more challenging piece, which is where people are at the time of the earthquake. Those are really big limiting factors. All right, thanks everybody for indulging my very general question. There is one question I want to take before Jeff and Torsten, for Katsu: how did you estimate losses offshore, and what did they include? I am not sure what the question means by offshore. Well, maybe the chat will help; if you could explain the question a little more in the chat, in the meantime we will move to Jeff. Okay, thanks. I have a question that was inspired by Marleen's talk. As you were talking about the average annual loss, I can easily understand why that is a really important measure, but then my brain wandered to: what is the worst annual loss? That is probably not well defined, but how much dispersion is there? Can we actually quantify the expected dispersion in annual losses, and is that something we can do pretty accurately when we compare against actual annual losses over time, or is that something that still needs a lot of work to get right?
What we usually do is normalize the average annual loss: we divide it by the exposure, and then we can compare average annual loss globally. In the high-exposure countries that also have a very lively insurance industry, we have some understanding of what that average annual loss should be, because insurance companies tell us. In countries like Japan and Italy, where the insurance industry has deep penetration, as we say, and in some countries in Central and especially South America, we can really make sure that we match the average annual loss; while we are developing a model, that is basically the metric we use the most to make sure we have it right. The exceedance probability curve is, just because of the huge uncertainties, relatively easy to keep in the right order of magnitude, though it is a little more complicated in that sense. We have a fairly good understanding, and oftentimes it is fairly intuitive what the loss cost values, the normalized average annual loss is called the loss cost, should look like countrywide. So we have a few benchmarks, and then we do a lot of relative extrapolation: if you know the loss cost in this country is this much, that one should be higher. That is how we calibrate our models. Thanks. Okay, thanks; actually, going back to Katsu, I think you are going to answer that question now. Sure, I already answered in the chat, but all the buildings in my analysis are on the ground; nothing is offshore. But if there is something offshore, say a wind turbine or some sort of oil rig, and if we have a fragility curve for it, then we can calculate the tidal, the wind, and also the tsunami losses, and potentially the shaking losses if there is a foundation built into the seabed. That is possible, but again, my calculations were done for houses. Yeah, that makes sense, thank you. I will move on to Mark. Yeah, actually I was going to follow on that question a little bit myself. Maybe I missed this, but for hazards like a megathrust earthquake, which can generate a tsunami that can impact people thousands of kilometers away, if you think of the Sumatra event, there were deaths well away from the epicenter, I am just wondering: do these models take into account that wide range of potential losses, or are they really more focused locally? For my calculations, everything was done for the local buildings, but if we do the proper tsunami simulation, using spherical coordinates instead of Cartesian coordinates, then when, say, the Tohoku earthquake happens, the wave travels across the Pacific to Hawaii and California.
Then, for example, for important vessels or boats harbored in a port, some sort of current would develop in the port that might damage those boats, and that kind of thing can be calculated. There has been a lot of development, in research and also in application, of vessel damage fragility curves, so that is an important aspect from the commercial perspective. Jessica? Hi, so I have the geologist question here, which is: how much does data collection matter, how important is it, in terms of mapping faults on the ground and the paleoseismology record? Some areas that we have seen today, like Southern California, have these really amazing records, really great datasets, but other areas seem to have a big data gap. How important is it to be doing that kind of base-level data collection, or is it essentially within the error of the models, or accounted for in some other way? When we were talking about large sources of uncertainty, that would have been fairly close to the top for me: we do not know where the faults are everywhere. If you assume that you know where the faults are, then you have to calculate everything else, and of course there is uncertainty in all the other components; but if you do not know where the faults are, if you are missing a fault, that is a huge source of uncertainty. So yes, it is really important to know where the faults are. California is in relatively good shape, but for many countries there is still a lot to be done in terms of knowing where the faults are. Yeah, thank you. Well, I think now it is time to move on to the general session. I would like to thank all the speakers for a great session, and we can carry on and answer some of the Q&A, from the audience or the panel members. Jeff, over to you. Yeah, thanks. The speakers from the first session should be joining the panel here momentarily, along with those from the second session, so we will have all of our speakers here. What I wanted to start with was actually a little bit of a different kind of question. We have people here on the panel from private companies, from a private foundation, from government agencies, and from academic institutions, and I would like to throw out a question about career opportunities and career paths. We have more than a hundred people online who have been watching these presentations, and some of them are early-career scientists or students, so I would like to hear, particularly from those outside the present academic environment, what are some of the career opportunities and paths that young scientists interested in applications of this kind of analysis might be able to take? Maybe I will throw this out to Marleen first, and then maybe Richard and Marco, or David and Kishor, but others can raise their hands and pipe up here. Yeah, we have a large group of model developers within RMS, in the California and the London offices, working on earthquake or any other type of natural catastrophe, on the science and on the engineering side.
We have opportunities; I will not say a lot, because it is not like we have hundreds and hundreds of people, we have dozens of people on the model development side. On my team we have people with geology, seismology, and geophysics backgrounds, and then there is the engineering side, which has a large group of people with masters and PhD degrees, usually from various technical universities. Unfortunately, this year we do not have an internship program, but it varies from year to year, so I always tell everyone to just contact me at the beginning of the calendar year and then we can see where it goes; this year it is not going anywhere, but hopefully next year we will get that back up, and we usually have a couple of people over the summer as interns. There are definitely opportunities within model development, where you would start as a junior modeler, mostly learning the trade, because it is not necessarily something you would know before you join. I think the job itself is a nice combination of learning new things, applying what you know, and really trying to be at the forefront of science and of publishing. So hopefully we will have a few opportunities this year; I usually post the jobs on various channels in our science societies, like SSA and SCEC. Thanks. If any of our other speakers have things to add about what kind of preparation people would want in order to follow up on these kinds of opportunities, just raise your hand, and then I will also go to our panelists here in the room. Morgan, I think you raised your hand. Yeah, at the USGS, for our work in Pasadena, we usually have paid student internships in the summer, where we bring in undergraduates; you can look for announcements on USAJOBS in the February-March time frame. Then, for graduate students who are graduating, we have Mendenhall postdoctoral opportunities, which are advertised in the fall; that is our primary way of bringing new researchers into the USGS, usually through Mendenhall postdocs. The other thing I want to mention is that you can enter from lots of different fields: I came in from physics, and you can come in from other earth sciences, engineering, or geology; there are lots of different areas from which you can come into seismic hazard analysis. But please, take statistics as an undergraduate or a graduate student; it is extremely important for almost any science-related field. Thanks. Okay, Jessica, I think you are next. Yeah, thanks, Jeff. So Jeff just asked about this, and we just got a bit of an answer, but I am going to push on it a bit more: as somebody teaching in a science department, apart from statistics, what training do you want to see from the earth science and geophysics realm for people looking to go into this? Any additional insight would be appreciated. I would say programming, on a variety of levels. At GEM, in addition to producing seismic hazard and risk models, we write the software to do all the computations, and then there is a huge amount of
data analysis tasks and prototyping. So being familiar with a variety of numerical programming techniques matters; we pretty much just stick with Python, so you do not need to be able to code up huge finite element models in Fortran or something like that. But having a diversity of programming skills, from basic data analysis, to statistical programming, to being able to contribute to the larger models and code bases that we have, and having some facility with writing functions and writing tests and making nice software, rather than just a mess of scripts sitting in a folder, that is really helpful, and it is something that is pretty far outside what most people get in a geoscience education. The people who come with it are generally self-taught, and there are big gaps, though with a field like PSHA there are always going to be gaps in people's educations. Jessica, can we go to... oh sorry, I was going to go to Marco next, Marco and then David. Yes, so in addition to what Richard was saying, I think it is very important to have knowledge of data analysis, because we deal with a variety of information, and it is very important to be able to handle it and, most of all, work with it in a quantitative sense. Of course, since seismic hazard and seismic risk are interdisciplinary, it is important to have a good background in one area but also knowledge of other disciplines: if you are a geologist or a geophysicist, a little knowledge of the components that are more on the engineering side is important, and vice versa. That is very important because, as we also learned during the seminar today, it is essential to work together, to make sure that the engineers are working with the seismologists and the seismologists are working with the geologists, because that is the only way we can improve what we are currently doing. So students have to be aware that they need to be able to communicate with people with different profiles, and be interested in learning many other things apart from the ones in their regular curricula, and be interested in learning new things in general; there is always time to learn new things. I personally think that I still have a lot to learn, and I enjoy learning, so that is the type of approach they should have. Thanks; let us go to David next. Yeah, I could just hear your question, and to follow up on Marco, I think the multidisciplinary part is really key. I am a geophysicist by training, but what we are looking for, and we have some pretty exciting problems to work on, is filling the gaps in the areas that we identified as the limiting factors. On the dataset side of things, there are all sorts of geospatial skills and mapping; some of the newest technologies give us things like building footprints for the entire planet, but we do not know what is in them, and we could actually use machine learning and AI approaches to try to classify those structures based on what we know about the inventories in different countries. So geospatial and engineering analysis is necessary to understand those structures. And then, a lot of the time, and Matt Gerstenberger has worked on this quite a bit, it is about communicating what we are doing in terms of losses,
uncertainty, and the very difficult post-earthquake environment in which we have estimated the fatalities; we need to communicate these, and so we need people who are trained in the science but who also have a sensitivity to what the user needs are, to try to get these out in the most useful form. And then, just lastly, we are really excited about expanding the importance of our products in the post-earthquake environment, following up later in time to be more useful for downstream decision-making. We are going to be trying to take in ground-truth information, SAR data, all sorts of observations that are typically not used in our initial model development, to update those models in a recursive way as a function of time; the expectation is that these things get better and more precise. So people who can cross these disciplines are going to be really fundamental. Great, so let me go to Donna next. Great, I had a question that actually follows up on what Jessica was asking at the end of the last section, and that is about the opportunities and challenges of including more constraints from paleoseismology in time-dependent earthquake models. It is one of our key constraints that addresses the shortcomings of the characteristic earthquake model, and it is a constraint that can help us with the multi-fault rupture scenarios that Morgan was describing, so I was just curious about your thoughts on that. The paleoseismic data do help if we have enough of them, but it is very difficult, with only three or four events, to put any constraint on the long-term rate of those large events, especially if you do not know how big they were, so that does not have a strong effect on the model unless there are lots and lots of events. The geologic slip rates that are collected actually do have a stronger effect on the final model: the total moment budget of the earthquakes goes into that, so it is super influential. The next level of influence, I would say, is connectivity, not necessarily for faults that are already well connected, because the hazard is not that dependent on exactly which earthquake it is, since we do not know that anyway, but for a fault that is otherwise very isolated and would have no way to rupture with another fault: if a connector could be discovered, that would be very important, because it would actually change the maximum magnitude of the rupture. So unfortunately, unless you have a lot of events, the paleoseismic data just do not impact the model that much.
Okay, Richard, you had a follow-up? Yeah, I would say that, from my perspective, the main thing that I do is build fault source models for, well, all over the world, and paleoseismology and neotectonic slip rates, geomorphic rates, are incredibly valuable. With one or two paleoseismic events, as Morgan said, you are not going to be able to really resolve questions like what kind of recurrence distribution to use. But for so much of the world, virtually none, maybe one to five percent, of the faults have even been investigated, and when people do investigate them, they obviously like to go to the fastest structures, the ones that are important for understanding regional geodynamics. What you find is that you will have an area that has, say, seven faults, and there are 27 studies on one of the faults and no one has touched the others. If you are really trying to understand the geodynamics, that might be fine; what really happens is that there is a big argument, and you want to write the paper that resolves the big argument. From a hazard perspective, resolving that argument is great, and getting a better rate is great, but not ignoring the very obvious but unstudied structures is really important for helping us understand what is going on in the system. There are also things like understanding what kinds of multi-fault ruptures have occurred in the past, so trenching on splay faults and on potential connectors, and mapping distributed deformation zones between principal faults; I think that is really important for being able to beat down the uncertainties in areas that are poorly studied. Okay, thanks. Let us go to Rengin. Thanks. I guess this is a follow-on to Marco's comment about data. When we talk about education, about having the young generation learn statistics, programming, the tools you are using, geospatial analysis, those are all great, but I think the main backbone of the whole work is data, like seismic data. What are the accuracy levels of the seismic data that go into PSHA, either time-dependent or time-independent? What are the location issues, where was the catalog collected from, what are the magnitude relationships that contribute to the end results? At the educational level, those are things to learn in parallel, not just the code itself. It is really important how the logic trees are put together, whether the data are captured correctly, the recurrence intervals, the completeness. I know this is somewhat generic to most of you here, but for people who are learning how to do this, it is crucial. Thanks. Morgan? Yeah, I just wanted to follow up on that comment. I agree, and I think even beyond general statistical knowledge, it is about learning to think like a scientist, in the sense that the easiest person to fool is yourself. It is not just learning the basics of hypothesis testing, here is how you compute a p-value and all that, but how to run a research program, or study a new phenomenon where you do not know the answer, in a way that is faithful to,
Thanks. Morgan?

Yeah, I just wanted to follow up on that comment. I agree, and I think it goes even beyond general statistical knowledge to the way you think like a scientist — the easiest person to fool is yourself. It's not just learning the basics of hypothesis testing, here's how you compute a p-value and all that, but how to run a research program, to study a new phenomenon where you don't know the answer, in a way that's faithful not just to the uncertainties of the problem but to the question: am I even thinking about it right? Am I fooling myself, trying to come up with a good story for this exciting research problem, or am I actually being honest about what I know and what I don't know? That's something you start learning as a student, and we're all still working on it, because the science is hard.

Yeah, that's a great point. It reminds me of the famous Richard Feynman line — the easiest person to fool is yourself. Once you stop fooling yourself, that's the first step to learning anything. Let's go to Torsten next.

I guess this is a question for all panelists, perhaps inspired by Matt's description of the government responding to mitigation in an impressively effective way, but also by Katsuichiro Goda-san's illustration that, for his study region, taking the tsunami into account shifted the loss curves dramatically. For that particular setting — a flat plain with pretty good building codes — it was fairly straightforward to say that a tsunami wall would, theoretically, have helped. But in terms of going from those explorations to national and international return-on-investment considerations for mitigation: where are we in using physics- or statistics-based models to say where the best bang for the buck is in reducing loss of life and loss of infrastructure?

I would say that at the global scale there is still a lot to do, because when you enter into these arguments you immediately filter out ninety percent of the countries in the world. Unfortunately, very few countries at the moment have a national seismic risk model, as opposed to just a seismic hazard model, and for decisions like the ones you mention it is essential to think in terms of risk, not hazard. Many of the components that seem important when you calculate the hazard become less relevant when you shift from hazard to risk. Since risk is closer to application, a strong link between hazard and risk is essential, and from my experience we are not there yet. There are many efforts, and certainly countries trying to move in that direction, but there is still a long path ahead.
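That hazard-to-risk link can be made concrete with a minimal sketch — all numbers hypothetical — of how a site hazard curve and a vulnerability function combine into an average annual loss, the basic quantity behind the loss curves mentioned above.

```python
import numpy as np

# Hypothetical site hazard curve: annual rate of exceeding each PGA level (g)
pga  = np.array([0.1, 0.2, 0.4, 0.6, 0.8])
rate = np.array([1e-1, 3e-2, 6e-3, 1.5e-3, 4e-4])

# Hypothetical vulnerability: mean damage ratio if that shaking level occurs
mdr  = np.array([0.01, 0.05, 0.20, 0.45, 0.70])

# Annual occurrence rate within each PGA bin = drop in exceedance rate
occ = -np.diff(np.append(rate, 0.0))

# Average annual loss, as a fraction of replacement value
aal = float(np.sum(occ * mdr))
print(f"AAL ~ {aal:.4f} of replacement value per year")
```

The point of the sketch is that the hazard curve alone is not the decision variable: the same hazard convolved with a different vulnerability function gives a very different loss, which is why a national risk model, not just a hazard model, is what mitigation decisions need.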
Okay, thanks. I'd like to go back to a question that was put in the Q&A. The specific question was about the US-Canada cross-border case, and that was answered in the Q&A, so people can see it there, but I want to ask it a little more broadly. When we look at estimates — certainly time-dependent, but even time-independent ones — done for different regions or different countries, how well do those estimates blend at the borders?

Well, it varies a lot, of course, because every country takes different strategies in modeling the hazard and makes different decisions in accounting for the epistemic uncertainties, and that clearly has an impact on the pattern of expected shaking that you calculate. But if you look at the global hazard map, which is the result of combining 31 models and their corresponding maps, it's true that there are differences, but the overall pattern is relatively consistent between models. Clearly there is a lot to do to improve. For example, something we discussed years ago — back when it was still possible to travel without many problems, during a meeting involving representatives from the USGS, GNS Science, and Japan — was to develop benchmarks for testing the ways in which we build the components of a hazard model. That could help in homogenizing, and trying to standardize, at least the fundamental methodologies we use, which would move us toward models that are more and more consistent. But as you said, at the moment we still have differences, and not just in the models but also in the information used to build them. Go on, sorry.

That's great. I'll just add that Marco covered the hazard side; when you think about loss and risk, those differences are even larger, because the way buildings have historically been designed and built depends on each country's history and its construction and design practice. The differences get even larger when you get to the loss and risk side.

Okay, thanks. Next let me ask another question that came from the audience. This one comes from Gerald Bawden at NASA, and he says that NASA will be producing a PS-InSAR deformation map of North America — the goal is to cover all of North America using Sentinel-1 and NISAR data. This can be combined with GNSS data from the EarthScope Network of the Americas, and will provide very detailed space-geodetic measurements of spatial and temporal changes in the land surface. David had mentioned that InSAR data can be included in his approach, but in general, how can these sorts of datasets best be used to support and be integrated into earthquake risk assessment?

Let me just clarify that the use we were talking about is for post-event situational awareness with change detection, and that's a different beast, so I'll let someone else answer.

I've been working with Tim Wright at Leeds on incorporating InSAR — they produce, I think, something similar to the PS-InSAR strain fields discussed here: very high-resolution, broad-scale deformation maps — into geologic-geodetic block models in order to get fault slip rates. From the fault slip rates you then make hazard models, and that gets transferred to risk, so this is at the very beginning of the sausage-making process. But I think it's still quite valuable, as long as you have a way of incorporating geodesy into an understanding of earthquake rates. I've been using block models because I can cover a lot of ground and still have pretty high accuracy where the faults are. One could also use methods like those Peter Bird has been working on, where you take a strain map and assume every pixel is a seismic source, but with those it gets really difficult to properly account for strain localization on geologic faults — particularly when you get into all the things Morgan was talking about, with multi-fault ruptures, where you really want the slip rates on the faults to guide where you see those ruptures, rather than just having a raster where a big earthquake lands in one cell and maybe not in the neighboring one. So there needs to be some framework for incorporating the geodesy into a fault-based model. Really understanding off-fault deformation through InSAR, combined with the fault model, would be awesome.
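One common bridge from a geodetic strain map to earthquake rates is a Kostrov-style moment budget. A minimal sketch for a single grid cell follows, with invented parameter values; note that formulation choices (the factor of two, which strain-rate invariant to use) vary across the literature.

```python
import numpy as np

# Kostrov-style moment budget for one geodetic grid cell (values made up)
mu    = 3.0e10        # shear modulus, Pa
H     = 15e3          # seismogenic thickness, m
A     = 50e3 * 50e3   # cell area, m^2
e_dot = 1e-7          # max shear strain rate, 1/yr (~100 nanostrain/yr)

m0_rate = 2.0 * mu * H * A * e_dot        # scalar moment rate, N*m per year

# Equivalent repeat time if the budget were released only in Mw 7.0 events
m0_mw7 = 10 ** (1.5 * 7.0 + 9.05)         # Hanks & Kanamori moment, N*m
print(f"Moment rate {m0_rate:.2e} N*m/yr -> one Mw 7.0 every "
      f"{m0_mw7 / m0_rate:.0f} yr")
```

This is the per-pixel logic of the strain-map approach; the block-model alternative described above instead concentrates that budget onto mapped faults, which is what lets the slip rates guide where the ruptures go.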
Thanks. Morgan mentioned earlier the importance of the moment budget, and so on. In places that don't have as much ground-based data, if we had a very thorough strain map, how much does that improve our ability to make an accurate assessment?

Dramatically. For example — since apparently all of North America is being covered — I've been working on models for both Canada and Mexico, going through the whole US as well. The US part is covered pretty well, but once you get north of the US-Canada border the density of geodetic coverage really drops, and there are some pretty big arguments over whether there are even active faults on the BC mainland, other than up in the Mackenzies. So InSAR in places like the interior — the Fraser River Valley and Thompson area of British Columbia — would dramatically increase our ability to decide whether we even want to consider the giant Mesozoic strike-slip faults that run through there as potential earthquake sources.

Okay, thanks. I'm going to take one more question from the Q&A, but generalize it a little. Is PSHA already old? Is this the framework moving forward, or do we have to look for a different framework at some point to better bring in these time-dependent problems? Or is it more a matter of tweaking, refining, and adding enhancements to what we already have? Morgan first.

Yeah, after going through the UCERF process, I do wonder if there is a better way — if it isn't something like simulators. Just to include everything we know has made what started out as a simple model blossom into something extremely complicated. To put in all the multi-fault ruptures we need, we needed hundreds of thousands of ruptures, and then we had an underdetermined inverse problem that we had to regularize in order to get some sort of sensible solution, because there is no way to constrain the rates of hundreds of thousands of ruptures individually. We can get magnitude distributions that are somewhat stable, at least with this method, but there's all this physics we're leaving out that we're having to force back in with more and more statistical rules. So yes, I'm interested to see whether one day simulators could take over this role. If they didn't have so many free parameters, and if there were a way to actually extract something sensible from them, I would be very excited about it.
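The underdetermined inversion described here can be caricatured in a few lines — a toy sketch, nothing like the real UCERF machinery, with a made-up rupture participation matrix and target rates — just to show why regularization is unavoidable when ruptures outnumber constraints.

```python
import numpy as np
from scipy.optimize import nnls

# Toy "grand inversion": find nonnegative rupture rates r such that
# fault slip-rate constraints G r = d are matched. With more ruptures
# (columns) than constraints (rows), Tikhonov damping picks a solution.
G = np.array([[1.0, 1.0, 0.0],     # which ruptures touch fault 1
              [0.0, 1.0, 1.0]])    # which ruptures touch fault 2
d = np.array([5.0, 3.0])           # slip-rate-derived targets (made up)

lam = 0.1                          # regularization weight (tunable)
G_aug = np.vstack([G, lam * np.eye(3)])
d_aug = np.concatenate([d, np.zeros(3)])

rates, resid = nnls(G_aug, d_aug)  # nonnegative least squares
print(rates)
```

Scale the three columns up to hundreds of thousands and the dependence on the regularization choices — the "statistical rules" standing in for missing physics — becomes the central difficulty.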
Dave?

Yeah — Diego knows I always look for the elephant in the room, and there's another beast here that we're not touching: earthquake physics. Even if you told me the exact history of the faults in California, I wouldn't know about the next earthquake, because rupture dynamics can control the outcome. We've got places where earthquakes rupture into regions that have essentially no stress, and places that are stressed but don't rupture. Parkfield had those enormous high-stress-drop events right in the preparation zone; we know it was ready to go, and it didn't rupture. So we have a very big challenge in the time-dependent component that requires bringing rupture physics into the equation, adding a whole set of new variables to Morgan's already growing list.

But if we could somehow have a physics-based simulator — RSQSim gets some of it — that actually produced all of the statistical laws we're now putting in by hand, so that they came out emergent from the physics, that would be great, because we would have less hand-tuning to get sensible results. That's my goal: less tuning, less forcing the models to do the things we know they should do. Ideally we put in what we know, and because we have put in the physics, the models would match all of the scaling laws we know and the behavior we see.

Thanks. I'll go to Marco, then Diego, and then I'm going to have Jessica ask our last new question. So, Marco.

To go back to the question of whether PSHA is dead: I don't think it's dead, actually. It depends on the way we think about PSHA. If I take PSHA as a framework for integrating various kinds of information, then everything we've been discussing can enter into the construction of more advanced models for performing PSHA analysis. What you were saying, Morgan — the output of a simulator I see exactly as a stochastic event set, and that can be integrated into a PSHA framework as well.

Thanks. Diego?

Thanks, Jeff. I guess what I take away from this meeting is that, yes, time dependence is hard, but there are good reasons to be hopeful that progress is being made. I want to ask a question related to what we were just discussing. It's something Kishor said when we met before this workshop, when all the speakers organized themselves: that engineers don't want the hazard to change with time, because then you have to change the building code with time. And there are other applications where a changing hazard is difficult. In Oklahoma, sure, you made a one-year hazard map, which has now expired, but the hazard can come back at some time in the future. So how do we expect downstream applications to deal with this time dependence?

Well, there is a sector that definitely has a lot of appetite for time dependence, and it's the sector represented by Arlene: the insurance and reinsurance industry is very interested in time-dependent models. I work with a variety of communities, and the insurance sector is the one asking more and more for time-dependent models, far more than the engineering sector. For critical facilities, as we were trying to explain, time dependence is not that relevant, because you look at periods of time over which time dependence essentially fades; but in the insurance sector, where they look at policy renewals on the order of one year or a few years, time dependence is essential.
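The kind of calculation behind those renewal-horizon numbers can be sketched with a renewal model; this is a minimal illustration assuming a lognormal recurrence distribution with invented parameters, compared against the time-independent Poisson case.

```python
import numpy as np
from scipy.stats import lognorm

# Hypothetical renewal model: lognormal recurrence with ~150 yr median
mu, sigma = np.log(150.0), 0.5
dist = lognorm(s=sigma, scale=np.exp(mu))

t_elapsed = 120.0   # years since the last event (made up)
horizon   = 1.0     # e.g., one insurance policy year

# Conditional probability of rupture in the next `horizon` years,
# given that `t_elapsed` years have already passed without one
p_cond = (dist.cdf(t_elapsed + horizon) - dist.cdf(t_elapsed)) / dist.sf(t_elapsed)

# Time-independent (Poisson) comparison with the same mean recurrence
p_pois = 1.0 - np.exp(-horizon / 150.0)
print(f"renewal: {p_cond:.4f}  vs  Poisson: {p_pois:.4f}")
```

Over a one-year horizon the two answers can differ substantially, which is exactly why the insurance sector cares; over the multi-decade horizons relevant to critical facilities, the difference largely washes out.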
Yeah — and that's also why, and I agree with Morgan about this, moving to a more simulation-based setting is really important, because as far as we can tell right now that is the main path to incorporating time dependence appropriately. And I agree: it's not necessarily about the level of shaking, it's about when that level of shaking is going to arrive.

Kishor, I think you were about to say something.

I would just say that, yes, the insurance sector is very much interested in this problem, Marco, but over the long history of the earthquake problem — as we have all learned, looking at earthquakes and looking at damage — we have broadly told the community, the consumers of our research, that PSHA helps you design buildings for the future; that you should think about earthquakes as this kind of beast that produces strong shaking and can bring buildings down. But the time-dependent thinking has evolved over the last 10 to 15 years, predominantly, I would say, because of the Canterbury earthquake sequence, which really surprised a lot of people and professionals in the community about the impact of a mainshock and its aftershocks, and how that could influence the downstream users of these products. What I would say is that, collectively, we need to educate people again: earthquakes don't know about this dependent-versus-independent distinction; they just happen. It's a hazard you need to deal with, and you need to communicate that to the users — and by users I also mean the general public. There is a large communication effort whenever a big earthquake happens: people talk about foreshocks, then call something the mainshock, then aftershocks, and, as Morgan was suggesting, that's pretty confusing to ordinary people — what exactly does it mean? The engineering community, too, has relied on seismologists and geophysicists to give us the best available science on how to think about earthquakes and shaking so engineers can design their buildings. That education needs to happen at an accelerated pace if we really want to think about this time-dependent issue in a holistic sense.

Okay. We do have a number of other questions in the Q&A that we're unfortunately not going to be able to get to, but we have one final question topic, so I'll turn this over to Jessica.

Thanks, Jeff. I had a question for everyone, particularly Marco and Richard, about funding sources for this type of work. I understand that some of this comes from insurance and other users of these products, but in thinking about how we generate science that is open source and accessible for many people to use, what are the other sources to fund this type of work, and can you comment on how you see this moving forward from that perspective?
I think the open-source component is really important, and the biggest way to lower costs is simply to make things public — both public data and public software, and not just source-available software that you're not allowed to modify. Having software licenses and data licenses, as well as practices, that allow people to collaborate, share, and build on what's out there matters. OpenSHA, which the USGS uses for California, as well as their software for the rest of the US, is open source and easy to build on. The software GEM builds is also open source, and we have users all over the world who are able to collaborate and work on things much more cheaply than they could otherwise, and to add to them, so that everyone receives the benefits of the work individuals put in.

In terms of funding sources for pure research, that is still very much government- and institutionally driven. GEM is supported as a public-private consortium: we have many government sponsors from around the world, as well as private sponsors, and most of the other panelists are involved with GEM in some way. But it can be very difficult for GEM to work with groups in the US; the National Science Foundation hasn't been interested in providing us with funds because GEM is based in Italy. Those institutional and funding barriers on the research side — who can receive funds, and in which country — can inhibit international, multidisciplinary research. I don't know what other pots of money might be available that we could tap into.

Other thoughts from our speakers? All right, we're almost at the very end of our time, so for the final item on the agenda I'll pass this on to Torsten, who got the job of summarizing our afternoon in four minutes.

We've heard a remarkable and diverse set of topics on the time dependence of seismic hazard, and I'd like to thank all the participants and speakers for their contributions. It's clear that some of the questions still challenging us are old ones: what is the degree of validity of the seismic cycle, how do faults link up, what do we do with the faults we don't know about? What's new, I think, is that we're seeing the insights from those long-standing questions really make their way into our probabilistic descriptions of hazard — recognizing that time dependence is a universal problem, that there is really no steady state on any of the time scales considered, and that one fault might be characteristic at one time and clustered at another. It's certainly not your grandma's seismic hazard assessment. It's clear that the problem is a global one that crosses boundaries in every sense of the word; that the modeling cannot be purely statistical or purely physical; that it cannot be a national effort but must be an international one; and that it cannot be a geophysical or geological effort alone — we've heard about implications ranging from rock mechanics
to financial products. We need to address this issue, with its global ramifications, in a collaborative approach crossing from academia to NGOs to the private sector to government. The opportunities in bridging these traditional boundaries are clear: it's not seismic hazard assessment versus mitigation — those two are clearly linked — and there are real opportunities in time dependence because the system is coupled, and the physical models we use to improve the statistical description have to work across time scales. Understanding the present-day state of a fault zone, in terms of its implications for hazard, requires understanding the physics of how a fault works, and those physics have to be the same ones governing the long-term evolution, so there are real opportunities to make headway here. There are also opportunities for the solid-earth community in enhancing the training of students so that they can make better contributions in the future — enhancing not just their understanding of statistics but also of programming and data analysis — and in building the models that can factor in the amazing new constraints we have on the state of the system, such as geodetic observations of vertical and horizontal motions, and in making headway on that integration. So these are exciting times, the challenges are clear, and some of the answers will hopefully lead to a better understanding of hazard for the global community. I'd like to thank all the speakers again for their contributions. The material from this session will be available on the National Academies website, and that's it for today — thanks, everybody.