Well, my name is Mark Bain and I'm going to be moderating our third panel, which is on scientific opportunities and challenges in studying precursory phenomena. It'll be structured a bit like the first panel this morning. We'll have two speakers, Morgan Page and Laura Wallace, who will be speaking remotely. We'll have 15 minutes for the talks, five minutes of questions after each individual talk, and then 10 minutes for joint discussion at the end. So without further ado, Morgan Page is going to be speaking about the 2016 Bombay Beach swarm and its effect on the San Andreas Fault. Okay, so I'm going to start just by talking about the basics of earthquake forecasting from the perspective of a statistical seismologist. We have a number of very well-known scaling laws. Of course the most famous is the Gutenberg-Richter magnitude-frequency relationship. This isn't working. And we have the Omori law, which tells us how aftershocks decay in time following the main shock. We have Utsu scaling, which says that bigger earthquakes on average trigger more aftershocks; each unit increase in main shock magnitude gives about 10 times as many aftershocks triggered. And we have how aftershock rates decay with distance from the main shock, which, like all these other things, follows a power law. You can combine all of these scaling laws together, and these are used in models like the Epidemic-Type Aftershock Sequence model, or ETAS. This is also used in STEP. These are both types of cascade models, which Emily talked a bit about this morning. So cascade models like ETAS predict a number of things that match observed seismicity catalogs very well. One is that foreshock and aftershock rates follow the same trend. Here you can see basically the numbers of aftershocks to the left of the dotted line and foreshocks to the right of the dotted line, where in this case the aftershock is actually bigger than the quote-unquote main shock which triggered it. 
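The way those scaling laws combine in a cascade model can be sketched in a few lines (a toy ETAS-style rate; every parameter value below is an illustrative placeholder, not a value fitted to any catalog):

```python
def etas_rate(t, events, mu=0.5, k=0.05, alpha=1.0, c=0.01, p=1.1, m_ref=3.0):
    """Toy ETAS-style conditional intensity (events/day) at time t (days).

    Each past event (ti, mi) contributes Utsu productivity, the
    10**(alpha * (mi - m_ref)) term giving ~10x more aftershocks per unit
    of magnitude, times an Omori power-law decay (t - ti + c)**(-p),
    on top of a constant background rate mu.
    """
    rate = mu
    for ti, mi in events:
        if ti < t:
            rate += k * 10 ** (alpha * (mi - m_ref)) * (t - ti + c) ** (-p)
    return rate

# Utsu scaling: one day out, a magnitude 5 has triggered ~10x the rate
# above background that a magnitude 4 has
r4 = etas_rate(1.0, [(0.0, 4.0)])
r5 = etas_rate(1.0, [(0.0, 5.0)])
```

With alpha roughly equal to the Gutenberg-Richter b-value, this is the self-similar regime being described, in which small events tell you about the rate of future earthquakes but not about their size.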
And you can see it all follows the same trend; if I hadn't actually shown you the label of the x-axis, you wouldn't know where to put the dotted line. So ETAS matches this. Also, there is not a significant correlation between the number of foreshocks and the size of the impending main shock. The solid lines here show rates of foreshocks, and the different shadings are for different main shock magnitudes; you can see that they all overlap each other. Whereas aftershocks, the dotted lines, are a different story: we know that the bigger the main shock, the more aftershocks you have, so that's a very clear correlation. ETAS models can also produce what's called inverse Omori acceleration. If you stack foreshock frequencies for several main shocks, you get an acceleration coming up to the time of the main shock, and this is actually a consequence of the Omori law: in the same way that most aftershocks occur immediately after the main shock, most foreshocks occur immediately prior to the main shock, with a smaller probability of occurring farther back in time before the main shock. So you get this acceleration when you stack. All of these things suggest that foreshocks are not predictive of main shock size. They're very predictive of what the earthquake rate will be, and of course when you have higher earthquake rates you have a bigger chance of a big earthquake, but it doesn't mean that any particular earthquake is predicted to be large versus small. And this makes sense if you look at moment rate functions, like this famous figure from Meier's study a few years ago: the moment rate functions of large earthquakes, magnitude 7 up to magnitude 8.5, all look the same at the very beginning, with similar slopes, and they're also self-similar, so they start off the same. 
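The inverse Omori point can be made with a toy stacking experiment: draw foreshock-to-mainshock delays from an ordinary (direct) Omori law, then look at the stacked counts as a function of time before the mainshock. The c, p, and window values below are made up for illustration:

```python
import random

def omori_delay(rng, c=0.01, p=1.2, t_max=100.0):
    """Inverse-CDF sample of a delay (days) with density ~ (t + c)**(-p)
    on [0, t_max]; illustrative parameters only."""
    a, b = c ** (1 - p), (t_max + c) ** (1 - p)
    u = rng.random()
    return (a - u * (a - b)) ** (1 / (1 - p)) - c

# Treat each delay as the gap between a foreshock and the larger event it
# triggered. Stacking relative to the mainshock time, counts pile up just
# before t = 0: the inverse Omori acceleration, with no extra physics.
rng = random.Random(0)
delays = [omori_delay(rng) for _ in range(50_000)]
within_1_day = sum(1 for d in delays if d < 1)       # last day before mainshock
days_1_to_2 = sum(1 for d in delays if 1 <= d < 2)   # the day before that
```

Stacked this way, the count in the final day dwarfs the count one day earlier, which is exactly the acceleration seen in real stacked foreshock sequences.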
This is showing an ETAS prediction for the instantaneous rate of earthquakes in time, in the blue line, compared to when magnitude 6 earthquakes actually occurred in this synthetic catalog, and the average rate is shown in green. In a model like ETAS you get wide fluctuations in seismicity rates that are ordered with magnitude, but they're short-lived. You get big spikes after big earthquakes that decay according to Omori decay. The other thing I want you to notice is that in quiet times the chance of a big earthquake, which is proportional to the rate of all earthquakes, is actually lower than the average rate. People sometimes use the long-term rate interchangeably with the background rate, and it's not really the same thing: in a quiet period there is actually a lower chance of having a big earthquake in a model like ETAS. So now to talk about the Bombay Beach swarm. In 2016 this occurred on the southern end of the San Andreas Fault in the Salton Sea, in a location very close to where previous swarms had happened in 2001 and 2009. This swarm started off with a few smaller earthquakes and then had a magnitude 4.3. About eight hours later there was a doublet, a 4.3 and a 4.1, and a lot of smaller earthquakes; it died off most of the way within a few days, and within seven to ten days it was over. So at the USGS, CEPEC was convened and was asked to put numbers on how likely it was that this swarm, being so close to the San Andreas Fault, could trigger a larger event, and the USGS was interested in this as well. The two main uncertainties we had to deal with in putting numbers on this were how long the swarm would last and what the appropriate magnitude distribution was to use. The first, how long the swarm lasts, has a big effect on your model. Here I'm showing an ETAS model started after those three magnitude 4 earthquakes. 
So fitting the seismicity that has occurred so far and then extrapolating out to larger time scales into the future, using the assumption that the swarm will continue indefinitely at this new background rate. And the bottom model is the same thing at the beginning, but then we assume the background rate of earthquakes immediately decays to its pre-swarm level. So these are two end-members, and you can see, if you go out even 10 days into the future, there's a very big difference in the chance of a big earthquake. We didn't really know at the time which of these end-members was right, so we had a range. Following the swarm, work was done to quantify how long swarms last in the area; Andrea Llenos and Nicholas van der Elst worked on this, and they found that you can fit swarm durations in the Salton Trough with a Poisson model, which means that there's a constant probability on any given day that the swarm will end. It's sort of memoryless: if it's been 10 days and the swarm is still going, it's just as likely to end on that day as on any other day. With about a 15% chance of terminating each day, that gives an average swarm length in the Salton Trough of about 7 days. So we can use this information for future swarms to better get around this problem of not knowing when the swarm will end. Our other big uncertainty is what the appropriate magnitude distribution is to use, given that we're so close to this locked and loaded fault, the San Andreas. Going back to the 80s, people pointed out that the rate of large earthquakes on the south-central and southern portion of the San Andreas Fault is fairly high compared to the rate of seismicity you see if you just look around the fault at the small earthquakes that we get in the instrumental catalog. 
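The memoryless duration model described above, a constant chance that the swarm shuts off each day, is easy to simulate; with the quoted 15% daily termination probability, the mean duration comes out near 1/0.15, about 7 days. A sketch, with the 15% taken from the talk and everything else illustrative:

```python
import random

def simulate_swarm_durations(p_end=0.15, n=100_000, seed=1):
    """Draw swarm durations (whole days) from a memoryless model: each day
    the swarm terminates with fixed probability p_end, i.e. a geometric
    distribution with mean 1/p_end."""
    rng = random.Random(seed)
    durations = []
    for _ in range(n):
        days = 1
        while rng.random() > p_end:
            days += 1
        durations.append(days)
    return durations

durations = simulate_swarm_durations()
mean_days = sum(durations) / len(durations)  # near 1 / 0.15 ~ 6.7 days

# Memorylessness: a swarm that has already survived 5 days is still only
# ~15% likely to end on the very next day
survivors = [d for d in durations if d > 5]
end_next_day = sum(1 for d in survivors if d == 6) / len(survivors)
```
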
In our model, and this is showing predictions from the UCERF3 model, there is actually a characteristic distribution of earthquakes assumed on the San Andreas Fault, which means that instead of a pure Gutenberg-Richter relation, the magnitude distribution on this red fault here comes down like the Gutenberg-Richter relation at first and then has a little bump up around magnitude 7: a higher rate of big earthquakes than a linear extrapolation of small earthquakes would predict at the higher magnitudes. Interestingly, UCERF3 also actually has the San Jacinto Fault as anti-characteristic, for the opposite reason: there are many small earthquakes near the San Jacinto Fault compared to both its long-term slip rate and its rate of large earthquakes. Because of this, UCERF3 predicts that following earthquakes of similar size, in the magnitude 4 range, near Bombay Beach on the San Andreas versus near the San Jacinto, the chance of triggering a big earthquake differs by an order of magnitude. So it's a very big uncertainty. I've also looked at this for a broader region along the San Andreas Fault. If you look at different catalogs in different regions, the number of earthquakes per year versus magnitude in catalogs covering different time spans, you can see that there are disagreements. The modern instrumental catalog has a rate that's actually above the early instrumental catalog, and a rate that's slightly lower than the historical catalog, although there are certainly uncertainties there, and it's also lower than the paleoseismic earthquake rates shown in green. One thing to note is that if you just look at the instrumental catalog alone, we do not see that earthquakes near major faults are more likely to trigger larger earthquakes themselves, or even more aftershocks. 
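The order-of-magnitude spread described above is really just arithmetic on the magnitude-frequency distribution. A sketch (the a-value, b-value, and the factor-of-10 characteristic bump below are placeholders, not UCERF3's actual numbers):

```python
def gr_rate(m, a=4.0, b=1.0):
    """Cumulative Gutenberg-Richter rate N(>= m) = 10**(a - b*m) per year."""
    return 10 ** (a - b * m)

# Linear extrapolation of small-earthquake rates up to M >= 7:
gr_m7 = gr_rate(7.0)

# A characteristic fault bumps the extrapolated M7+ rate up by some factor
# (San Andreas-style in UCERF3); an anti-characteristic fault pushes it
# down (San Jacinto-style). The ratio of characteristic to plain-GR rates
# is what drives the difference in triggering probability.
characteristic_m7 = gr_m7 * 10.0
anti_characteristic_m7 = gr_m7 / 10.0
```
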
This is looking at data from 2004 to 2018, sorting earthquakes by their distance from the nearest fault in the SCEC Community Fault Model, and basically you don't see any coherent signal here. If an earthquake is very close to one of the major faults in the CFM, it doesn't appear to be more likely to trigger something big; it's not more worrisome or more likely to be a foreshock. That being said, this is the instrumental catalog and it only goes up to about magnitude 6 or 7, so if there's a scaling break, it's happening above that. So the USGS put out a forecast following the first three magnitude 4 events that happened in the swarm, and we gave this range of probabilities: a 0.03 percent to 1 percent chance of a magnitude 7 earthquake in the next seven days, triggered by the swarm that was going on. This range, a factor of 30, is pretty large, and it comes from those two main uncertainties: what is the right magnitude distribution, and how long will the swarm last? This is showing basically the response during this forecasting period, plotted against an ETAS prediction of what the rate of earthquakes was during the swarm. Here's that first magnitude 4. After the first magnitude 4 we began modeling, and we began running a bunch of simulations on supercomputers. That modeling period is this gray area here, and it wasn't until the next morning, after there were now three magnitude 4 events, that CEPEC convened. The USGS began drafting its advisory, and the first advisories were actually released out here. By that point you can see that the aftershock rate has already decayed quite a bit, and that means the chance of a magnitude 7 earthquake has already decayed quite a bit from its peak. So this really highlights the need to get these advisories out to emergency managers and the public very quickly, because that Omori decay is what we're battling. 
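For the forecast numbers themselves, the duration and magnitude-distribution end-members translate into expected counts of M7+ events, and a Poisson assumption turns those into probabilities. The expected counts below are invented round numbers chosen only to reproduce the quoted 0.03%-to-1% spread, not the actual USGS inputs:

```python
import math

def prob_at_least_one(expected_count):
    """Poisson chance of at least one event, given its expected count."""
    return 1.0 - math.exp(-expected_count)

# Low end-member: swarm dies off quickly + Gutenberg-Richter extrapolation.
# High end-member: swarm persists + characteristic bump raising the M7+ rate.
low = prob_at_least_one(3e-4)    # ~0.03% over the 7-day window
high = prob_at_least_one(1e-2)   # ~1% over the 7-day window
spread = high / low              # the factor-of-~30 range in the advisory
```
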
The only action I know of that was taken by emergency managers because of the advisory was actually the closing of some family recreation areas; they were very worried about a large San Andreas event, which would be directed right toward them. That happened all the way out here, after a lot of the aftershock probability had decayed back. Here are some plots showing the UCERF3 model prediction for the Bombay Beach swarm. This is an average of 100,000 simulations. This is showing the average over all the simulations of where small earthquakes happened, and you can see the fault lighting up: in some fraction of the simulations, large earthquakes occurred on the San Andreas Fault, and then those triggered small aftershocks. And this is showing a coloring of the faults in the UCERF3 model, colored by the rate at which they participated in a large rupture in one of these simulations. Some of the faults are blue because they only ruptured in one of the 100,000 simulations, so that's kind of in the noise, but in other parts of the model that are well resolved you can see the decay in the probability of earthquakes as they get farther and farther away, up toward the Big Bend. And this shows how the UCERF3 forecast for the chance of a large earthquake decays to the long-term rate in time. The black line here is the time-independent model for California; blue takes into account elastic rebound, and it's higher than the black because it's basically taking into account the fact that it's been 300 years since the last large earthquake on this part of the fault, which raises the probabilities; and the purple shows the additional probability gain that you get from including the swarm. So to conclude: during seismic swarms the chance of a large earthquake changes by orders of magnitude, but this can be a short-lived change, hence the need to get these probabilities out quickly if they're going to be useful at all. 
Proximity to major faults is a concern; that's why CEPEC was convened for these particular events and not for other magnitude 4s in southern California. But it's still not proven that proximity to major faults elevates the foreshock probability. I have one question, and it kind of relates back to something we haven't heard a lot about today, but you commented on the Poisson distribution assumption. What does that say about the actual physics behind what's governing the swarm? We talked briefly about rate-and-state friction earlier on, but this is a model that clearly has no physics in it at all. Are there ways that we could improve how we estimate the duration of swarms if we actually start to take into account the physics behind how the ruptures are happening? It's very strange. I don't have a good understanding of what physical process would lead to Poisson durations; I would expect most things you could think of would have a preferred duration or something. That paper also looks at swarms elsewhere, and I know this is a geothermal area, so maybe those swarms are different, but they're also well fit by a Poisson model, with a different mean duration, but still memoryless. I'm really not sure, but I'd love to hear your ideas. 
Yeah, well, that was kind of my next question, related to that, but on geothermal and fluid-involved processes. Have you looked at swarms in areas where swarms are typical behavior, and where the physical explanation for the Poisson distribution, or a more random distribution like that, might be fluid-involved? Have you looked at any other swarms in a similar way, associated with magma or potential magma intrusion, or with geothermal fluid-pumping activities, to see if the behavior is any different? Sure, we don't know the answer, but seeing whether this type of swarm is different from one that we think has fluid involved with it, there might be a message in that as well. Yeah, I haven't actually personally modeled swarms. The main work I know of about that is the Llenos and van der Elst paper, which is just looking at two areas in California. I do know that whether swarms are triggered by some natural process or they're induced, you can model them in a similar way, which is basically just changing the ETAS background parameter; so they're modeled in a similar way, but I don't know if swarms elsewhere have the same memoryless behavior that we see here. They don't follow an Omori law and they're not as well behaved: there's some change in the background rate of earthquakes, and then you can model them well by adding an Omori law on top of that, because you can still see the Omori spikes in the swarms. So your conclusion point is that we can't conclude that proximity raises the probability of rupture around the fault, and yet the simulations showed faults lighting up? Yeah, so I think when most seismologists see an earthquake near a large fault, our blood pressure rises a little more than when we see earthquakes elsewhere, so I think most of us have this sort of prior that things are characteristic. 
So those simulations assume that the long-term rate of small earthquakes near faults like the San Andreas matches the historical catalog. If you think instead that even these faults are Gutenberg-Richter rather than characteristic, then you have to have a model that has big rate changes in it. So you need to have some process, aftershock triggering or maybe something else, something longer term, that leads to changes in rate, and that's one of the reasons I showed this plot, because I think it's very suggestive of changes in rate. Of course you can argue that maybe these magnitudes are off, and that can look like a rate change, but it looks more to me like there are just different periods of time, because none of these catalogs overlap in time, except for these two, where purple is from another catalog. It's very suggestive that instead of having a scaling break, with things not being linear when plotted this way, the a-values are actually different in different periods. And to me that makes a lot of sense, because all of California has been really quiet in terms of large earthquakes; we all know there's been a hiatus on the major faults lasting a century. So the other end-member hypothesis is that it's not a break in scaling, but that right now the San Andreas is quiet at all magnitudes, and you would not get the UCERF3 answer if you assumed that; you would get just a sort of vanilla ETAS answer. That's another reason we had models that did both: we ran regular ETAS for the Bombay Beach swarm, and we ran UCERF3-ETAS, which had this added ingredient of a characteristic distribution on the San Andreas, but we don't know which is right. That's right, thank you. 
We'll table further discussion till the end and move on to Laura Wallace. Laura, are you there? Yes, I am. Great, let me share my screen. Can you hear me? Yes, we can. Great, okay. I'm at home because it's early morning, and hopefully my dog doesn't try to join in; he might be better at answering some questions than I am. Okay, so I'm going to talk a bit about the pretty spectacular and widespread triggering of slow slip events we observed following the November 2016 Kaikōura earthquake in the New Zealand region, a little bit about the implications of this for earthquake forecasting, and some of the work that we did to try to address how those events influenced probabilities of earthquakes going forward in the central New Zealand region. I particularly want to acknowledge Matt Gerstenberger, who spearheaded a lot of the earthquake forecasting work; I am not a statistical seismologist, I'm a geodesist. Okay, so just to introduce you, in case you're not familiar with New Zealand's tectonic setting: the Kaikōura earthquake was a magnitude 7.8 earthquake that happened in 2016, and it ruptured a really complex series of faults in the northeastern part of the South Island, in the Marlborough region, which occupies a very complex tectonic transition from strike-slip and collision along the Alpine Fault in the central South Island to subduction at the Hikurangi Trough beneath the North Island and northern South Island. I'm going to be mostly talking in this talk about the slow slip events that were triggered on the Hikurangi subduction zone following this earthquake. You might have seen this before, but the Kaikōura earthquake was a pretty spectacular, kind of mind-blowing event. It was highly complex and ruptured at least a dozen different faults, possibly more, arguably the most complex earthquake that's been observed in the modern period. This is just showing 
one of the models for the earthquake; this is from Ian Hamling's work. The lines are showing the crustal faults that are thought to have ruptured, and then there's also some possibility that the subduction interface beneath Marlborough also slipped in the earthquake; the extent to which that happened is a major topic of debate. The earthquake initiated in the south, on a fairly low slip rate crustal fault, propagated to the northeast over about 150 kilometers, and finished up its journey on the Needles Fault offshore of the northeastern South Island, stopping about 30 kilometers south of New Zealand's capital city of Wellington. So we were right in the firing line of this, but very lucky that it wrapped up before it reached Wellington, although there was a fair bit of damage in Wellington from the earthquake. In addition to the earthquake itself being incredibly surprising, what we observed following the earthquake was almost as surprising. In the weeks and months following the earthquake, slow slip was triggered over much of the Hikurangi subduction zone's slow slip regions, including along the east coast. I don't know if you can see my arrow here, but the east coast slow slip region, off the eastern part of the North Island, was triggered over essentially the entire length of that slow slip source area, for about two to three weeks after the earthquake. The Kapiti slow slip region was also kicked off, that's the one that's just northwest of Wellington, and that proceeded for a little over a year. Kapiti events tend to last for about a year, and the east coast slow slip events tend to only last for a few weeks, so they followed their previous behavior. And then we also observed a large amount of afterslip on the southern end of the subduction zone beneath the Marlborough region, just downdip of and in the area that some people think ruptured in the 
earthquake; this is about half a meter of afterslip. The east coast slow slip was about equivalent to a magnitude 7.1 in moment release, as was Kapiti, and for the afterslip we're looking at about a magnitude 7.4 equivalent moment. This was of great concern to us because the right-hand figure shows estimates of the slip deficit rate from campaign GPS measurements acquired over the last 15 to 20 years. The red areas are the areas of high slip deficit rate, where we think that the subduction interface is locked and building up stress to be released in future earthquakes, and you can see the Kapiti slow slip region, the Marlborough region, and the east coast slow slip areas really encircle that locked zone. So this raised some major concern about the potential for a megathrust earthquake on the locked zone in the years following the Kaikōura earthquake. One of the other things that raised some pretty big concern is that as the east coast slow slip event was unfolding, during the two to three weeks after the earthquake, we had quite a lot of seismicity excited in this shallow subduction zone region, including a magnitude 6.1 thrust event. In that lower left panel are some time-dependent inversions of the progression of the east coast slow slip event, and you can see the focal mechanism there; that was the 6.1 thrust event, and a lot of characteristics of that event suggest that it probably was on the plate interface. That also raised a lot of concern: it was the largest earthquake that we'd seen accompanying a slow slip event in New Zealand. We often see magnitude 3s, 4s, and 5s excited, but never before something as large as a 6. 
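As a side note on those equivalent magnitudes: geodetic moment release from slow slip is usually quoted as a moment magnitude via the standard Hanks-Kanamori relation, which is all the conversion below does:

```python
import math

def moment_magnitude(m0):
    """Hanks-Kanamori moment magnitude for a seismic moment m0 in N*m."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

def moment_from_magnitude(mw):
    """Inverse relation: seismic moment in N*m for a given Mw."""
    return 10.0 ** (1.5 * mw + 9.1)

# Because the scale is logarithmic, the Mw 7.4-equivalent afterslip carries
# roughly 2.8x the moment of either Mw 7.1-equivalent slow slip episode
ratio = moment_from_magnitude(7.4) / moment_from_magnitude(7.1)
```
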
Okay, so what did this mean for future seismicity? This was a major question we were grappling with, and I'll just be frank, it was an incredibly stressful time trying to figure out what it meant and what sorts of things we should be communicating to the government and the public. I want to acknowledge, I know that Roland and Emily Brodsky are in the room, and they were very generous to talk to us and give us some advice about what this might mean, as were a number of other people in the international community. We were really concerned: this was the first time we'd ever seen simultaneous rupture of most of New Zealand's slow slip regions, and these do sort of wrap around and surround the locked zone of the subduction interface. A portion of the subduction interface was being loaded at a much higher rate than we'd ever seen before, both from the Kaikōura earthquake, which probably increased the shear stress on the locked zone more than the slow slip events did, and from the slow slip itself. I've got some estimates of shear stress change on the interface in these figures on the right, from the Kaikōura earthquake and also from the different slow slip and afterslip sources. The upshot is that locally, particularly in the locked zone near Wellington, there were places where we estimated shear stress increases from all of these events of more than half a megapascal, which is pretty significant. One of the other big problems we were running into is that, unlike for earthquake swarms, there are really no existing models available to convert our slow slip event observations into future rates of earthquakes. So this was a new kind of issue to have to grapple with, and we were really flying by the seat of our pants in a lot of ways, trying to figure out what was going on, and what it all meant. We were communicating early on, within a week or a couple of weeks after the earthquake when we realized this was happening, with the government, and 
their response was really to demand some immediate numbers on how the slow slip events changed our earthquake forecasts. GeoNet and GNS do operational earthquake forecasting, largely implemented by Matt Gerstenberger, particularly when we have large events like this, and the government wanted to know how this influenced the earthquake forecasts we'd updated following the Kaikōura earthquake. This was a really important thing to grapple with because they were trying to make some decisions about fast-tracking, for example, some major roading and infrastructure works, particularly to try to increase the resilience of the Wellington region in the event of a major earthquake. If any of you have been to Wellington, one of the big issues is that if we have a major earthquake, Wellington could pretty much be cut off from the rest of New Zealand by road, so they were trying to fast-track some things to prevent that from happening. They were also thinking about response planning for Wellington, and messaging about public awareness, particularly around being prepared for earthquakes and knowing what to do in the event of a potential tsunami with regard to evacuation and so forth. So they wanted some immediate numbers, and this put us in a pretty difficult situation. A couple of weeks after the Kaikōura earthquake, Matt Gerstenberger rapidly convened an expert elicitation panel of a number of us in New Zealand to discuss the probability of a magnitude 7.8 or larger in the next year, really focused on the central New Zealand region: the northern South Island and southern North Island. We took a variety of different things into account, including some of the statistical earthquake forecasts, like the STEP and ETAS models that Morgan talked about, and also EEPAS, which allows you to look at more medium-term clustering over 
years to decades. We also looked at the seismicity rate increases that have occurred during past New Zealand slow slip events; on average we were seeing about a doubling of the seismicity rate during past slow slip events, although there was a very wide range. We took into consideration some earthquake simulator results: Russell Robinson, who retired from GNS, ran his earthquake simulator models for the whole New Zealand region, which suggested about a 2 percent probability of a magnitude 7.8 following another magnitude 7.8. The paleoseismic data here is very sparse compared to a place like Cascadia, although the paleoseismic evidence suggests the southern North Island locked zone ruptured about 500 years ago and about 800 years ago, so it's possible that we could be quite late in the seismic cycle for the megathrust here. So we put all of these together, and the expert elicitation panel converged on a five percent probability of a 7.8 or larger in the central New Zealand region over the one-year period from December 2016. To put this into context, this was approximately 10 times greater than the sort of peacetime probabilities from the National Seismic Hazard Model, which is a time-independent hazard model, so about an order of magnitude greater, and almost double the probability due to the Kaikōura earthquake alone from the STEP and ETAS aftershock statistical models. Obviously this was definitely by the seat of our pants, and we needed to do a lot better job with this, so Matt convened another workshop and another expert panel, including a number of international folks; Emily Brodsky was there, so she can probably discuss in person with you some of the things that came out of that as well. This expert elicitation workshop took place in November of 2017, almost exactly one year after 
the Kaikōura earthquake, and there we evaluated a broader range of models that were developed over that year following the earthquake and looked at estimating the probabilities. At the workshop we used a structured expert elicitation, the Cooke method, which some of you might be familiar with, where we actually calibrated the experts. All of us had to take a test, probably the first time in decades that some of us had to sit down and take a test, where we answered a number of questions and assigned uncertainties to those answers. This was really intended to test each expert's ability to assess uncertainty, and also their knowledge and expertise in the area, so that the expert probabilities were actually calibrated by this. Okay, so for the model-building procedure for the second expert elicitation workshop, I don't have time to talk about all of these, but we incorporated a range of different approaches, using both statistical forecasting approaches and more physically based approaches, and a number of other observations; this is a list of some of them here. In terms of the statistical forecasting models, I'm really glad Morgan talked a bit about ETAS and STEP; we also used EEPAS, which allows you to look at more medium-term forecasting, and we used a lot of hybrid approaches, merging a lot of these different models together. We also tried to include the slow slip events in the forecast models as earthquakes. We couldn't do this using the actual moment release in the slow slip events, because that way over-predicted the rate of earthquakes following the slow slip, so we scaled down the slow slip event magnitudes, based on comparing earthquake sequences to past slow slip events, kind of using an Omori law, and estimated an equivalent slow slip magnitude which was quite a bit lower than what was actually released in terms of moment. We also 
used some arguments based on rate-and-state friction to do that as well. I'm going to talk in just a little bit of detail about a very simple physical model, one that Yoshi Kaneko published last year in Geophysical Research Letters, using a simple approach to estimate the probability of large subduction earthquakes following slow slip events. He basically developed an earthquake catalog spanning tens of thousands of earthquake cycles over millions of years, based those earthquakes on distributions of global subduction earthquake stress drops, and coupled that with steady seismic loading. He then applied a stress kick equivalent to what we estimate was imparted to the locked megathrust by the Kaikōura earthquake and the slow slip events that occurred, and he inserted those stress kicks with recurrence intervals comparable to the paleoseismic estimates for past earthquakes on the Kekerengu fault, which was one of the major faults that ruptured in the earthquake. So this is an extremely simple approach, but the upshot of it was that the annual probability of large earthquakes occurring following the slow slip events and the Kaikōura earthquake stress kick really decays to the background level after about two years, and that the ratio of the total stressing rate at any given time to the average stress drop of the earthquakes really controls the probability. So if you have an average stressing rate of about 74 kilopascals per year and an average stress drop of about 2 megapascals, you're looking at about a 3.7 percent probability of a magnitude 7.8 or larger. You can read more about this; that's just a little detail on one of the approaches we used. Laura, we'll need to wrap up soon so there's enough time for questions. Okay, sorry. Okay, so what we did was have the experts assign their own
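The 3.7 percent figure quoted above is reproducible as back-of-envelope arithmetic from the two numbers given in the talk; this is a sketch of that scaling argument only, not of Kaneko's actual simulator.

```python
# Annual probability of a large event taken as roughly the total stressing
# rate divided by the average earthquake stress drop (values from the talk).
stressing_rate = 74e3        # Pa per year (~74 kPa/yr)
avg_stress_drop = 2e6        # Pa (~2 MPa)

annual_probability = stressing_rate / avg_stress_drop
print(f"{annual_probability:.1%}")   # -> 3.7%
```

Intuitively, the ratio says how much of one "earthquake's worth" of stress is reloaded each year, which is why a transient stress kick raises the probability only until its contribution to the stressing budget has decayed away.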
probabilities, rather than weighting the specific models, because this was really to allow for the possibility that the models we considered do not capture what was going on. This is just showing the various expert weights; I think we had about 14 experts here, and these were for two different scenarios, a magnitude 7.8 in a year and a magnitude 7.8 in 10 years. This is just showing a summary of the probabilities that were assigned to the questions that were elicited, and we communicated this directly to the government and also to the public, through the media and the GeoNet web page. Overall, the revised forecast probabilities are almost twice as high as those from the National Seismic Hazard Model, the time-independent model, for the same region. So, just thinking about some conclusions and the path forward: I think the biggest thing to take away from this is that we absolutely need to better understand the influence of slow slip events on earthquake probabilities. This remains an extremely large challenge; I don't think we understand it, and I don't think the community has properly grappled with it yet. We need to develop robust methods to do this, and I think we need a number of different methods, both physically and statistically based; I don't think we can be arrogant and say, "I have exactly the perfect model to do this and it will give us the right answer." We've got to work in a probabilistic type of framework. And we need to be able to do this in more of an operational earthquake forecasting sense, so that it can be done whenever there are slow slip events and we aren't stuck making these ad-hoc decisions. Then I just want to wrap up with this: I don't know about other places, but end users in New Zealand are becoming much more sophisticated in their understanding of what they can do with these
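For readers unfamiliar with how elicited expert probabilities get combined, a performance-weighted linear opinion pool, the kind of aggregation a Cooke-style calibration can feed into, looks roughly like this. Both the probabilities and the weights below are invented for illustration; the real weights came from the calibration test described earlier.

```python
def pooled_probability(probs, weights):
    """Weighted linear opinion pool; weights need not be pre-normalized."""
    total = sum(weights)
    return sum(p * w for p, w in zip(probs, weights)) / total

# Hypothetical one-year P(M >= 7.8) from four experts, with calibration
# weights from seed questions (all numbers made up for illustration).
expert_probs = [0.02, 0.05, 0.08, 0.04]
calibration_weights = [0.1, 0.4, 0.2, 0.3]

print(pooled_probability(expert_probs, calibration_weights))
```

A linear pool keeps the aggregate inside the range of the individual answers, which is one reason elicitation panels prefer it to publishing the full spread of expert opinions.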
earthquake forecasts; they are demanding more and more of this and relying more and more on these forecasts. So I think that really is incredibly important: we need to be able to provide this information quickly, and in a much more robust manner than I feel we were able to do following the Kaikōura sequence. Thanks. Thank you. So we'll start off with questions for Laura specifically, and then we'll open it up to questions for both speakers. So, Laura, I was wondering if you could qualify, explain again, the physics-based part of the estimated probability increase. If I understood correctly, you said that based on a fully statistical approach with ETAS and STEP you would have estimated a factor-of-two increase in probability due to the earthquake and the slow slip events, whereas, at least for the first year, you felt it was 10 times as large, and then after the assessment of all the experts you came up with a different number. And so I'm trying to put this in context with the argument that has been going on about, say, triggering along the North Anatolian fault, where even though you have this succession of events, you could show that statistically perhaps the increase wasn't as much. You seem to be using a sort of change-in-stressing-rate argument, and in that context, relative to the regular stressing, how much extra stressing was there due to the earthquake itself and then the slow slip? I think you said it and I just missed it. Yes, so one thing that I think will answer a lot of your question, and that maybe wasn't clear when I gave the talk: the second elicitation panel that was convened was actually eliciting probabilities from November 2017, which was over a year after the earthquake, to November 2018, whereas the first elicitation panel was for the immediate period, that year following the earthquake. So we're looking at two different time periods, so
what you were mentioning about the probabilities going down: that was because we were eliciting for a different time period, one that was longer after the event, longer after both the Kaikōura earthquake and after the slow slip events had largely finished. And then the other thing, on the physics side: we weren't relying completely on the physics-based models. We did have real input from physics-based models in some of the different approaches we were considering, but I think the experts, in the probabilities they gave, were relying largely on the statistical models; there were really only two approaches I would consider to be in that physics-based realm. And, you had a really long question, what else did you ask me? Okay, you need to shorten your questions, Thorsten. There's all this stuff happening, right? It seems like, hey, a ten-times-higher probability that's somehow decaying seems like a good guess, right, to be sort of on the safe side. What I'm asking, really, is: I guess the physics-based models are not helping us much right now. Will they ever, or where do you see an opportunity here to bring it back to some understanding of the physics, or should we just stick with the statistics? That's why I was asking about these stress changes, and I think the Kaneko paper you mentioned did this somehow, but I just missed it. What are some of the numbers you get out of that? This would be a simple forward mechanical model of triggering. Yeah, so I did show a slide earlier on with the stressing rate changes. It's really hard to know how to deal with that, because in some individual local spots, particularly in the Wellington region, we were looking at shear stress increases on the order of about half a megapascal. But if you average the
stress changes over different parts of the locked zone, say the down-dip part of the locked zone or the entire locked zone, you're looking at something anywhere from 30 to 70 kilopascals. And this is of course based on one or two models; ideally you'd want to incorporate a suite of different geodetic models for the coseismic slip and the slow slip. So I don't think we can rely on just one or the other of statistics-based models or physics-based models. The statistics-based models are more mature, and the physics-based models need to catch up; we need to be considering both of these, because this is really behavior that we haven't seen before. And it's hard to know how to incorporate the slow slip events into statistics-based models: if you just put the slow slip magnitude in, they don't fit the aftershock decay like you would expect. So you see those slow slip events mostly as markers of a change on the plate interface, rather than as individual contributors to the stress change, is that correct? No, we were looking at them also as individual contributors to the stress change. And out of that, say, 0.5 megapascal at the max, how much is due to the slow slip events? Oh, I would say probably 40 percent of it. Okay, so it's about half. Yeah. So now I just want to open things up so we can keep asking questions of both speakers and also more broadly. Right, another question for Laura. Laura, you mentioned that the slow slip events produced much smaller seismicity increases than earthquakes do. Do you think that's because of their low stress drop, the lack of dynamic triggering, or other factors? Yeah, I think those are probably the two main factors: we don't have the dynamic part, and the stress drop is lower, with the slip occurring more broadly distributed over a region. So I
think those are a lot of the kinds of things we need to keep in mind when developing a more physically based approach to addressing this. Yeah, question. So one thing that both of you touched on that I was interested in, and that we haven't heard a lot about today, is the interaction with the public. I was wondering, based on both of these experiences, whether you could talk about what you might have learned in terms of communicating what's going on, and, if you had a similar situation in the future, what you might or might not do differently, or what worked very well. Okay, I'll go first. Is this working? You can just use this. Yeah, okay. There are a lot of things I would do differently. In the CEPEC deliberations we ended up deciding on a range of probabilities, because we had this range of models and we honestly just couldn't agree on which was right, and I think that was very confusing to the public. Putting out a range of probabilities, as opposed to, say, a range of numbers of events, means giving a range of something that is already uncertain, which is what a probability is. Was it epistemic or aleatory uncertainty? You're not going to say that to the public. So it was really weird. I think on our own we should have collapsed that among us as experts, maybe with a logic-tree approach or voting or something, and just given one probability. As it happened, since we gave this range, the media just said "one percent"; they took the upper number, which is nice and round and, you know, more exciting, and they went with that, and that was pretty much what was disseminated all over the place. What was the range? 0.03 percent to 1 percent. And it was written badly; you should also never present numbers less than one percent as a percent to the public, either. I know, our social scientists are like, "what are you doing?" Yeah, so there's definitely a lot in the communication, and of course in having a system in place, which we're trying to do now
at the USGS. We now actually have, not for swarms but for regular main shocks, an operationalized system: within half an hour of a magnitude 5 main shock in the U.S. we're going to have something online, and we could already have something online. That's because it's really important to get it out quickly: the probabilities decay fast, and a lot of your probability gain is right after the event. And with Sarah, yeah, Sarah McBride is our social scientist; she's working with the whole team, helping us develop messaging for earthquake early warning and for operational earthquake forecasting, making sure we get it right the next time, so that we're not just putting all of our effort into the science and then having it fall flat, and it's actually helping people make decisions. And maybe Laura wants to answer too. Just a quick question, and this would apply in New Zealand as well: does the government, or do agencies, have thresholds at which certain actions are put into place, so that the number you decide on, one percent versus, say, two percent, hits key points where saying it's one percent triggers something that it wouldn't if it were 0.1 percent? I don't believe so. I know in the past with Parkfield they had thought about this beforehand; they had different levels of alerts, and things actually would happen. But no, right now the current situation is: we get together, come up with a number, we give it to Cal OES, and we're off the hook. I can just comment on that: that really is where we need to go. We're in an operational phase now with aftershock forecasting; we do have a nationwide product, but we're in the exploratory phase in terms of the communication and the use of that. Where we really want to go, though, is to have those operational
conversations where there is mutual understanding of what decisions need to be made, what science can offer, and how those can be tied together, so that there is a structure of decisions that are then driven by incremental science supply and operational action. Sorry, Laura, I'll let you respond as well. Oh no, I would just make the same points: communicating probabilities and uncertainties and such is, I think, the biggest issue in communicating to the public, and they don't have an intuitive understanding of this at all. A lot of social scientists here in New Zealand have developed and used phrases like "unlikely" for, you know, one to ten percent or something like that, so using more understandable phrases, rather than the actual numbers, is another way to go, though I think you show the numbers as well. We also had Sarah McBride in New Zealand when this was going on, since Sarah was mentioned, and she was very helpful with a lot of the communication around this. And with regard to the government having certain thresholds, I don't think the New Zealand government has that either; I think they're really still learning how to deal with some of these things and what the appropriate responses are. We do have quite good communication, I guess because New Zealand is a small country, with the people at the higher levels in the government and so forth, so I guess we're fortunate here in that way, but it's a big learning curve. And part of the issue, as we all know, is that other types of threat levels have just been turned into colors, and it will be frustrating if we have to take numbers and turn them into some other digestible form; it's interesting to see where this goes. Also, it's very unlikely we're ever going to get high probabilities with these statistical models, so the probabilities are always going to be low to very low; one percent might be as high as it gets. So I had
a comment sort of along these lines. You showed the figure of the decay in the risk after the Bombay Beach event, and one of the comments you were making was just how important it is to do this quickly; the example of one action was evacuating City Hall, and by then the risk was really back near zero. But I think there's a broader impact here, right? Even though by the time they decided to do this the risk was actually really low, so in that sense we seismologists missed the opportunity to have them act more quickly, there was obviously a learning experience there for everybody involved: not just the people not going into the building, but all of the people hearing about this, in terms of people actually thinking, "you know, maybe the building I'm in might collapse in the next earthquake." So I think there's that broader impact piece as well. We should bear in mind, of course, that we focus on the science, and that's what you showed with that decay, but there's then this other piece that we have to think about: how we can take full advantage of the educational opportunities, which have a slightly longer time scale. Your thoughts? I just wanted to comment on that because your point was that we were too late, but I think a lot of people probably learned a lot from that experience. Yeah, I agree, and also, even when the probabilities are very low, there are low-cost actions people can take, and just reminding them to get those things done, which they probably should have been doing anyway, can be good. Yeah. Laura, in the situation you were describing, there is prior information that in the past a large earthquake, not necessarily in Kaikōura but in the general area, has been followed by a devastating earthquake in Wellington. So you
have that kind of memory among the folks there. Can you comment on how that influenced the decision making and the way you communicated to the folks in Wellington in particular, and to politicians? Yeah, which earthquake sequences are you referring to, or do you mean in terms of the paleoseismic record? Well, I'm quoting Tim Stern, who was telling me, and I can't tell you the exact years, but I think it was in the late 1930s: the Wellington earthquake in Lower Hutt, the one that damaged Lower Hutt. Wasn't that preceded by an earthquake in the South Island? Hmm, I'm not aware of that, but there is definitely, in the historical record and also the paleoseismic record, a lot of evidence for clustering in time. I think we are in the middle of a cluster right now, and that certainly influenced our level of concern and also the government's level of concern. In terms of the paleoseismic record, I mentioned that there's good evidence for rupture of that locked part of the megathrust about 500 years ago and 800 years ago, so we may be late in the seismic cycle there. So I think the fact that we're probably in a period of clustering, similar to the 1930s, when for example the Napier earthquake and some other earthquakes in the '40s happened, and also to the late 1800s, is at the forefront of the government's mind, and it's also behind a lot of the fast-tracking of some of the infrastructure programs they're doing right now. I hope that answers your question; it's hard to do this remotely. That's great. I think we need to cut it off here for panel three, but I want to just thank Morgan and Laura one more time.