The reason we're starting at an early hour is the various speakers in various places: we were trying to get speakers from locations around the world, and Deb did a fantastic job of coming up with a scheme that allows us to do it. That's the reason for the relatively early start, for one of them anyway. Because of the time constraint, and because we're a little behind, I'm not going to go around and have everybody introduce themselves. But we have quite a few folks visiting, so I just wanted to say a couple of words about what this committee is. This is the Committee on Seismology and Geodynamics, a committee within the National Academies structure. It is sponsored by several federal agencies: the USGS, NSF, NASA, and DOE. We have two meetings a year, and the role of the committee is really to highlight the ongoing challenges and opportunities within our discipline and to try to bring out some of those issues. We typically have a one-day workshop like this as part of that meeting. It's very strange speaking from the middle of the room here. Anyway, today's workshop is obviously on tectonic precursors. So again, I'm not going to do introductions because of the time constraint, but thank you to all of our speakers, who have come various distances, and welcome to those of you who have come to listen. It will be a free discussion, so please don't hesitate when we have the questions and the discussion pieces; everybody is equally welcome to ask questions and participate in the discussion. Okay, so to get us started, we invited Emily Brodsky from UC Santa Cruz to give us an overview talk on this topic. So, over to Emily.

Okay. Oh, thank you. My cup... no, it's in the room, but it's in the cup. I wonder if I'm going to get my cup.
It's for the room. I see. Yeah. Oh, while she's mic'ing up... yeah, that is a very critical piece of information. I'm sorry. Okay, well, so I've been given this title (I don't think I came up with it myself) of opportunities and challenges in studying precursory phenomena. I added "prior to earthquakes and eruptions" just to be a little clearer. And it's a kind of strange title. It's a kind of strange workshop, right? Oh, okay, it's Matt's fault. Why am I... oh, I have to use the clicker. There we go. Okay. And I think it's a particularly strange title because I think you all know the answer to this question, right? Everybody knows. Can we predict earthquakes? Come on, guys, all together. Generally speaking, with today's state-of-the-art technology, the answer is no. Thank you; everybody's awake even without their coffee. And nonetheless, it is worth talking about things that happen before earthquakes, because in fact some things do happen before earthquakes. We've known for a long time that certain phenomena do happen before some earthquakes, some of the time, most notably foreshocks. And foreshocks are not a particularly rare occurrence. I think people who don't study earthquakes for a living don't really appreciate how common foreshocks are. Depending on how you do the statistics (and Morgan will probably clean this up later today), something like 20 to 50 percent of mainshocks have observable foreshocks. The converse, though, is not true: just because you see an earthquake does not mean another big earthquake is coming. A foreshock is generally identified in retrospect. You've seen a big earthquake, and you go look at whether some smaller earthquakes happened beforehand. And the statistic there is something like 20 to 50 percent of earthquakes have some smaller earthquake beforehand.
But if you take an earthquake and ask what's the likelihood that it is a foreshock, that some bigger earthquake is coming later, the statistics work out to something like five percent. And so this makes foreshocks rather challenging to use in a predictive sense: you would have to accept an awful lot of false alarms. So this has been the state of knowledge for quite some time. What's changed? Why bother having this workshop now? I hear it's all Matt's fault, but I think it was actually motivated by some pretty genuine observations, most notably Tohoku. The Tohoku earthquake in 2011, the magnitude 9 earthquake with the giant tsunami, which I think probably everybody's familiar with, really got people's attention for a number of reasons, including its fairly spectacular foreshock sequence, which we will hear much more about later today. Its most vigorous period was a few weeks before the mainshock, and it migrated spatially. And it was the kind of event that, with 20/20 hindsight, you might have thought we should have done something about. So it really raises the question: what makes foreshocks in the first place, physically speaking? There are two dominant ways of thinking about foreshocks. One is that foreshocks are some sort of cascade phenomenon. That is, you have some earthquake, represented by the red circle, which makes a bunch of aftershocks of various sizes. The sizes are drawn at random from a magnitude distribution that is much more likely to be small than big, but has a non-zero probability of being big. And each of those aftershocks has more aftershocks, which have more aftershocks, and so on. So there is a small probability of any one of these aftershocks being bigger than the original mainshock.
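The asymmetry between these two statistics (20 to 50 percent of mainshocks have foreshocks, yet a given earthquake is only rarely a foreshock) is largely a base-rate effect, which a toy calculation makes concrete. All the numbers below are illustrative placeholders, not values from a real catalog:

```python
# Toy base-rate calculation behind the foreshock asymmetry.
# All numbers are illustrative placeholders, not real catalog values.

n_quakes = 100_000        # earthquakes observed in some region and period
frac_mainshocks = 0.005   # assumed fraction that qualify as "big" mainshocks
p_fore_given_main = 0.4   # 20-50% of mainshocks have observable foreshocks

n_mainshocks = n_quakes * frac_mainshocks
n_foreshocks = n_mainshocks * p_fore_given_main   # identified in retrospect

# Forward-looking question: given a random earthquake, how likely is it
# to be a foreshock of something bigger?
p_is_foreshock = n_foreshocks / n_quakes

print(f"retrospective: P(foreshock | mainshock) = {p_fore_given_main:.0%}")
print(f"prospective:   P(earthquake is a foreshock) = {p_is_foreshock:.2%}")
```

With real catalog statistics the prospective number works out to roughly the five percent quoted in the talk; the point of the sketch is only that the retrospective and prospective probabilities differ because small earthquakes vastly outnumber mainshocks.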
So this sort of trigger-and-cascade picture would suggest that the foreshocks are in some sense actually causal, and that there's nothing physically different about foreshocks compared to any other earthquakes; it's just a trigger and cascade. That's one way of thinking about foreshocks, and it's one physical possibility. And I would say that prior to Tohoku, it was probably dominant in the field. Tohoku really made people start thinking about foreshocks a little differently and resurrecting some older ideas about foreshocks, because in addition to the actual foreshock sequence, there was some evidence of a slow slip event that went with it. And so here's the other end member of how people think about foreshocks: maybe foreshocks are being triggered by some slow, gradual creep on the fault that then triggers these earthquakes. Ito et al. had some seafloor instrumentation out temporarily, in a kind of campaign-style mode. It's a pretty tenuous data set, and I think everybody would agree with that, including Yoshi Ito. But it can be interpreted to show that there was a slow slip event in the foreshock area, and that would be consistent with some sort of migratory foreshock behavior that ultimately triggered the mainshock. The reason, by the way, I call this data tenuous is that although the instruments were very close to where the eventual mainshock was, because they were temporary there's not much of a baseline, and so it's kind of hard to know whether or not you have a genuine anomaly. We'll come back to that thought. All right, so that was Tohoku, and that was pretty interesting: the slow slip and the foreshocks actually migrated in space, which is shown here as latitude as a function of time, until it hit the mainshock. And then what happened three years later was something else: the Iquique earthquake in northern Chile.
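The trigger-and-cascade end member described above can be sketched as a small branching simulation. This is a minimal sketch with made-up parameters and magnitude-independent productivity (real ETAS-style models make productivity grow with parent magnitude), meant only to show how a subcritical cascade still occasionally produces a descendant larger than the seed event:

```python
import math
import random

# Minimal trigger-and-cascade (branching) sketch of the foreshock idea:
# every event spawns aftershocks with Gutenberg-Richter magnitudes, and
# occasionally a descendant outgrows the original event. All parameters
# are illustrative, not fit to any real catalog.
rng = random.Random(42)

B = 1.0        # Gutenberg-Richter b-value
BRANCH = 0.8   # mean direct aftershocks per event (< 1 keeps it subcritical)
M_MIN = 2.0    # smallest magnitude tracked

def gr_magnitude():
    # P(M > m) = 10**(-B * (m - M_MIN)) is exponential with rate B * ln(10)
    return M_MIN + rng.expovariate(B * math.log(10))

def poisson(lam):
    # Knuth's method; fine for small lam
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def cascade_exceeds(m0, max_gen=30):
    """Simulate one cascade seeded by an event of magnitude m0; return
    True if any descendant is bigger (i.e., m0 was 'a foreshock')."""
    generation = [m0]
    for _ in range(max_gen):
        children = []
        for _parent in generation:
            children.extend(gr_magnitude() for _ in range(poisson(BRANCH)))
        if not children:
            return False
        if max(children) > m0:
            return True
        generation = children
    return False

trials = 2000
frac = sum(cascade_exceeds(4.0) for _ in range(trials)) / trials
print(f"fraction of M4.0 cascades producing a larger event: {frac:.1%}")
```

The small but non-zero fraction is the whole point of the cascade hypothesis: nothing about the "foreshocks" is physically special, yet some fraction of sequences end with an event larger than the one that started them.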
And the Iquique earthquake also had a very, very vigorous foreshock sequence, in March of 2014, that also migrated. There's a lot that's scientifically interesting about Iquique, which many people will be talking about today, but I also think there's something scientifically and societally interesting about it: because we had all been looking at Tohoku so much, I think a lot of people were much more aware of this foreshock sequence in real time than we might otherwise have been. Seismologists around the world (us at Santa Cruz, and we were by no means the only ones) were watching this thing in real time and saying, well, here's this big locked zone where we think there might be an earthquake, and look at it migrating; when's it going to happen? And my understanding is that in Chile, people were sufficiently concerned that they actually went out and spoke to the press and said: we don't predict earthquakes, but this is an unusual event, and we should always be prepared for big earthquakes, and now's a pretty good time to be prepared. That sort of message was sent out in real time, prior to the big earthquake on April 1st, 2014, which turned out to be a magnitude 8.1. The Iquique sequence cannot be well fit by that cascade model using the usual earthquake parameters. I don't remember if the 6.7 is aftershock-deficient, but overall the foreshock sequence is more vigorous than you would have predicted from the earthquakes earlier in the sequence. Okay, well, that's interesting. And the Iquique case, of course, is even more interesting than that, because like Tohoku, there is also evidence of a slow slip component to this foreshock sequence. Here the evidence comes from onshore instrumentation rather than seafloor instrumentation.
And the onshore instrumentation has the advantage of being continuous, so the baseline is well defined, but the disadvantage of being further away from the actual source of slip, because it's onshore, not under the water. I think Sergio Ruiz is going to be talking about this data shortly. So that was really quite exciting: another example, now, of some sort of geodetic transient going along with the foreshocks before a really big, magnitude 8-plus earthquake. And it really highlights that in both of these cases, what made it interesting was the geodesy, but in both cases there was something kind of wrong with the geodesy. What we really needed in both cases was the combination: we needed the instruments to be on the seafloor, like they were at Tohoku, but operated continuously, so that you could actually be sure of the anomaly, like they were at Iquique. And so there's an instrumental need, and a reason to do it now, given these magnitude 8 earthquakes: to try to capture continuous seafloor geodesy. So how do you go about that? Well, those earthquakes already happened, so that's kind of water under the bridge. It does seem like trying to build out such instrumentation on any one fault segment where you might expect an earthquake would probably be unwise. That would put us back into a Parkfield-type scenario, where we're gambling on one segment for a very long time. I think most investors prefer a portfolio approach. And so if you take a map of the world showing the areas where seismic gaps have been identified on the megathrust, on the subduction zones, gaps that have not ruptured in the last 50 years, you end up with this map of gray areas.
And if you allow yourself 80 (it's a pretty arbitrary number) white dots, which are meant to represent seafloor instrumentation, you could actually do a reasonable job of covering all of them. So it's not a totally stupid idea to think about a portfolio of fault segments that are likely to go. If you did this, the statistics are such that you are very likely to get magnitude 8 earthquakes, more than one, within a 10-to-20-year interval, and actually answer the question of whether the sorts of precursors we saw for Iquique and Tohoku are generally seen. All right, so the strategy I'm suggesting is obviously long-term, continuous instrumentation before, during, and after earthquakes. And there's more than one reason to do such instrumentation. Another reason has to do with probably the other really exciting seismological discovery of the 21st century, in my opinion, and that is episodic tremor and slip. Episodic tremor and slip is these gradual, slow-motion earthquakes on the plate boundary that in some cases happen fairly regularly, quasi-periodically, every 14 months in Cascadia, and that again lean on the geodetic revolution to be detected. The importance of episodic tremor and slip in this story is that the existence of such slow-motion earthquakes tells us we have a much richer suite of behavior over the earthquake cycle than previously thought. And it's hard not to speculate that the slow slip events we're seeing before earthquakes might have something to do with the slow slip events we see in other places in between earthquakes, and that what we really need to be doing is stitching together the entire earthquake cycle over its full bandwidth. All right, clearly I'm excited about that idea, but let's not get carried away. There are certainly migratory earthquake sequences that have not culminated in magnitude 8 earthquakes.
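The portfolio arithmetic above can be sketched with a simple Poisson model. The per-gap event rate below is a made-up illustrative number (roughly one M8+ per 500 years per gap), not a hazard estimate for any real subduction zone:

```python
import math

# Poisson sketch of the portfolio argument: N instrumented seismic gaps,
# each with an assumed small annual probability of hosting an M8+ event.
# The per-gap rate below is illustrative, not a real hazard estimate.
N_SITES = 80
P_ANNUAL = 0.002   # assumed M8+ rate per gap per year (~1 per 500 yr)

def p_at_least(k, years):
    """Probability of capturing at least k M8+ events in `years`."""
    lam = N_SITES * P_ANNUAL * years   # expected number of captures
    p_fewer = sum(math.exp(-lam) * lam**i / math.factorial(i)
                  for i in range(k))
    return 1.0 - p_fewer

for years in (10, 20):
    print(f"{years:2d} yr: P(>=1) = {p_at_least(1, years):.0%}, "
          f"P(>=2) = {p_at_least(2, years):.0%}")
```

Even with this conservative made-up rate, capturing more than one event within a couple of decades comes out as the likely outcome, which is the sense in which a portfolio beats a single Parkfield-style bet on one segment.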
If I put a volcanologist's hat on for a moment, I might call this unrest, which is what volcanologists would call it: you have earthquakes migrating that look not that different from the other examples here. This is a 1997 example that did not immediately end in a magnitude 8 earthquake. So there is a physical question there about what's different, and why some of these sequences culminate in a big earthquake and some don't. It's also worth pointing out that not every earthquake has an observable foreshock. I have purposely used a somewhat historic figure for this point, as difficult to read as it is. This is a figure for the Northridge earthquake; it's a spectrogram, and the x-axis is time. Basically all you need to know is that when it's blacked out, there's lots of seismic energy, so lots of earthquakes. And the point of the figure is that in the time before Northridge there's no black: there are no foreshocks. This figure comes from the last time the National Academies endeavored to discuss earthquake prediction, which was 1996, a report that, as I read it, appears fairly sober. So it's worth realizing that we're probably not going to be able to see foreshocks before every single earthquake, and that that's an important open physical question. Are there different flavors of earthquakes, those that have foreshocks and those that don't, and why? But of course, a lot has happened since 1996, which is why we're bothering to have this discussion again. In 1996, we didn't know about episodic tremor and slip. We hadn't had two magnitude 8 earthquakes with observable precursors. We hadn't had the space geodesy revolution. And what's more, we hadn't had the machine learning revolution.
And what computational methods can do for us today is allow us to dive deeper into seismograms, and we're actually finding a lot more foreshocks than we used to. This is an example from just a couple of weeks ago, from Zach Ross's paper in Science, where he improved the Southern California catalog through a template-matching approach. Here was the previous Southern California catalog for a particular set of events in the Brawley seismic zone, at the southern end of the San Andreas, which had a 5.3 and a 5.4. And here is the improved catalog. What's most notable about the improved catalog is how many more foreshocks you see. He gets there by driving the completeness of the catalog down to magnitude 0.3; in other words, measuring really, really, really little earthquakes. And this is an important point: really little earthquakes are an important part of the story of how to study precursors to really big earthquakes. They are the glue that stitches things together. They are where all our statistics come from, because little earthquakes are so much more abundant, and we need statistics in order to look at rate changes. So instrumentally, observationally, we really need to measure small earthquakes in addition to the geodesy, and we need to really invest in our regional networks to do that. We also need to get offshore and be able to measure small earthquakes offshore. Okay, those are all strategies for studying precursors passively. There's a radically different strategy you could take on studying earthquake precursors, and it's motivated by the fact that I cheated here. Anybody catch my cheat? This is not a normal earthquake sequence. Anybody read that paper? No? Okay. My cheat is that this is actually a sequence thought to be caused by injection at a geothermal plant in the Brawley seismic zone. It's an example of human-induced seismicity.
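The template-matching idea behind catalogs like this can be illustrated in a few lines: slide a known waveform along a continuous record and flag windows whose normalized correlation exceeds a threshold. The waveform, noise level, and threshold below are entirely synthetic; real detection pipelines (such as the one in Ross's paper) stack correlations over many station-channel pairs and set thresholds much more carefully:

```python
import math
import random

# Sketch of waveform template matching: slide a known "template" event
# along a continuous trace and flag windows with high normalized
# correlation. All signals here are synthetic and illustrative.
rng = random.Random(1)

def norm_corr(a, b):
    """Normalized cross-correlation of two equal-length sequences."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = math.sqrt(sum((x - ma) ** 2 for x in a))
    db = math.sqrt(sum((y - mb) ** 2 for y in b))
    return num / (da * db) if da and db else 0.0

# Synthetic template: a decaying wavelet standing in for a known event.
template = [math.exp(-0.1 * i) * math.sin(0.9 * i) for i in range(40)]

# Continuous record: noise, plus a half-amplitude copy of the template
# buried at sample 200.
trace = [rng.gauss(0.0, 0.05) for _ in range(500)]
for i, t in enumerate(template):
    trace[200 + i] += 0.5 * t

detections = [i for i in range(len(trace) - len(template))
              if norm_corr(template, trace[i:i + len(template)]) > 0.8]
print("detections near sample:", detections)
```

The buried event is recovered at (or within a sample or two of) offset 200, which is the basic mechanism by which template matching pushes catalog completeness down to very small magnitudes.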
And so that really does motivate us to think: maybe we could study foreshocks by pretending we're like real physicists who can do controlled experiments. Yes. Yeah. Yeah, not surprising to me, and good to hear; thanks for pointing that out. There's a lot of interesting... I'm going to try not to go off on that tangent, because I want to get to volcanoes. Foreshocks in Southern California are in some sense more abundant than they should be based on the aftershocks. So, okay, an alternative strategy would be to capitalize on the fact that we see these sequences when water is injected, and to do active, controlled experiments. Now, there are reasons to believe that the earthquake sequences humans make might be different from those forced by natural plate motion. Nonetheless, we do see foreshocks; we do see swarm behavior; and you have the advantage of being able to do a controlled experiment where you know the forcing. And there is what I thought was a fairly beautiful paper by Bhattacharya and Viesca, also in Science a couple of weeks ago, where they analyzed data previously collected by Guglielmi and coworkers in an active injection experiment. Somewhat surprisingly, what Bhattacharya and Viesca showed is that the mechanism triggering those foreshocks and the ultimate earthquake sequence is not just the pore fluid getting onto the fault and creating immediate failure by decreasing the effective stress, the way we normally think induced seismicity works, but that the injection actually created an aseismic slip pulse. So there's an extra process in there. And when you do these active experiments, what they give you is the ability to pull apart extra processes that you might not otherwise know about, and that are pretty hard to get at through passive observation. Okay, earthquakes recap. What I've said thus far is: foreshocks exist and are common. We knew that.
There are some open questions: their physical origin, cascade versus slow slip; the distinguishing features of unrest versus foreshocks; and the distinguishing features of mainshocks that are preceded by foreshocks. I don't have a good word for that: mainshocks preceded by foreshocks versus isolated mainshocks. Morgan, do you have a good word for that? You don't think Northridge had a foreshock? Okay, but why is an open question. So those questions are not new, but the observations and the data are new, and we're in a very different place in approaching those questions than we were 10 years ago. We have improved seismic records that are showing foreshocks to be more common. Maybe 75 percent, as Roland pointed out. 70 or 75. Okay. And I want to emphasize that a huge piece of this is the space-based geodetic revolution, which shows that slow slip accompanies foreshocks in some places and also occurs episodically on faults. And we have at least two good examples of magnitude 8-plus earthquakes with both slow slip and foreshocks. That's radical. And so the strategies I'm suggesting for getting further along on earthquake precursors (I don't think I need to argue that it's an important problem; we're all on board with that, right?) are long-term continuous geodesy combined with high-resolution seismic networks, particularly in subduction zones where we've seen these sorts of phenomena before, and/or an active experiment. Here's a different question: can we currently predict eruptions? Mostly, yes. You guys are much more optimistic on this one, right? You're going to give me a yes, Mark? Did I capture it? All right. I think most people, most people in the field at least, think that we do a better job predicting eruptions than we do earthquakes. It's not considered such a crackpotty kind of conversation. Well, is that really true? I find this a totally fascinating study. This is a paper by Winson et al.
in the Journal of Applied Volcanology, which is basically a metadata study that asks: how often do we get it right? It takes the point of view of an eruption that happens, and asks how often we had a timely prediction, or a nearly timely or somewhat early one (those are the green or yellow), versus totally missing it or issuing a too-late prediction. It's a kind of sobering figure. Apparently, even though we all think we can predict eruptions, we're messing it up almost 80 percent of the time. Why is that? Well, I think the disaggregation of this data is what tells the interesting tale. What Winson and co-workers did was break up these alerts by what instrumentation was available at the volcano, and they have actual definitions in the paper of what the different levels of monitoring mean. Not surprisingly, as monitoring gets better ("good" for them is having six seismometers within 20 kilometers of the vent, or six continuous GPS stations, or continuous gas monitoring; that's their definition of good), once you get to the good level, you do much better than when you have no instruments on the ground. That's perhaps not surprising, but it tells us that we don't have good instruments on the ground at an awful lot of volcanoes, and therefore the idea that we are successfully predicting eruptions is somewhat erroneous. Even for the volcanoes with good or research-grade networks, by the way, the success rate is only about 50 percent. So what's up with that? Is it that we should have even better instruments, and then we would do even better? Or are there physical distinctions, just as there might be some physical distinction between earthquakes that do or don't have precursory behavior? Now, the idea that there are different kinds of volcanoes is, I think, much more generally accepted than the idea that there are different kinds of earthquakes.
And so it's clear from this analysis that there is an observational need, if we want to talk seriously about eruption prediction: to have networks that are good, by this definition, or better. But we should probably go further than that. We should probably also ask under what physical conditions volcanoes are predictable. And you get a little bit of a window into what's going on by looking at eruptions that do have, at least in hindsight, some precursory behavior lasting some duration prior to the eruption. We'll call that the run-up time: the duration of the precursory unrest. This is a study I did with Luigi Passarelli, and we defined the run-up time simply as the time over which somebody near the volcano said that something unusual was happening, usually an increase in seismicity. That was the best we could do, because of the lack of generally quantifiable databases of these sorts of measurements; we'll return to that issue. We then looked at this run-up time as a function of the time since the last eruption, and of the composition of the volcano, from basalt through dacite. And it's pretty sloppy, but there are some trends here: the more mafic systems have, in general, shorter precursory times, and they also erupt more frequently. This is a dataset dominated by the open volcanic systems, and those have very short precursors, which are likely to be missed. In fact, a more comprehensive study at the Alaska Volcano Observatory found that they successfully predicted only 9 percent of eruptions at their open systems, whereas predictions in general are much more successful at the closed systems, which are usually correlated with the high-silica systems and which have longer repose times.
So there are physical conditions that are in fact distinct and worth studying and disaggregating, both for understanding the mechanisms of eruption and for understanding when we should be answering "yes, we can predict the eruption" and when we should be answering "no, you're on your own." Okay, what about the other question I keep asking: how often does unrest lead to eruptions? Again, this is a more generally accepted question in volcanology, but it is totally analogous to the earthquake question of how often you get swarms that do not culminate in a mainshock. And the answer is just unsatisfactory. I was digging around for something, and Sarah Ogburn was kind enough to send me her in-prep figure compiling unrest, by various definitions of unrest, and how often it led to an eruption. She has a bunch of different definitions of unrest listed here from various studies; again, it's a metadata study. And one of the things that jumps out at you is that there is no uniform definition of unrest at volcanoes. They're really pretty complicated systems, and there's more than one thing to look at. You're not just looking at foreshocks: there are earthquakes, there's geodesy, there are various kinds of earthquakes, there are gas measurements. There's a lot of different stuff you might choose to look at, and you'll end up with different definitions of unrest. And there is no consensus on which definition of unrest is most likely to yield results. I think a lot of that has to do with a data management problem and a databasing problem. I know that's a fairly unglamorous thing to say, but the only way to create such a figure, and such a quantitative assessment of our predictive capability, right now is to sit down and read a lot of bulletin reports and a lot of prose in papers and try to stitch together a story. What is needed is a much more comprehensive databasing and data management
effort. Fortunately, I'm almost done. So what I would suggest to make progress on eruption prediction is not that different from what I would suggest to make progress on earthquake precursors: we need uniform instrumentation, uniform data management, publicly served data, and good-level networks or better over a suite of volcanoes. And again, a portfolio approach is highly desirable, because not every volcano behaves the same way, and we really need comparative volcanology. Can I say it? Michael's going to get angry at me: I think comparative volcanology is in its infancy, because we haven't had these sorts of resources. We're making progress (I'm citing papers that have appeared in just the last few years) and we're getting somewhere, but only beginning to, because we're only beginning to get uniform data sets. So, to recap for the volcanoes: we have open questions that are analogous to the earthquake open questions. We again have some things that are very new in the volcanology field, these higher-quality, multi-parameter networks, and sufficient examples to do comparative volcanology, so we can begin to build a suite of a hundred volcanic eruptions to compare. It's not the million earthquakes in Zach Ross's database, but it's at least something with which you could start to do statistics. And what we really need is long-term good networks, data management, and dissemination. So that is what I've suggested, with an emphasis on passive monitoring of a portfolio of sites in both cases. And I will stop there. Are there any questions?
Going back to earthquakes: you mentioned a number of observations that were not possible before that could be useful now, but you didn't mention repeating micro-earthquakes.

You know, I should have. This is [inaudible], I guess. Repeating micro-earthquakes we have known about for quite some time; I don't think we appreciated how prevalent they were. If I can make a little bit of an excuse, in my mind they are part of what we get out of really investing in studying smaller earthquakes and investing in regional networks. But yes, they're important. Cindy?

Just on the case you made in the table suggesting that we weren't very good at predicting volcanic eruptions, with such a diverse, heterogeneous catalog of information, not complete in any way: you in a way said that we should go back and re-evaluate, using machine learning, re-extracting a lot of the earlier earthquakes as well, systematically re-analyzing, and also looking at repeating earthquakes, using the better tools we have available to us now, including legacy data, perhaps, in cases.

Ah, that's where you got... no, it isn't. I think legacy data is important for this problem, because you need to get over a whole earthquake cycle, and legacy data is a critical piece of that.

It wasn't specific to legacy data; it was more a comment about the potential for extracting more information by re-analyzing and building a systematic catalog.

Absolutely. And I think the difference in quantity of data available between the earthquake problem, in terms of bytes, and the volcano problem (not the quantity of data that exists, but the quantity of data available) is quite stark when you start asking these questions. I was putting up bar graphs with tens of events for the eruption problem and a million events for the earthquake problem, and I think that's where we should be putting that effort.

Okay. Yeah, so you mentioned monitoring 80 volcanoes, or sorry, a
subduction zone with seafloor geodesy, and volcanoes as well. In terms of the seafloor geodesy aspect, what is required? Pressure gauges on the seafloor? What do you need: just a vertical measurement, or vertical and horizontal? What's required?

Okay, I'm going to defer; Jeff is practically jumping out of his seat there.

I was going to ask the same question: what pressure level?

Okay, I am going to defer the question of what the instrumentation should be to people who do instrumentation. What I can tell you a little bit about is what needs to be recorded in order to actually see what is important, and you guys can go figure out what the right instrumentation to make that happen is. I think we need sub-centimeter resolution, and perhaps later speakers today will know whether sub-centimeter is necessary. It definitely needs to be continuous, because otherwise you have this baseline problem; it can't be campaign mode. So it either needs to be cabled, or you need a data-relay solution like a wave glider or something. And it needs to be under the water. Those are the three big things, and whatever technology you choose that meets those specs is fine with me; I'm just interested in the answer.

That means you need a capable person.

We'll talk about that in the next panel. Let's move to the next panel; I'm going to defer to you guys to give me the answer there. We're also out of time, so that's another reason. Thank you, Emily. Let's thank Emily one more time.
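As a closing footnote on the instrumentation exchange above: the sub-centimeter displacement spec maps onto a pressure-gauge resolution requirement through simple hydrostatics. The constants below are standard seawater values, and the translation is a rough sketch rather than an instrument specification:

```python
# Footnote to the instrumentation exchange: hydrostatics converts the
# sub-centimeter vertical-displacement spec into a pressure-gauge
# resolution requirement. Values are standard seawater constants.
RHO = 1025.0   # seawater density, kg/m^3
G = 9.81       # gravitational acceleration, m/s^2

def pressure_change(dz_m):
    """Pressure change (Pa) for a seafloor uplift of dz_m meters."""
    return RHO * G * dz_m

dp = pressure_change(0.01)  # 1 cm of vertical motion
print(f"1 cm of uplift ~ {dp:.0f} Pa ~ {dp / 100:.1f} hPa")
```

So resolving sub-centimeter vertical motion means resolving pressure changes of order 100 Pa (about a millibar) against tides, ocean loading, and instrument drift, which is why the continuity and baseline requirements in the answer above are the hard part.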