With that, we'll start our workshop. Our first speaker is Tim Palmer. Tim is a professor at the University of Oxford and a visiting fellow at ECMWF, and he has spent many, many years thinking about this problem. So Tim doesn't need an introduction to this group. Tim, when you're ready, you can share the screen. Thanks. Okay, thank you very much, and thank you for the invitation to talk. Let's see. Okay, can you hear me? Yeah, I can. Can you see my screen? Yes, Tim, yeah. Okay, so I'll explain this character on the front in a few minutes, but I guess the theme of my talk is really: how well could we do in this area of seasonal and sub-seasonal prediction? What's our aspiration? Okay, good. Let me just explain this slide. Let me start by noting that we've had a pretty strange year weather-wise. This summer we had the extraordinary temperatures that were reached in British Columbia. Nobody would have expected 49.5 degrees Celsius in British Columbia, but that's what was observed. Then we had the devastating floods in Germany and Belgium and parts of Europe. And shortly after that, I'm sure we all saw the pictures of people being flooded on the subway trains in Henan province in China. I don't know about the US, but certainly in the UK, even amongst media that are quite climate-skeptic, I would say there's been an acceptance that we're seeing something extraordinary here, something that goes way beyond the normal, and a growing acceptance that this could well be a manifestation of climate change, even among people who were previously reluctant to think about that. So then the question arises: well, what do we do about it? There are two issues. One, we have to try to cut emissions to stop these things getting worse.
But then there's a realization that we also have to try to adapt to the new normal, to make society more resilient to changing extremes of climate. This was a report from the Global Commission on Adaptation a couple of years ago, where they highlighted a number of options for climate adaptation. Going down the list, the fourth one is protecting mangroves. This is what's often called a nature-based solution: using nature, in this case, to help reduce the risk of dangerous storm surges by getting the mangroves to damp some of those surges. The second one is the conventional technology and engineering approach: can we build new infrastructure to make society more resilient? But the reason for showing this is the top one: they highlighted strengthening early warning systems as one of the really key issues in making society more resilient. So strengthening early warning systems should be seen these days as part of the more general climate adaptation program. And not only that, the report tried to estimate a benefit-cost ratio in these different areas, and you'll see that strengthening early warning systems has a benefit-to-cost ratio exceeding ten to one. It's the only one of those five that has such a high ratio. So I'm starting with this to make the point that what we're doing in seasonal to sub-seasonal prediction, which we may think of as somewhat detached from the climate problem, is going to play an increasingly crucial role in getting society better prepared for extremes of climate, whether it's droughts or floods or extreme temperatures. I just wanted to put the work that we do in this area into that context. And in fact, there have been a number of really important and exciting developments, literally in the last year or two, in how society is actually making use of our forecasts.
It's always been frustrating, I'm sure, to everybody to see on the news, if you don't experience it directly, that a major event hits an area, whether a tropical cyclone or a continuing drought, and it's only when things have got really bad, or after the event has hit, that the emergency services start acting. What's frustrating to us is: why didn't they act sooner, and why didn't they make use of the forecasts? And this is starting to happen now. A lot of the pioneering work has been done by the Red Cross and Red Crescent societies, in a scheme they call forecast-based financing: using the forecasts on different timescales, from medium range up to seasonal, to provide aid, in the form of financing, to local regions at risk of severe and intense weather. And now the UN Office for the Coordination of Humanitarian Affairs is making this a major new initiative for its disaster preparedness and humanitarian work. So suddenly, I think, our science has gone from being, let's say, of academic interest, something we monitor in workshops, to something that really is center stage in this whole climate resilience and climate adaptation program. And so it raises the question: are we able to deliver the goods, and how well are we doing? I have to say, I think those of us who have studied and indeed worked on seasonal forecasting for many years realize it's still a bit of a mixed bag. I've taken the last two northern winters as examples. We had what I would call remarkable success in winter 2019-20, in the sense that the models around the world pretty much agreed on seasonal forecasts for DJF 2019-20, both in the tropics and in much of the extratropics. And as far as one can tell, this seemed to be linked to a very strong Indian Ocean Dipole event that developed. There wasn't any ENSO event, so in some ways it wasn't the standard paradigm of a strong ENSO where all the models agree on the ENSO response.
There was no ENSO event, but there seemed to be an IOD. And I think to some extent this took us a bit by surprise. So it's a good result, but it wasn't as if we had a paradigm in which the IOD was known to be a very strong and predictable driver of the tropics and extratropics. So we had good success there, although, I would argue, for reasons that I don't think we yet fully understand. Conversely, this winter, I would say things were much worse. There was a La Niña event in reality, but as the models went into the winter, they tended to significantly deepen or intensify this La Niña event. I've put three of the main operational models below, and you can see the one on the left is really very poor indeed, but they all have pretty much the same tendency to over-develop the La Niña, and then the corresponding teleconnections were poor. I would say these were not very good forecasts. And again, this goes slightly contrary to the normal paradigm, which is: well, we may still struggle with the teleconnections, but at least we can get El Niño and La Niña right. This shows that it's not quite as simple as that. So from a theoretical point of view we still have a lot to understand, and from a practical point of view, as I say, it's a bit of a mixed bag: we have some very good examples, but we also have some not-so-good examples. But irrespective of that, it's probably fair to say that for all of the extreme events I mentioned, the extreme heat on the West Coast and the flooding in Europe and in China, the actual magnitudes of these events are way outside the range of what current seasonal forecasts, or indeed sub-seasonal, or for that matter probably medium-range forecasts, can capture. We're talking about events which models really struggle in a big way to predict.
And that's of course a major problem if you want to try to attribute these events to climate change, because the whole attribution philosophy is based on looking at the frequency of these events in runs with current CO2 and with reduced CO2. And if neither ensemble can simulate these events, then all you get is the ratio of two numbers which are both zero, so it becomes an indeterminate calculation. Well, historically, when we've tried to ask the question of how well we could do in principle, the answer has been to go back to what's called the perfect model assumption. We just assume our models don't have any biases or systematic errors, introduce small initial perturbations corresponding to what we think are plausible estimates of uncertainty in the initial conditions, and then look to see how far the ensemble members diverge from each other. But in recent years, I would say this perfect model assumption has itself been called into question. This article in Science magazine about a year ago really cast doubt on this approach, and in particular on the possibility that current models may be significantly underestimating potential predictability. This goes back to a problem that was highlighted by the Met Office some years ago, with the paper by Eade et al. perhaps the best known, focusing on the North Atlantic Oscillation. The result they found, over a number of years in the late 20th century and into the 21st century, was that when they correlated their ensemble mean against observations, they got quite high levels of skill. But when they correlated the ensemble mean against a typical ensemble member, the correlation was actually much lower.
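The pair of correlations just described can be illustrated on synthetic data. The following is a minimal sketch, not the Met Office's actual diagnostic code: we build a hindcast set with a common predictable signal, give the "model" ensemble members much more noise than the "real world", and compare the ensemble mean's correlation with the observations against its mean correlation with individual members. All numbers and the function name are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def rpc_from_correlations(ens, obs):
    """Correlation of the ensemble mean with observations, divided by
    its mean correlation with individual members (each member treated
    as a surrogate truth, as in a perfect-model test)."""
    ens_mean = ens.mean(axis=0)
    r_obs = np.corrcoef(ens_mean, obs)[0, 1]
    r_members = np.mean([np.corrcoef(ens_mean, m)[0, 1] for m in ens])
    return r_obs / r_members

# Synthetic hindcast set: one common signal; the members carry far
# more noise than the observed series does.
n_samples, n_members = 200, 20
signal = rng.standard_normal(n_samples)
obs = signal + 0.5 * rng.standard_normal(n_samples)
ens = signal + 3.0 * rng.standard_normal((n_members, n_samples))

print(round(rpc_from_correlations(ens, obs), 2))
```

In this construction the ratio comes out well above one: the ensemble mean verifies better against the (less noisy) observations than against any single noisy member, which is exactly the signature discussed in the talk.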
So the implication is that if you were to take a perfect model scenario, where you just take one member of the ensemble and treat it as a potential realization of truth, you would actually underestimate, very significantly according to this result, the actual correlation of the ensemble mean against the observations, which was something like three times higher. The ratio of those two numbers is a quantity called the RPC, the ratio of predictable components. Well, we've done a certain amount of work on this in Oxford. One thing I'm not going to talk about, but will just mention in passing, is that we believe this property of the models and the atmosphere has some decadal variability, and that this high value of the RPC, in other words the underestimation of predictability relative to perfect-model potential predictability, can actually vary from one multi-decadal period to another. Antje Weisheimer has done, I think, important work on this, showing that in the mid-century, for example, there wasn't this underestimation, but going back to the early 20th century, there was again. So what's going on here? This is, I would say, quite important if we want to estimate how well we can do in principle: if our perfect model assumptions are misleading us, we need to understand why. The hypothesis that we've been working on in Oxford is that it's related to systematic errors in the nonlinear structure of these models, which would manifest themselves even when you're doing a perfect-model predictability experiment. The idea is that in the real world, we have circulation patterns with a kind of regime structure: they have a certain persistent structure due to nonlinear dynamical effects.
But these are poorly represented in models. Part of these nonlinear structures, and we'll come onto this, are associated with scale interactions between high and low frequencies, and between small scales and large scales. The models underestimate that, and they end up having rather shallow or broader regime structures. So you can think of the real world as rattling around in a rather deep potential well, while the ensemble members are rattling around in a much broader, shallower potential well. As a result, the ensemble members have much too much spread compared to what they would have if they were rattling around in a deeper potential well. Although this work was focused very much on mid-latitude circulation regimes, I think there's a generic issue here which I would just like to point out: this is a kind of deficiency in the nonlinear structures of climate models. And I'm showing, fairly randomly, a paper pointing out that there's also ENSO regime structure. If you run ENSO models or look at ENSO variability over long time periods, you can see regime structures there as well. One simple example is the fact that we've had more of these Modoki El Niños in recent years than we had in the past, and this may be a different type of ENSO regime. Kristian Strommen, who is in my group in Oxford, and I wrote a paper in the QJ trying to analyze this problem from a Markov chain point of view. So in this case we're thinking of the North Atlantic Oscillation in its two phases as two regimes, and in this Markov chain model we model the atmosphere as a kind of random jump between these two regimes, where there's an intrinsic probability from one day to the next of staying in a regime, which could be alpha for NAO-minus or beta for NAO-plus, and then one minus that probability of making a transition.
We would then model different years by different values of alpha and beta, so there'd be inter-annual variability in alpha and beta. And we would model the fact that the climate models are deficient in simulating the persistence of the regimes, or the depth of that potential well, with a parameter that systematically weakens the persistence probabilities alpha and beta. So we have this parameter K, which represents this deficiency in alpha and beta, and with a particular value of K we can almost identically simulate these correlations that were seen in the Met Office seasonal forecast model. So I think it's at least a plausible explanation for this problem. Then the question is: how do you improve the regime structures, these nonlinear structures, in our climate models? Let me show you a really interesting and somewhat counter-intuitive result from the Lorenz 63 model. This is from Josh Dorrington, a student of mine in Oxford, although I should say this result was first published by Frank Kwasniok from Exeter, so this is a replication of what Kwasniok had done previously. If you take the Lorenz 63 time series, it oscillates chaotically and unpredictably between the two regimes. If you add noise to the system, you might think it would destroy the regimes, just smear out the regime structure, and you'd get a kind of mushy mess. But actually, for a range of noise values, it does completely the opposite: it leads to much more persistent regimes, much stronger regime structure. It's very exciting; it's an interesting nonlinear effect of noise, one of these examples where your intuition, if it was based on linear thinking, would be completely wrong. That's your 10-minute warning. I don't know if you could hear.
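The two-regime Markov chain just described is simple enough to sketch in a few lines. This is not the code from the Strommen and Palmer paper; the persistence probabilities and the weakening factor K are purely illustrative values, and the weakening is implemented here as pulling each probability toward 0.5 (a coin flip, i.e. no persistence).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_regime_series(alpha, beta, n_days, rng):
    """Daily two-state Markov chain: state 0 (NAO-) persists with
    probability alpha, state 1 (NAO+) persists with probability beta."""
    state = int(rng.integers(2))
    series = np.empty(n_days, dtype=int)
    for t in range(n_days):
        series[t] = state
        stay = alpha if state == 0 else beta
        if rng.random() > stay:
            state = 1 - state
    return series

def weaken(p, k):
    """Illustrative 'model deficiency': pull a persistence probability
    p toward 0.5 by a factor k in (0, 1]; k = 1 leaves p unchanged."""
    return 0.5 + k * (p - 0.5)

def mean_residence(series):
    """Mean run length between regime transitions (interior runs only)."""
    switches = np.flatnonzero(np.diff(series) != 0)
    runs = np.diff(switches)
    return runs.mean() if runs.size else float(len(series))

# One 'winter' in the real world versus in a model whose regime
# persistence has been weakened by the factor k.
alpha, beta, k = 0.95, 0.9, 0.6
real = simulate_regime_series(alpha, beta, 90, rng)
model = simulate_regime_series(weaken(alpha, k), weaken(beta, k), 90, rng)
print(mean_residence(real), mean_residence(model))
```

Over long simulations the weakened chain spends far less time in each regime, which is the "shallower potential well" behavior; the ensemble then spreads too much relative to the real world.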
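The Lorenz 63 noise experiment mentioned above can also be set up in a few lines. This sketch is not the Kwasniok or Dorrington code: it is a plain Euler-Maruyama integration with additive noise on each component, and the noise amplitudes, step size, and run length are illustrative. Whether persistence actually increases depends on the noise range, so the script simply reports the mean regime residence time (steps spent on one wing of the attractor, measured by the sign of x) for each noise level.

```python
import numpy as np

def lorenz63_noisy(sigma_noise, n_steps=60_000, dt=0.005, seed=0):
    """Euler-Maruyama integration of Lorenz 63 with additive noise on
    each component; sigma_noise = 0 recovers the deterministic model."""
    rng = np.random.default_rng(seed)
    s, r, b = 10.0, 28.0, 8.0 / 3.0
    x, y, z = 1.0, 1.0, 1.0
    xs = np.empty(n_steps)
    for t in range(n_steps):
        dx = s * (y - x)
        dy = x * (r - z) - y
        dz = x * y - b * z
        noise = sigma_noise * np.sqrt(dt) * rng.standard_normal(3)
        x += dx * dt + noise[0]
        y += dy * dt + noise[1]
        z += dz * dt + noise[2]
        xs[t] = x
    return xs

def mean_residence_steps(xs):
    """Mean number of steps spent on one wing (sign of x) before switching."""
    switches = np.flatnonzero(np.diff(np.sign(xs)) != 0)
    runs = np.diff(switches)
    return runs.mean() if runs.size else float(len(xs))

for sig in (0.0, 2.0, 5.0):
    print(sig, mean_residence_steps(lorenz63_noisy(sig)))
```

The two wings of the Lorenz attractor stand in for the two circulation regimes; the counter-intuitive result discussed in the talk is that, for a suitable range of sigma_noise, the residence times get longer rather than shorter.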
Okay. All right, then I need to rush. No, you don't need to rush; you have 10 minutes left. Okay. Including questions? The 10 minutes includes questions, yeah. So maybe five minutes and then five minutes of questions. Okay. All right. So I did some work with Andrew Dawson, who's now at ECMWF, in the so-called Athena project, looking at these regimes. We did find evidence at T511, going from the second to the third points here, that adding stochastic parameterization did actually help regimes. The biggest effect, though, was resolution: that was the biggest thing that improved regime structure. And since that time, there have been a couple of papers by a number of us in Oxford and in Italy and elsewhere actually confirming that regime structures are improved with resolution. The evidence with the stochastic noise is, I would say, much less clear at the moment, and it's not quite clear to me why we're not seeing the big impact we saw in Lorenz 63, but if I've only got five minutes, I don't have time to talk about that. Okay. So what sort of level of resolution should we go to? I think we need to think big here, because once we get to one-kilometer resolution, we can completely get rid of parameterizations of deep convection, orographic gravity waves and ocean eddies. And we're beginning to do this. This is the Max Planck ICON model for the ocean at one kilometer across the North Atlantic, and we're seeing remarkable eddy structures that you just wouldn't see at the typical resolutions we have now. I think I'm going to have to skip this point, but one thing we don't need, however, is high precision, and we've been doing a lot of work on 32- and now even 16-bit precision.
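The reduced-precision point can be illustrated with a toy example (this is not the SPEEDY experiment; the recurrence and all numbers here are illustrative). For stable, damped dynamics, rounding the state to 16-bit floats after every step changes the trajectory only slightly, because the damping keeps rounding errors from accumulating without bound.

```python
import numpy as np

def integrate(dtype, n=500):
    """A damped, periodically forced scalar recurrence, with the state
    rounded to the given floating-point type after every step."""
    x = dtype(0.0)
    out = np.empty(n)
    for t in range(n):
        x = dtype(dtype(0.9) * x + dtype(np.sin(0.1 * t)))
        out[t] = float(x)
    return out

x64 = integrate(np.float64)
x16 = integrate(np.float16)

# Worst-case pointwise difference between the two precisions.
print(np.max(np.abs(x64 - x16)))
```

The error stays small relative to the oscillation amplitude. For chaotic dynamics the two trajectories will of course eventually diverge, so the relevant question, as in the El Niño experiments mentioned above, is whether the statistics of the low-precision run match the 64-bit run, not the trajectories themselves.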
I'm not going to talk about this paper, but it will hopefully appear in the Journal of Climate shortly: El Niño experiments at 16-bit precision with the SPEEDY model, where the results are almost identical to the 64-bit version. And certainly in Europe, the EU is now investing a lot of money in developing a one-kilometer climate model, coupled to societal impact models, through what they call a digital twin of planet Earth. One thing that we're still trying to lobby for is supercomputing that is commensurate with that sort of ambition. This is actually the latest CERN Courier, from the high-energy physics center, which has an opinion piece by myself and Bjorn Stevens making the case for a CERN for climate change, or at least a dedicated exascale computing center. These projects will not succeed without the support of the community; that's why I put this slide up, because it needs your help to take this forward. All right, so I'm going to stop there, and if there are any comments or questions, I'll be happy to take them. We have made great strides in operational seasonal and sub-seasonal prediction in the last 20-plus years, but we're still at the mercy of model inaccuracies, biases, and these problems with the nonlinear structure of regimes that I mentioned. We can't wait another 20 or 25 years to sort this out. We need reliable predictions to help society become more resilient to changes in climate. This is becoming, I think, an urgent problem, and just saying, oh well, eventually it'll all get sorted out, I don't think is good enough. Some modest investment in dedicated exascale computing for climate and seasonal forecasting could really be revolutionary. And in terms of what we spend on space-based observations, or even on CERN, if you like, this is not really a very large amount of money.
But it needs the community's support to make this happen, because if we don't speak with one voice, it won't happen. So I feel passionately that seasonal forecasting is really crucially important for society in the future, but we're not yet at the level of reliability that we need to really drive these anticipatory action projects of the Red Cross and the UN humanitarian aid organizations. We need to step up another gear to take us to the level where we're really providing society with important and valuable information. Thank you. Great, thanks very much, Tim. Are there any questions for Tim? I can start with one while we wait for questions in the chat. When you mentioned the potential well, and the difference in the potential well with the stochastic noise, I was wondering: has anyone done experiments where you increase the resolution of a model, maybe the Lorenz 96 model (it's hard to visualize the regimes of the potential well there), and how does increasing the resolution compare to adding stochastic noise in terms of the behavior in the potential well? Well, okay, I'm not sure about Lorenz 96, but Josh, my student, has done work with the Charney-DeVore model, which shows exactly the same behavior. I should say the Charney-DeVore model is a low-order model which has multiple equilibria, but it has been shown by De Swart and others over the years that you can increase the number of degrees of freedom to make it chaotic; it's a barotropic vorticity equation. And it shows exactly the same behavior: if you add noise, it stabilizes the regimes, which correspond to zonal and blocked flow, in exactly the same way. What Josh has tried to do is to see whether increasing the number of modes in the Charney-DeVore model has the same effect, or an even bigger effect; that would be the increased-resolution type of experiment. I think the answer is that it's not that straightforward.
There aren't any clear-cut results on that yet. But I would just make the point that when we talk about resolution, there are actually two different things going on. One is that by increasing the resolution of a climate model, you're better resolving the topography, and that's got nothing to do with stochastic physics; that's just the lower boundary forcing. The other is the improvement in the transient eddy forcing, the scale interaction, and that is probably the part that stochastic forcing is trying to mimic, as a kind of poor man's substitute, in a lower-resolution ensemble. I think most of the results from the papers I highlighted just now suggest that the topography issue is probably the one dominating the improvement in regime structure. If that's the case, it could potentially explain why we're not seeing quite such an impact with stochastic physics. But to be honest, the jury's out. I feel that the stochastic physics parameterizations are still fairly crude, and it may be that we're just not doing it properly yet; I don't know. So it's an interesting research question, I think, for the future. Great, thanks, Tim. Zain, you have a question in the chat; would you like to unmute and ask? Sure. I think it's pretty related to Anisha's question, so you can answer briefly. But I was wondering about this issue of underpredicting potential predictability: you're saying resolution is one promising path forward, and you sort of just addressed this, but are there other things in model development that we might do, aside from just going to higher resolution, to address these issues of regimes not being as predictable as they should be? Well, I don't know. The Met Office themselves have done quite a lot of work exploring different possibilities.
And within their current model framework, they really didn't identify anything that would improve this RPC diagnostic. I think their latest work, well, maybe it's a year old now, also had some indications that resolution was going to improve things; they looked again at some transient eddy statistics, which suggested that was the case. Again, I don't know. It would be an interesting experiment, if people had the computer time, and I'd love to see this done: run the high-resolution model with low-resolution topography. In other words, keep the topography at your low resolution and just increase the model resolution. Does that improve the regimes, or is it actually the topography? That's still, I think, an open question. And that's why I think maybe stochasticity alone, if topography is important, is not going to do it. But apart from that, no, I don't know. As I say, there was a paper, I think by Adam Scaife, where they went through a lot of different experiments with different changes in parameterizations and boundary forcing and CIs and all that sort of stuff, and none of it really impacted this RPC diagnostic at all. Great, thanks, Tim. There's a question from Shui in the chat; would you be okay replying in the chat for that one? We're two minutes into the next time slot. Oh, okay. Okay. Thanks again, Tim, for a really great talk. And thanks for the questions, Zain and Shui.