So, Antje has a joint appointment between ECMWF and the University of Oxford, where she's a senior research fellow. Antje is interested in the predictability of the Earth system on a range of time scales, from days to weeks, months and longer. She also works with stochastic parameterizations, has worked with multi-model ensembles, and is flitting up and down the temporal and spatial scales, a bit like me. I'm delighted to introduce you and look forward to your talk on decadal-scale variations. Thank you, Judith. It's an absolute pleasure to be at this meeting and it's a pity we don't see each other in person. When I joined ECMWF many years ago, Judith was there, and we overlapped for quite some time; there were several similarities between us, and it's nice to keep in touch every now and then. Same with Anish: Anish was in Oxford in our group years back, and it's lovely to see you both working together. Anyway, I'll start sharing my screen. Can you see that? Yes, perfect. Okay, let me arrange these bars here. So I'll talk a little bit about seasonal forecasts of the 20th century, and this is quite a wide-ranging topic. For this lecture the focus is on the North Atlantic and the extratropical circulation, in particular how confident we are about predictability estimates for the winter, all on seasonal timescales. I would like to acknowledge the contributions over many years from colleagues at ECMWF and in Oxford, especially Chris O'Reilly, Tim Palmer, Dave MacLeod, Damien Decremer and Stephanie Johnson; they all contributed in one way or another to this work and we're still collaborating on many of these things. So when we talk about seasonal forecasts of the extratropics, what we really aim to do is forecast a distribution that sufficiently discriminates the inter-annual signal from the climatological background distribution.
And the schematics below indicate two situations. On the left-hand side is perhaps the ideal: the blue line indicates the climatological distribution, and our forecast distribution is ideally quite narrow, shifted away from climatology, and ideally also centered around the observations. But if we look at our problem in the extratropics, and here in Europe especially, we often face a situation that is quite different: our forecast distribution is not necessarily any narrower than the climatological distribution, the shift away from climatology can be quite subtle, and the observation very often is not at the center of this distribution at all. So it was quite an exciting time in the last decade or so for seasonal forecasting, and you might have heard about the work from colleagues at the Met Office here in the UK, who published a paper in 2014 which made headlines. On the left-hand side, this is from the Times, the daily newspaper, with a headline along the lines of "forecasters have cracked the formula to predict long-range weather", which made several of us a bit surprised and wonder whether we had missed something, since we had been working on this for quite a long time. It all goes back to this paper by Adam Scaife and colleagues from the Met Office, who showed, as the plot on the right indicates, that forecasts of the winter NAO from their GloSea5 seasonal forecasting system had an ensemble-mean correlation skill of 0.62, which is quite high for the extratropics. That made these big headlines: with this forecasting system we can now predict the state of the NAO quite well for the next winter.
But there was lots of discussion about this paper, and I'll just mention, for instance, that in the data here the orange dots are the ensemble members and the black line is the verification for the NAO index, and the applied calibration is a little bit questionable. The reason is that the signal-to-noise ratio is really, really low: we have a lot of variability, noise so to say, in the forecast, and the signal you want to predict, the year-to-year variability of the ensemble mean, is really quite small. Before I go on, I can't help but say that it's not really the case that this was the first time this level of skill was found. At the top I show you a paper by Wolfgang Müller and others, from the time when Wolfgang did his PhD, a long time ago as you can see. He was part of the DEMETER project, a European project across several institutions and countries to look at seasonal forecasts in a multi-model ensemble framework. And if you just look at the underlined values there, the correlation skill for the NAO in winter from the DEMETER multi-model, we had levels that were even higher than the ones the Met Office now reported, nearly 0.7. The reason I think people didn't make a big fuss about it at the time is that we also noticed that this skill depends very crucially on the period you're looking at. As you can see, the high level of skill, 0.7, was for 1987 to 2001, but when DEMETER extended the hindcasts back to the late 1950s, the skill over that longer period disappeared. And that made people at the time quite wary. The table at the bottom shows a more up-to-date analysis of both the DEMETER data and the data from the follow-on project, the ENSEMBLES project.
And we notice a similar thing: if we look at the most recent decades, the skill was quite high, partly significant, but if we use the same model and extend the forecasts to include earlier decades, the skill went down. So just to summarize what I've been trying to say: seasonal forecasts of weather and climate over the Euro-Atlantic region especially are really difficult, and they are difficult for several reasons. One is that the signal-to-noise ratio of the predictability in the extratropics is generally quite low compared to the tropics, for instance. And the teleconnections from the tropical forcings, which are our major source of predictability, are quite weak: if you think of where the tropical Pacific and ENSO are and where Europe is located, these signals have to travel a long way, and there are lots of things that can go wrong and do go wrong. Another big issue is the sample size, and this is something I'm going to talk about more. We're dealing here with intrinsically small sample sizes, and these are mainly limited by the number of observed seasons, which for operational seasonal forecasting centers is usually of the order of 30 years; could be 40, could be 20, but roughly 30. It's not so much the size of the ensemble, because we can create as many members as we want; it's the verifying observations that are the limitation here.
So estimates of seasonal predictability, skill and also reliability suffer from quite large uncertainties, and I'm sure you're all aware of the pitfalls you can run into with statistical metrics, especially the correlation. I just want to show this because it's such a nice example: the very famous example from the 70s known as Anscombe's quartet, from Anscombe's paper, which shows four pairs of x, y variables constructed such that all the y variables have the same mean and the same variance. The sample size here is 11 for each, so roughly the same order as our seasonal hindcast data. And the correlation between x and y is 0.82 for all four sets, yet you can see the distributions of these variables are very different. We have a more normally distributed, well-behaved dataset on the left-hand side; then something that is perhaps more nonlinearly related on the second from left; then a perfect linear relationship except for one outlier in the third example; and then no relationship at all except for one data point. And still, with this small sample size, all of these give us the same high correlation, which is just a warning for everybody, and I'm sure you're all aware of this: we have to be cautious with such simple statistical measures because they can suggest a relationship that is not really supported by the data. So the problem I would like to talk about here is really: can we overcome the sampling problems by using substantially longer historical periods? I'm going to talk about datasets where we explored exactly this, substantially longer historical periods for the seasonal hindcast period.
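[Editorial sketch, not part of the talk: the warning about Anscombe's quartet is easy to reproduce, since the four datasets were published in Anscombe's 1973 paper. The values below are those published numbers; all four pairs share the same y-means and a correlation near 0.82.]

```python
import numpy as np

# Anscombe's quartet: four x-y datasets with (near-)identical summary
# statistics (n=11, same means/variances, r ~ 0.82) but very different shapes.
x123 = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], float)
quartet = [
    (x123, np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])),
    (x123, np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])),
    (x123, np.array([7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73])),
    (np.array([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8], float),
     np.array([6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89])),
]

for i, (x, y) in enumerate(quartet, start=1):
    r = np.corrcoef(x, y)[0, 1]
    print(f"dataset {i}: n={len(x)}, mean_y={y.mean():.2f}, r={r:.3f}")
```

Despite plotting completely differently, every dataset reports the same sample size, mean and correlation, which is exactly the trap described above for ~30-year hindcast statistics.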
So will this solve our problems? A related question is then: how robust are seasonal skill estimates if they are validated on independent past hindcast periods? Specifically, the focus here is: can seasonal forecast models successfully predict the inter-annual variability in earlier decades of the 20th century? The 20th century is the period we use for these longer hindcasts. And based on this, we might wonder what the implications are for future seasonal forecasts: we always look at hindcasts, but in the end, of course, we want to learn for our actual future predictions. I think that understanding the success and also the limitations of predictions of the past with the current generation of forecast models will increase our confidence in future forecasts, the forecasts for today and for next year; this is really the motivation behind all this. So let me go into a little bit of detail about what we did. We basically ran two sets of seasonal hindcasts covering the 20th century and a little bit more. In the first set we used the ECMWF atmospheric model, but didn't couple it to ocean and sea-ice components; we just prescribed SSTs and sea ice, to see whether it produces anything reasonable for the 20th century, especially for its beginning. That was possible thanks to the advent of the long reanalysis ERA-20C, the ECMWF atmospheric reanalysis of the 20th century. And later on, when we saw that this was quite successful and does not completely mess up the system, we went to a fully coupled system, where we ran the atmosphere together with the land, ocean and sea-ice components.
The initialization of these hindcasts was possible because, in the meantime, ERA-20C was complemented by a coupled reanalysis, the first ECMWF coupled ensemble of reanalyses of the 20th century, called CERA-20C. All these simulations were done with a model cycle of the ECMWF model that sits between System 4 and System 5; it was done before System 5 was operational, but it's quite close to System 5. It uses, though, the horizontal and vertical resolution from System 4, so the T255 linear grid, and a one-degree ocean model similar to what System 4 used. We have quite a large ensemble of these hindcasts, with 51 ensemble members, or 25 for some start dates, and we ran them for all four major seasons by initializing on four dates each year, the first of February, May, August and November, so that four-month forecasts cover the main seasons. And we did that for the period from 1901, when the reanalysis starts, up to and including a forecast initialized in 2010. That gives us 110 years of hindcast data, roughly a factor of three larger than standard operational hindcasts. The focus here will be on wintertime, but as I said, we have this for all the seasons. And just to show you very briefly that these forecasts are not complete nonsense for the earlier parts of the century: we see here the global mean temperature in DJF, in red the simulations with prescribed SSTs and sea ice, the atmosphere-only simulations, and in blue the fully coupled ones. I should say, as indicated in this BAMS paper, we compare these coupled and uncoupled simulations and we gave them names: ASF-20C is the atmospheric seasonal forecast of the 20th century, the one with prescribed SSTs and sea ice, and CSF-20C is the coupled seasonal forecast of the 20th century. You'll come across these labels later on as well.
This all looks quite reasonable in terms of the long-term trends and the patterns of variability. There are some discrepancies, but overall the picture is very encouraging. For seasonal forecasting, of course, we are very interested in the tropical Pacific, which gives us our major source of predictability from ENSO. So how do our tropical Pacific simulations look? We look here at the absolute SSTs in the top part of the plot, and at the correlation skill for ENSO forecasts of the Niño-3.4 index for different start dates throughout the year and different simulations. Let me talk you through it. In the top part, the grey line is the observed temperature in that part of the central tropical Pacific; it has a seasonal cycle, as you can see, with the peak around May. In green, yellow, red and blue we see our different start dates, for instance initialized at the beginning of February, and the dots are monthly means for February and the following months. If you look at the legend, the dashed lines are from System 5, the operational ECMWF seasonal forecasting system; the dotted ones are from a low-resolution version of System 5, similar in resolution to System 4; and CSF is our coupled forecasting system with the initialization from the long reanalyses I just described. You see that we're doing okay with our way of initializing the forecasts. I should say, I forgot this on the previous slide: these reanalyses are created such that they only assimilate near-surface data in the atmosphere, surface pressure and marine winds over the ocean. They don't assimilate any satellite products, and they don't assimilate any upper-air observations; it's a bit like the Compo et al. 20th Century Reanalysis in the US.
For the coupled reanalysis it's similar for the atmosphere, and for the ocean it assimilates three-dimensional fields of temperature and salinity as available. If you look at the lower part of this plot, the correlation skill, how good are these forecasts? I hope I can convince you that the forecasts from our CSF system are not drastically different from either the operational system or the low-resolution version of the operational system, which is what we should compare to, because we know that resolution matters. So our setup seems to be doing reasonably well. This is for the overlapping period from 1981 to 2009, and it gives us quite some confidence. We really want to know what the spatial structure of our SST skill looks like, and this is summarized here. In panel A at the top left we see the performance of the low-resolution System 5, the operational forecast but at low resolution, where low resolution means basically the same resolution as our long hindcasts. In panel B we see the operational system at high resolution, and in terms of SSTs the overall picture is very similar; there are some differences here and there, but I'm not going to talk much about those. It's just to give you an idea of what the impact of resolution is. Then in panel C we see the same correlation maps for CSF-20C, our long hindcast, again verified over the same period to be fair in the comparison. You see that they have spots of skill in the SSTs in DJF in roughly the same places. It looks fairly similar, so it gives us lots of confidence that our forecasts globally, not only for the ENSO region, are doing okay. And for comparison, if you compute the skill over the full period of 110 years that we now have, this is the plot at the bottom right, panel D, and there you see quite some differences. You still see good levels of skill for ENSO.
And you do recognize some of the teleconnections in terms of SST skill in the North Pacific, the Atlantic and parts of the Indian Ocean, but they're a bit reduced, and we'll come back to this later in more detail. If we talk about the extratropics, and especially the NAO, then we're mostly looking at the flow in the free troposphere and the jet variability. So here are similar plots as before, but now looking at the geopotential at 500 hPa. What's the skill there? The top plot is the operational skill for the period 1981-2009 from ECMWF, and you see the familiar picture: we get some skill in the extratropics over the North Pacific and over North America, but in the region of Europe and the North Atlantic it is traditionally very difficult to get high levels of skill; it's a notoriously difficult area, as I tried to explain in the beginning. If you look at panels B and C, these are from our long hindcasts, in one case with prescribed SSTs, panel B, and fully coupled in panel C, for the same period. You notice that the levels of skill in the atmosphere, not only in the SSTs but in Z500, are also quite similar to the performance of the operational system. The two plots at the bottom show the Z500 skill computed over the 110 years, and you see that we lose skill; the overall levels are lower. But we do see a similar structure: areas with particularly low predictability over Europe and higher skill in other parts of the extratropics, towards the North Pacific and North America. So this brings me now really to the NAO. I don't think I need to say much about the NAO as a mode of variability in the North Atlantic region. Briefly, the index we use here is based on an EOF approach for Z500 over the Atlantic part of the world.
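[Editorial sketch, not from the talk: the EOF-based index construction mentioned here can be illustrated with synthetic data. The domain, grid and numbers below are placeholders; a real NAO index would use area-weighted Z500 reanalysis over the Atlantic sector.]

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy Z500 anomaly field of shape (years, gridpoints) over a hypothetical
# Atlantic-sector domain.
n_years, n_points = 110, 500
z500_anom = rng.standard_normal((n_years, n_points))
z500_anom -= z500_anom.mean(axis=0)      # remove climatology at each gridpoint

# Leading EOF via SVD of the anomaly matrix; its principal component (PC1)
# serves as the index time series.
u, s, vt = np.linalg.svd(z500_anom, full_matrices=False)
eof1 = vt[0]                              # leading spatial pattern
pc1 = z500_anom @ eof1                    # projection onto EOF1 -> index
pc1 = (pc1 - pc1.mean()) / pc1.std()      # standardised NAO-like index
```

The same projection applied to the forecast ensemble gives the model's index, which is then verified against the observed PC1.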
If we compute the skill with which our forecast model reproduces the NAO index in DJF, we end up, for the last 30 years, with a correlation skill of 0.44. This is a value comparable to the one I mentioned from the Scaife et al. paper that made the headlines; they had 0.62, and our system gives 0.44 here, which is within the range we get from slight variations of the ECMWF system. And then what we see here, the blue curve, is how that skill varies over time as we move the window over which we compute the correlations, indicated by the grey bar at the top, by one year at a time all the way back to the beginning of the century. The way this is plotted is that we have a 30-year window and we plot the correlation for that window at the middle of the window. So the last data point represents the period from 1981 to 2010, and the very first data point on the left-hand side represents the period from 1901 to 1930. The thin grey lines give you some indication of the error bars, the confidence intervals; they are quite large, as you can see. But the interesting behavior we found is that if we go back in time and compute the correlation skill for the NAO, we notice a period around roughly the middle of the century, the 1950s to 1970s, where the skill is quite a bit lower than what we observe for the most recent period, but also, and this was quite surprising for us, lower than for the earlier periods. If the skill goes down in the past, you might think, okay, our initialization is not so good, the model perhaps can't predict it so well because of that. But this argument is really not valid if we go even further back in time, to the 1920s and 1930s, where our observational coverage is arguably worse than in the mid-century decades.
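[Editorial sketch, not from the talk: the moving-window skill curve described here is straightforward to compute. The sketch below follows the stated conventions, a 30-year window with the correlation plotted at the window center; the inputs would be the observed and ensemble-mean NAO series.]

```python
import numpy as np

def running_corr(obs, fc, window=30):
    """Correlation between observations and forecast in a sliding window,
    reported at the window centre; edges stay NaN."""
    n = len(obs)
    out = np.full(n, np.nan)
    half = window // 2
    for start in range(n - window + 1):
        out[start + half] = np.corrcoef(obs[start:start + window],
                                        fc[start:start + window])[0, 1]
    return out
```

For a 110-year series this yields 81 window correlations (1901-1930 through 1981-2010), with the first and last 15 or so years left blank, matching the plotting convention described above.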
But our skill goes up again, and the skill around the 1920s and 1930s, the beginning of the century, is as high as it is for the most recent decades. This was quite a surprise to us. That was for the uncoupled simulations; we did the same thing with the coupled simulations when they became available, the CSF-20C simulations. We started off for the recent decades with a slightly higher level of skill, but given the error bars you see that this is probably within the noise. And then we see something broadly similar: there is a period where the skill is really not significant, basically disappearing altogether, around the 1960s, which of course with our 30-year window covers quite a lot of years on either side. And then again for the earlier years, even the coupled forecasts produce skill levels much closer to what we see for the recent decades. I think I should hurry up a little bit, so I'll go quickly over the next few plots, but here in green you see the NAO index itself, and there are quite some interesting links between the variability of the NAO and the skill of our hindcasts. In this paper, which I should mention, we looked at some of these in a bit more detail; I just want to highlight a few aspects. The ensemble-mean correlation is a good first starting point, but we would like to explore the full probabilistic aspect of the ensemble as well. The plot on the bottom left shows a ROC skill score for different thresholds, and with our large dataset we can do much more than is possible for shorter datasets, so we look here at the percentiles in 5% steps.
On the left-hand side are the very negative, extremely negative NAO indices, and on the right-hand side the extremely positive ones, and we can see that especially for the extremes, at both ends, negative and positive NAO indices, we get significant probabilistic skill in terms of the ROC area. We also notice interesting behavior on the bottom right: if we look at the upper and lower terciles and how these phases of the NAO behave differently over time, the red curve in the bottom-right plot shows the upper tercile, the positive events. We have especially high predictability for those from perhaps the 1970s onwards, and less so in the middle period, whereas for the negative NAO events, indicated by the lower-tercile curve, the blue curve, it's perhaps the opposite, in the sense that we hardly have any skill for the most recent decades but more during the decades when the upper tercile hasn't any. And at the beginning of the century it's neither one nor the other, it's sort of a mixture; the two covary quite well in terms of the level of skill. So it's quite a complicated picture there, to be honest.
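[Editorial sketch, not from the talk: the ROC area for a percentile-threshold event can be computed as the probability that an event year receives a higher forecast probability than a non-event year. The synthetic ensemble, event definition and parameter values below are illustrative; the talk's analysis used the NAO hindcast ensemble at 5% threshold steps.]

```python
import numpy as np

def roc_area(probs, events):
    """ROC area: probability that a randomly chosen event year has a higher
    forecast probability than a randomly chosen non-event year (ties = 0.5)."""
    pos, neg = probs[events], probs[~events]
    wins = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (wins + 0.5 * ties) / (pos.size * neg.size)

rng = np.random.default_rng(0)
n_years, n_members = 110, 51
signal = rng.standard_normal(n_years)                    # shared predictable signal
obs = signal + 0.5 * rng.standard_normal(n_years)
ens = signal + 0.5 * rng.standard_normal((n_members, n_years))

# Event: observed index below its 20th percentile.
thresh = np.percentile(obs, 20)
events = obs < thresh
probs = (ens < thresh).mean(axis=0)   # forecast probability = member fraction

auc = roc_area(probs, events)         # ROC area > 0.5 indicates skill
```

A ROC skill score in the usual sense is then 2*auc - 1, zero for a no-skill forecast.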
I'll jump over this here and just mention briefly that after we did this work I found this paper involving Saunders, which is quite old by today's standards, where they used a purely statistical approach to forecast the winter NAO over basically the same period, and they also used a running 30-year window to compute the skill, so quite similar in a way. They produced these plots here for the correlation skill, and the interesting thing is that even though it's a completely non-dynamical approach, a statistical relationship between the subpolar temperatures in summer and the winter NAO, they find similarly interesting behavior: the skill is reasonably high in the most recent decades, then they see a drop, the dash-dotted line, in the middle of the century, and then the statistical relationship also recovers for the earlier parts of the century. That is an interesting indication that perhaps the behavior we see in our NAO forecasts is not all just a model thing; there might be more to it. Okay. Then we tried to look at the individual years which contributed most to the correlation skill that we found in our hindcasts. This is an approach where we try to quantify this: if you look at the way the correlation coefficient is computed, it is basically a sum over contributions from individual years, the anomalies of the observations multiplied by the anomalies of the model ensemble mean. The black curve here, with the peaks and the arrows, is just this product of the observed and model anomalies, plotted for every year. The sum, normalized by the variances, is the correlation coefficient, which is the dash-dotted horizontal line.
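[Editorial sketch, not from the talk: the per-year decomposition just described can be written down directly. Each year contributes the product of its observed and ensemble-mean anomalies, normalised so that the contributions sum exactly to the correlation coefficient.]

```python
import numpy as np

def corr_contributions(obs, ensmean):
    """Per-year contribution to the correlation: the normalised product of
    observed and ensemble-mean anomalies. Contributions sum to the
    Pearson correlation coefficient."""
    xa = obs - obs.mean()
    ya = ensmean - ensmean.mean()
    denom = len(obs) * obs.std() * ensmean.std()   # population normalisation
    return xa * ya / denom
```

Peaks in this per-year series flag the individual winters that dominate the overall skill, which is how the five standout winters discussed next were identified.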
This way of looking at the correlation contributions, the covariance, enables us to identify individual years that contributed most to the skill we find. In most years the contribution is relatively moderate, but then you have a few peaks, and these are interesting years where the skill peaks. If we look at what happened in these years, these are the five years that contributed the most, that had these peaks. We see that in four of these five winters we had a strong positive Z500 anomaly over Greenland, which is associated with a negative NAO state; you can see that in winters in the 1930s and 1940s, in 1976, and in the more recent winter of 2009/10 that people might remember. These all contributed to the skillful predictions in the hindcasts. But there is also one year, the strongest positive NAO year, that was also very well forecast by the model and contributed to the skill as well. So again, as I said, the picture is a bit more complex: it's not just one phase, it's probably the extremes of either phase that contributed to the skill and shape the overall skill behavior that we see. But you also see that they can be quite scattered across the century, so if you only look at the last, say, 30 years, we would get a picture that is perhaps not quite representative. I'd like to say a few words about the signal-to-noise problem, because that is important for the North Atlantic region. You have probably heard about the RPC, the ratio of predictable components diagnostic, which colleagues came up with in a paper by Eade et al. in 2014. They basically relate the predictable component in the observations to the predictable component in the model world, and this is done by this formula here.
I don't really have much time to explain it, but there's lots of literature, and I have references at the end as well. The idea of this ratio is that for a perfect model ensemble we would have an RPC of one: the predictable components in the observations, in the real world, and in the model world would be similar. If the RPC is larger than one, then we have an over-dispersive system that is showing under-confidence, meaning that the model underestimates the predictability of the real world. In the opposite case, where the RPC is smaller than one, we would be in a situation quite familiar from seasonal forecasting where we are over-confident: we don't have enough spread, and the model's own predictability is larger than that of the real world. What they described in that paper, Eade et al. 2014, is that they find regions in the North Atlantic, indicated here by these warm red colors, where the predictable component of the observations is really quite a bit higher than the model's own predictable component. Several papers have been written about this in recent years, which I've listed here. The question really is, and this is a bit of a paradox, because it is not what we would expect: the model can predict itself less well than it can predict the observations. So how does that look in our long hindcasts, and what can we learn from them? We see here in blue and red the RPC values, again computed with a moving window, in these long simulations for the NAO index. Around one, where the dotted line is, is the perfect model, where the predictability in the real world and in the model world are equal.
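[Editorial sketch, not from the talk: the commonly used Eade et al. (2014) estimate of the RPC compares the observed predictable component, the correlation between observations and ensemble mean, with the model's own predictable component, the square root of the ratio of ensemble-mean variance to total member variance. The synthetic data and variable names below are illustrative.]

```python
import numpy as np

def rpc(obs, ens):
    """Ratio of predictable components (Eade et al. 2014 style estimate).
    ens has shape (members, years). RPC ~ 1 for a well-calibrated ensemble,
    > 1 when the model underestimates real-world predictability."""
    ensmean = ens.mean(axis=0)
    r_mo = np.corrcoef(obs, ensmean)[0, 1]            # observed PC
    pc_model = np.sqrt(ensmean.var() / ens.var())     # model's own PC
    return r_mo / pc_model
```

In a perfect-model setup, where observations and members are statistically exchangeable, this estimator fluctuates around one; that is the baseline against which the reported values of two to three stand out.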
And then we see some periods where this RPC is indeed larger than one, but also mind the large error bars, the grey lines there. And in the past we see periods where it's very close to one; around the mid-century period it is not substantially larger than one. If you estimate the RPC from the full 110 years of these long hindcasts, you end up with a value of nearly one, a nearly perfect value, for the uncoupled simulations, and roughly 20% above one for the coupled hindcasts, which is nowhere near the two or three reported for some of the shorter seasonal hindcasts from the Met Office, for example. So we would say that the predictable components of the observations and the model are not in disagreement if we look at them over the much longer period of 110 years in the seasonal hindcasts. However, we should acknowledge that there is quite a substantial amount of not very well understood multi-decadal variability in this behavior, and there are lots of activities and people curious to understand what's happening there. And that holds not only for an index and the behavior of its time series, but also for its spatial signature. We see here one way of looking at this, and there's more in this paper, where we contrast different periods. For instance, the third row looks at the period 1981-2009, the more recent hindcast period. On the left-hand side we have the skill, similar to what I showed before, with this lack of skill over Europe. On the right-hand side is the model predictability, the perfect-model approach, and it's higher in that case; that gives us this problem with the RPC. But if we go back, for instance, to the period in the row above, 1942 to 1970, the picture is very different.
There we basically have no skill in the areas over Greenland and the North Atlantic at all, whereas the model predictability, the perfect-model approach, is quite stable in time; you see that for the earlier periods as well. So it's much more robust and gives a much less variable indication of the predictability, whereas the observed predictability shows lots of variability, also spatially, over time. So... Yeah, I don't know how much longer your presentation is; we would like to have some time for questions. I'm very much aware, I'll be quick. I'll jump over some slides and just go to the points where I want to say a bit more, perhaps the PNA at one point. Sorry, Judith. Yeah, no problem. We find very interesting behavior for the PNA skill as well. The PNA is much more predictable in the current decades, as you can see here, but then we see very similar behavior, with skill dropping to nearly negligible levels in the mid-century period and recovering for the earlier parts of the century, which then raises the question of the link to ENSO. I should say I have a few plots here on how ENSO was forecast, but because I'm running out of time, and I was expecting I would run out of time, I thought I'd talk separately about the ENSO part of all this at the science workshop that will happen in the first week of August. So if any of you is interested in the multi-decadal predictability of ENSO and how that varies, I would like to invite you to come to that talk, where I'll focus especially on ENSO. And just a very last point, because it links perhaps a bit with Anish's talk about stochastic parameterizations and the role of stochasticity.
There were two very nice papers recently from a group in Kiel in Germany, Richard Greatbatch's group; they are by two students in his group, Wulff et al. in GRL 2017 and a follow-up by Rieke and colleagues this year, where they looked at tropical forcing of the summer East Atlantic pattern. So it's a different pattern now, not the NAO but the summer East Atlantic pattern. The second paper especially looked at the non-stationarity of the link between the tropical forcing and this extratropical teleconnection. They find multi-decadal variability similar to what we see for the NAO, and they come up with a simple stochastic model linking the driver, the tropical forcing, with the remote index in the East Atlantic. By introducing some noise, with noise levels of really just 0.2, quite low, they end up with these multi-decadal fluctuations in skill, which is a nice way to try and think about explaining the multi-decadal variability in skill that we find. I'd really like to come to the conclusions; sorry, it took a bit long. I hope I could show you that these long seasonal hindcasts from 1901 to 2010 provide a test bed for estimating seasonal predictability during distinct past climate periods, and I very much think that understanding the success and the limitations of predicting the past will increase our confidence in the future; the climate of the next 30 years is very unlikely to be the same as the climate of the past 30 years, of course. We found some evidence of multi-decadal variability in both the extratropical and the tropical forecast skill (I didn't have time for the tropical part, but I can tell you it is there), with a pronounced drop of skill in the mid-century decades but then again higher levels of skill at the beginning of the century.
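[Editorial sketch, not from the talk: the idea behind the Kiel stochastic model can be illustrated with a toy version. The coupling strength and noise level below are my illustrative choices, not the papers' values. Even a perfectly stationary linear link between a tropical driver and a remote index produces large multi-decadal swings in 30-year windowed correlation, purely from sampling.]

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, window = 500, 30
driver = rng.standard_normal(n_years)            # tropical forcing index
coupling = 0.5                                    # fixed, stationary link
index = coupling * driver + rng.standard_normal(n_years)  # remote response + noise

# 30-year sliding-window correlation between driver and remote index:
# it fluctuates strongly even though the relationship never changes.
r = np.array([np.corrcoef(driver[i:i + window], index[i:i + window])[0, 1]
              for i in range(n_years - window + 1)])
print(f"window correlations range from {r.min():.2f} to {r.max():.2f}")
```

The true correlation here is constant at about 0.45, yet individual 30-year windows scatter widely around it, which is one candidate explanation for apparent multi-decadal skill variations.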
And the skill reduction cannot simply be attributed to poor observational coverage. Instead, we would hypothesize that the changes in predictability are linked to intrinsic changes in the coupled climate system, whatever that means exactly. But the main conclusion from this is really that short hindcast periods are not representative of longer-term behavior, due to this decadal climate variability. And we suggest that the mid-century period stands out as an important period where we should perhaps test the performance of our future seasonal forecasting systems a bit more. Once again, achieving good forecast skill for recent decades is not a sufficient condition for guaranteeing similarly good skill in the future. With this, I'd like to finish; I have a few references here that people could look up later if these slides are shared. Thank you very much, Antje.