Yeah, so that was a very nice introduction by Dirk, so I don't have to do too much introducing of the topic now. I'm going to talk about some work looking at the opposite end of the prediction problem: trying to figure out how far ahead we could potentially predict sea ice if we had all of the components that we want in prediction systems. This is work that was essentially done as part of a project in the UK called APPOSITE, led by Ed Hawkins, and Steffen Tietsche was also working on this in Reading. But we had a lot of help from a number of groups, too many to mention, but I've listed them here.

So, as you all know, Arctic sea ice cover has been decreasing very dramatically over the last few decades, but superimposed on this very strong trend there is actually a lot of year-to-year and decadal variability, and there have been arguments that this variability has been increasing in recent years, with dramatic, unprecedented minima in both 2007 and 2012. From an end-user point of view and from a scientific point of view, understanding whether we can predict these year-to-year variations is quite an important question.

This dramatic reduction in summer sea ice cover has also allowed the amount of shipping in the Arctic to increase, roughly tenfold over the last decade or so. You can see here a plot by Nathanael Melia, a PhD student in Reading who has been looking at shipping in the Arctic, and there's actually quite a lot of activity: this is ship density along here, on the order of ten ships or so, particularly along the north coast of Siberia.

OK, so what's the current state of the art in terms of sea ice predictions? This is a plot from a paper by Michael Sigmond at Environment Canada, for the CanSIPS seasonal sea ice prediction system. The graph shows the anomaly correlation, with the target month for the forecast along the bottom and the lead time going backwards. What we can see is that when the trend is included there seems to be relatively high skill, which I guess is not surprising: seasonal forecasts of the September minimum reach correlations of around 0.7 or 0.8 when started from May. If you detrend, however, you get much more modest skill, and in particular the skill seems to drop off after a few months. We also see features such as this apparent predictability barrier around early summer.

But we'd like to know whether these features we see in the skill metrics for these prediction systems are features of the real world, or whether they're due to observational or modelling inadequacies. In particular, understanding where the limit of predictability is, so that we can manage expectations about what we should be able to achieve in the future as we develop these systems, is quite important. That's what this project was essentially designed to assess.
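Just to make the detrended-versus-raw skill comparison concrete, here is a minimal sketch of how an anomaly correlation is typically computed with and without removing the linear trend. This is my own illustration, not the CanSIPS diagnostic code; the input arrays (one forecast and one observed September extent value per year) are hypothetical.

```python
import numpy as np

def anomaly_correlation(forecast, observed, detrend=False):
    """Pearson correlation of forecast vs observed anomalies.

    If detrend=True, a linear trend is removed from both series first, so the
    score reflects skill at year-to-year variability rather than the ability
    to follow the long-term decline.
    """
    forecast = np.asarray(forecast, dtype=float)
    observed = np.asarray(observed, dtype=float)
    years = np.arange(len(observed))
    if detrend:
        forecast = forecast - np.polyval(np.polyfit(years, forecast, 1), years)
        observed = observed - np.polyval(np.polyfit(years, observed, 1), years)
    f_anom = forecast - forecast.mean()
    o_anom = observed - observed.mean()
    return (f_anom @ o_anom) / np.sqrt((f_anom @ f_anom) * (o_anom @ o_anom))

# Hypothetical usage: the detrended score is usually noticeably lower.
# acc_raw       = anomaly_correlation(fcst_sept_extent, obs_sept_extent)
# acc_detrended = anomaly_correlation(fcst_sept_extent, obs_sept_extent, detrend=True)
```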
So we're trying to quantify the limit of the forecast horizon, the predictability horizon, of the Arctic environment, particularly sea ice on interannual timescales; in particular, to determine the mechanisms and the variables that lead to this predictability, in model simulations and in reality, and then to provide recommendations for forecast centres in terms of the developments they should put into their models.

OK, so I guess it's helpful to think about these studies in terms of a hierarchy of predictability studies. The simplest thing you can do is look at predictor–predictand relationships, and a reasonable amount of this has been done for sea ice now; I've listed a couple of papers here. The Chevallier and Salas y Mélia paper in 2012, in particular, highlighted the role that sea ice thickness has as a potential predictor of the summer sea ice minimum. But I'm going to mainly talk about these perfect model predictability experiments, which have essentially been used to assess the limit of predictability in models, first used by Griffies and Bryan in 1997 to demonstrate decadal predictability in the North Atlantic. I'm also going to talk a little bit about observing system experiments, which are essentially designed to figure out where the sources of memory are in the climate system, which is quite important to know. These sit between the diagnostic studies you can do with observations and the assessment of hindcast skill in actual seasonal prediction systems.

OK, so the perfect model framework. I'm sure some of you are familiar with it, but just to run over it: it's essentially asking how well climate models can predict themselves. In the real world we don't have observations of a lot of the key variables, such as the deep ocean and, in the case of Arctic sea ice, the sea ice thickness, and one might expect that in reality the skill you get is limited by these factors. We don't have this problem if we perform predictability experiments in model space. However, predicting the real world with the same GCM is obviously a much harder problem, so the skill we get in these perfect model experiments is likely to be higher than the skill we'd actually get if we were trying to use that GCM to predict the real world.

In terms of the ensemble design, we had a lot of groups contributing to this experiment; I've listed some of them here. All of these experiments included start dates on the 1st of July, because we were particularly thinking about predicting the summer sea ice cover, and we chose a range of initial states. Going back to the previous slide, if we think of this line here as the control simulation: you take a long, roughly 200-year, fixed-forcing control run with present-day greenhouse gases and other forcings, pick a number of start dates, and then add some white noise to the atmospheric initial state. Because the system is chaotic, the ensemble members diverge, and the rate at which this happens tells you something about how predictable the climate is in the model.
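Schematically, the perturbation step just described might look like the following. This is my own toy illustration, not the project's actual scripts; the dictionary layout, the `atmos_temperature` field name, and the noise amplitude are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_perturbed_ensemble(control_state, n_members=16, noise_std=1e-4):
    """Copy a control-run initial state n_members times, adding tiny white
    noise to the atmospheric temperature field only; ocean and sea ice fields
    are left exactly as in the control."""
    members = []
    for _ in range(n_members):
        member = {name: field.copy() for name, field in control_state.items()}
        member["atmos_temperature"] = (
            member["atmos_temperature"]
            + noise_std * rng.standard_normal(member["atmos_temperature"].shape)
        )
        members.append(member)
    return members

# Each perturbed member would then be integrated forward (here for three
# years); the rate at which the members diverge from each other measures the
# model's inherent predictability.
```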
The groups ran between 8 and 16 ensemble members each, to get reasonably robust statistics, and the forecasts were run for three years to look at the forecast horizon.

OK, so these are some plots from the models. We've got the seasonal cycle of sea ice extent and volume up here, and you can see that there's an enormous amount of variety. The black line here is the HadISST observations, and the PIOMAS reconstruction for sea ice volume, and there's a very wide range across the models. Similarly for the sea ice variability, you can see that there's quite some spread, both in the shape of the seasonal cycle and in the magnitude of the variability.

These are the predictability metrics that we get. We use a couple; this one is the normalized root mean square error. It starts off very low, because we've got perfect or close to perfect skill in the first month, and when it reaches one there's essentially no predictability left in the system; the black dots indicate where it is not significantly different from one. You can see that despite the very large differences in the mean state, the predictability in the models is actually fairly similar. There are some differences in the details: it looks like at least some of these models are predictable throughout the full length of the simulation, others not so much. In particular this E6F model here, which is a model that came from RV, is only predictable for the first summer, and then the second and third summers have no predictability at all, but it continues to be predictable in the winter throughout the length of the simulation. The anomaly correlations show something quite similar: predictability at long lead times for some of the models, and shorter lead times for others. Sea ice volume, on the other hand, tends to be a lot more predictable. In some of the models it looks like volume is predictable for all three years, but again in this E6F model volume is only predictable for more like a year and a half.

We can look at the spatial maps of this, and unsurprisingly most of the errors occur around the marginal ice zone; the differences between these plots essentially reflect differences in where there is sea ice concentration variability in each model's September climatology. It's the same for ice thickness, where we have quite large errors around the coastlines: any errors in the wind field result in ice piling up along the coastline and magnify the errors there.

Okay, so it looks like there's a lot more predictability in these models than we see in state-of-the-art forecast systems, and we'd like to know why that is. There are some large gaps in both the atmosphere and the ocean observing systems in the polar regions. Satellite-derived sea ice thickness products are becoming available, but they still have some problems: satellite altimeters like CryoSat have problems with thin ice, so their operational range is mainly ice over a meter thick, while radiometers such as SMOS can only distinguish ice thinner than about half a meter.
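To make the normalized root mean square error diagnostic described a moment ago more concrete, here is a rough sketch of one common perfect-model convention, in which each member is compared against every other member and the error is normalized by sqrt(2) times the control-run standard deviation, so the score saturates near one when the members are no more alike than random draws from the model's climatology. The array layout and the exact normalization are my assumptions, not taken from the talk.

```python
import numpy as np
from itertools import combinations

def normalized_rmse(ensemble, control_std):
    """ensemble: array of shape (n_members, n_lead_times), e.g. sea ice extent.
    Returns NRMSE per lead time: near 0 for a near-perfect forecast, near 1
    once all predictability has been lost."""
    ensemble = np.asarray(ensemble, dtype=float)
    sq_diffs = [(ensemble[i] - ensemble[j]) ** 2
                for i, j in combinations(range(ensemble.shape[0]), 2)]
    rmse = np.sqrt(np.mean(sq_diffs, axis=0))   # member-vs-member RMSE
    return rmse / (np.sqrt(2.0) * control_std)  # saturates near 1
```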
And then crucially there are issues whenever there is a melting surface, so remote sensing of freeboard, and hence sea ice thickness, is quite difficult throughout the summer, which is probably when we'd most like to have the information.

But which components are the largest source of predictability? We've done some experiments looking at the role of sea ice thickness, to try and assess how important it is for model initialization. We essentially ran another set of simulations with the HadGEM model, twinned with the experiments we did with this model before, and re-ran these perfect model simulations, started both in January and in July, except that in the otherwise perfect initial state the sea ice thickness is replaced with the model climatology everywhere. Everything else, such as the ocean and atmosphere initial conditions, is exactly the same as in the perfect run, so we're just looking at the impact of degrading the memory in the sea ice thickness.

And this is what we get. This is the normalized root mean square error again; the solid lines are the pair of forecasts started in January, the black line is the perfect forecast, and the red line is the one where the sea ice thickness information was removed. As you can see, they're actually very similar through the first six months of the forecast starting in January, and then slightly different during the summertime. But if you start these simulations during the melt season, the predictability almost disappears: looking at the difference between the black dashed line and the red dashed line here, there's an enormous reduction in skill just from removing any knowledge of the thickness. So it's very dependent on the start date.

Why is that? We can look at some spatial maps to try and understand this a bit better. This is the root mean square error of the sea ice thickness field; we've imposed these errors, so it's not a surprise that they're there, and essentially as the error in the unperturbed simulation increases, the difference in the errors between the two simulations decreases. But it doesn't have much impact on the sea ice concentration field, certainly to start with and through to March; in September, when the ice cover contracts and these thickness anomalies affect the pattern of melting, the imprint of the thickness errors can be seen.

When we start the simulations in July, it looks quite similar in terms of the thickness errors, but crucially the errors in the sea ice concentration field are much larger than they are in January, almost from the start, because any errors in the thickness change where, and how fast, the ice is melting as we progress through the melt season, and that essentially removes all of the memory that results in skillful prediction of concentration. It only seems to be an issue for the first summer, though.

In terms of the atmospheric impact, removing the thickness information seems to have quite a large impact on the skill during the first month, particularly when we initialize in January; this is the percentage increase in error of the mean sea level pressure field.
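As a toy illustration of how the degraded initial state in these thickness-denial experiments is constructed (this is my own sketch, and the field names are hypothetical, not the model's actual restart variables):

```python
def degrade_initial_state(perfect_state, thickness_climatology):
    """Return a copy of the perfect initial state in which only the sea ice
    thickness field has been overwritten with the model climatology for that
    start date; ocean, atmosphere and ice concentration fields are left
    exactly as they were."""
    degraded = {name: field.copy() for name, field in perfect_state.items()}
    degraded["sea_ice_thickness"] = thickness_climatology.copy()
    return degraded

# degraded_july = degrade_initial_state(perfect_july_state, july_thickness_clim)
# The degraded and perfect ensembles are then integrated identically, so any
# difference in skill is attributable to the lost thickness memory.
```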
Okay, so in summary: it looks like there is potential for skillful predictions of sea ice extent out to one to two years or so in summer, longer in winter, and much longer predictability for volume, but model biases and a lack of complete observations really reduce the skill when predicting the real world. We demonstrated that sea ice thickness is particularly important for improving these forecasts. The predictability data sets are available at the BADC if anyone is interested in looking at them for other things. So I'll just leave some discussion points up there.