Yes, good morning. It's great to be here, and a special good morning to all of those jet-lagged in the room; I very much appreciate that you're here. What I would like to do over the next 15 minutes is to give a bit of a perspective on what we do, or rather don't, know about the past and future evolution of sea ice, both in the Arctic and in the Antarctic. And since I realize that, surprisingly, not everybody in this room thinks that sea ice is the most fascinating thing in the world, I thought I'd give a talk that also has some perspective for all the other things we're looking at here. So it'll be a bit of a generic talk.

Just to get started, three pictures of how the Arctic used to look, how it looks today, and how it will possibly look in the future. This is the Arctic Ocean in summer 1979, the first year of the continuous satellite record. You see that in this year, at the beginning of the satellite record, in summer most of the Arctic Ocean was covered by sea ice. Over the following decades that sea ice got smaller and smaller; this is a snapshot from September 2007, eight years ago. So over the past three decades we've lost roughly half the area of Arctic sea ice, and the ice has also thinned by roughly 50%. Even early in the morning the math isn't too complicated: if you halve the area and halve the thickness, only a quarter of the volume remains. So it turns out that over the past 30 years we've lost roughly three quarters of Arctic sea-ice volume in summer.

And if we've lost three quarters of Arctic sea-ice volume in summer over the past three decades, then it seems to be only a question of time until the Arctic looks like this in summer: an Arctic Ocean with barely any sea ice on it. For some reason, people want to know when this happens. And it's a classical problem of decadal predictability, because it's not impossible that those two X's up there stand for a number on a decadal timescale. So if you want to know when something happens in the climate system, there's one obvious thing to do: you take the IPCC report, you open it, you look at it, and you get the answer. And here's the answer. If you take the RCP8.5 CMIP5 simulations, and we say that Arctic sea ice is roughly gone when summer sea-ice area drops below one million square kilometers (a simple threshold test; see the sketch below), we can just look at what the models are saying. And the models give us the definitive answer that under RCP8.5, Arctic summer sea ice will be gone at some point between 2005 and 2130, which is of course anything but a precise answer. For some reason, neither the public nor politicians nor scientists are very happy with this answer, and they try to narrow down this uncertainty range.

So the first thing I would like to do, throughout much of my talk, is to look at this question: why is it so hard to narrow down the period in which Arctic sea ice will be gone? And in the second part I will briefly touch upon some possible ways forward.
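To make that criterion concrete: diagnosing the "ice-free" year is just a threshold test on a time series. Here's a minimal sketch of that test; the series and all numbers in it are invented for illustration and don't come from any particular model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented September sea-ice area series (million km^2), one value per year.
years = np.arange(2000, 2101)
area = 4.0 - 0.035 * (years - 2000) + rng.normal(0.0, 0.4, years.size)

# "Roughly gone" in the sense used above: area drops below 1 million km^2.
below = area < 1.0
first_ice_free_year = int(years[np.argmax(below)]) if below.any() else None
print(first_ice_free_year)
```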
So why is it so hard to figure out when the sea ice is gone? Because there's a very standard method to figure out which of these models is good: we can just compare model simulations to observations. This is the observational record of Arctic sea-ice evolution in summer. We can take one model, shown in blue here, which is obviously a bad model, because if you calculate a trend, it's a slight positive trend. We can take another model which has a clear negative trend, which is much better, and we can say model 2 is obviously much better than model 1. Now the problem is, as for example Ed Hawkins pointed out yesterday: it's the same model. Model 1 and model 2 are both our Max Planck Institute Earth System Model, and internal decadal variability simply makes this model simulate, in one simulation, an increase in Arctic sea ice over the past 30 years, and in another simulation an evolution of Arctic sea ice that is similar to what has been observed. So it's very hard to say which of these models is worse, because it's the same model.

And sometimes, I think, we are all drawn to believe models more when they agree with observations by some metric. So, to clarify in a bit more detail why this might be misleading, and because I'm a very simple person sometimes, I took a six-sided die and said: let's assume that the real sea-ice trend is a six, just as a definition. Then I worked hard and produced a die model, and I let it run three times, so it's a three-member ensemble: I got a one, a three and a five. And I worked harder and harder, because the mean didn't look right, and produced another die model, which gave me a six, a five and a seven. Then I ran all my metrics over these results, and I figured out that the first one is a bad model of a die, and the second is a really good one, because its mean agrees perfectly well with reality. So maybe the first take-home message, which I would believe is obvious but sometimes maybe isn't: the model that best agrees with observations is not necessarily the best model.

To emphasize this point once more, what I've done here is to look at the trends calculated from all the CMIP5 models over the past 30 years. This is the trend in Arctic sea-ice area in million square kilometers per decade, and each dot is a 30-year trend; I synthetically increased the ensemble size a bit by shifting the start and end dates five years forward and backward relative to the satellite period, to capture a bit more of the internal variability. What you see here is that the models have a huge spread in the trend over the past 30 years. Decadal variability makes basically any trend possible (well, not quite, but there's huge uncertainty in how the real trend should have looked). And just looking at plots like this, where you see how large the internal variability of trends is, it's quite straightforward to come to another very obvious conclusion: metrics with large decadal variability are not helpful in evaluating free-running model simulations; I should have said, on decadal timescales. Another thing that is sometimes overlooked: people say, well, it's a 30-year trend, and 30 years somehow captures climate, so there's this notion that if you do something over 30 years, internal variability is kind of gone. But the problem is that at the moment we are in a very rapidly changing background state, and whenever that is the case, 30 years are often not enough to rule out internal variability; the toy calculation below makes this concrete.
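A toy illustration of that last point, under assumptions of my own choosing (an invented forced trend plus AR(1) noise standing in for internal variability); this is not the CMIP5 analysis shown on the slide:

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, n_members = 30, 1000
forced_trend = -0.08  # invented forced trend, million km^2 per year

trends = np.empty(n_members)
for m in range(n_members):
    # AR(1) noise mimics internal variability with year-to-year memory.
    noise = np.zeros(n_years)
    for t in range(1, n_years):
        noise[t] = 0.7 * noise[t - 1] + rng.normal(0.0, 0.3)
    series = forced_trend * np.arange(n_years) + noise
    trends[m] = np.polyfit(np.arange(n_years), series, 1)[0]

# One and the same "model", yet a wide range of 30-year trends.
print(trends.min(), trends.max())
```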
So another thing we could then do is to look at more specific metrics to figure out which of these models is good or bad. One is shown here: a histogram of Arctic summer sea-ice concentration. If you just look at the right-hand bar, it shows how much of the summer sea ice in a certain model has more than 90% concentration in a model grid cell, so more than 90% ice and less than 10% open water. Some models have many grid cells with such high-concentration ice, shown in red here, and some models have very little of this very high-concentration ice in summer, shown in blue here. If you look at these models, you quickly figure out: well, that's a really good metric, because half the models say the ice in summer should be compact, with lots of high-concentration grid cells, whereas the other half say no, no, that's wrong, summer sea ice has relatively low concentration on average. So you can throw out half of those models just by comparing them to observations.

But we have different observational records from satellite. One is shown here, that's the Bootstrap algorithm; another is shown here, that's the NASA Team algorithm. These two final panels are the observational records, and we don't know which of these is wrong and which is right. These are the observed sea-ice concentrations in summer, these are the models, and this observational uncertainty, the fact that we don't really know the sea-ice concentration in summer, has huge implications for predictions.

Here's just an example for a seasonal prediction. We used our seasonal forecast system and initialized it in May, so everything was spun up, and the only difference between the two simulations we did was that one used the NASA Team sea-ice concentration record and the other the Bootstrap record. We then left the model alone in May with these two different concentration datasets and let it run for four months. This shows the difference in September sea surface temperature between the two simulations. Three months after we had left the models alone in the Arctic, they show a three-kelvin difference in sea surface temperature, simply because of the uncertainty in the sea-ice concentration that we put into the models in May. So another obvious take-home message: observations are not the truth, even though they're surprisingly often treated as such. Observational uncertainty can actually be surprisingly large, which again makes it very, very hard to figure out what happens in reality, because observations are not reality either.

Now, there are metrics for which observations are possibly more stable or robust, especially if we combine them with reanalysis. Here's one example: the mean thickness of March sea ice, which we got by taking the reanalyzed sea-ice volume and dividing it by the sea-ice area. That gives us a metric that many models fail to match.
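That diagnostic itself is just a ratio. A minimal sketch, assuming you have gridded March sea-ice volume from a reanalysis and sea-ice area on the same grid; the field names and values here are invented placeholders:

```python
import numpy as np

rng = np.random.default_rng(2)

# Invented gridded March fields on a common grid (placeholder values only).
ice_area = rng.uniform(500.0, 2000.0, (180, 360))              # km^2 of ice per cell
ice_volume = ice_area * rng.uniform(0.001, 0.004, (180, 360))  # km^3 per cell

# Mean thickness: total volume over total ice-covered area, converted km -> m.
mean_thickness_m = ice_volume.sum() / ice_area.sum() * 1000.0
print(mean_thickness_m)  # compute the same number for a model to compare
```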
So again, we could say these are obviously bad models and we should throw them out when we look at the future evolution of sea ice, while some models agree super well with this metric. The problem with this metric is that some models used it as a tuning target, so some of the agreement of some of those models is simply because they aimed to match just this particular metric. It's not that such a model is particularly good; it just so happens that some models use March sea-ice thickness as a tuning parameter while many other models don't. So another take-home message: model tuning really can mask missing physical realism. And I think it's really crucial, especially for the upcoming CMIP6 activities, but also for decadal prediction studies, to document how models were tuned, so that we can understand which of these metrics we matched by chance and which simply because we tuned for them.

Then, if we look at decadal predictions, the one thing we are always interested in is of course the memory of the system. In a study that Stefan Tietsche did when he was doing his PhD in our group, we tried to figure out how much of the memory in the Arctic system is controlled by sea ice itself. This is the sea-ice evolution in a simulation (that was still CMIP3, under the A1B scenario), and we just wanted to see how large the memory of sea ice is in such a simulation. So what we did was to remove all sea ice in 1980, then remove it again in 2000, and again in 2020. We were expecting that if we remove all the sea ice, the ice will remain gone forever, because the ice-albedo feedback is very strong, and we could make perfect decadal predictions. Now here's what happened: we removed the ice in 1980, so we forced the model to have no ice anymore, and after two or three years the ice was just back where it wanted to be. That is because negative feedbacks, especially in winter, where outgoing longwave radiation increases and where thin ice grows very quickly, simply reset the memory of Arctic sea ice very, very quickly. Which obviously is a shame if you want to do forecasts: negative feedbacks make it hard to beat persistence forecasts, if they reset your memory again and again.

So these were some of the issues we face when we try to understand how sea ice might evolve in the future. But not all hope is lost, I hope. There are certain things we can do to figure out when Arctic sea ice might be gone, and one thing that is getting more and more attention is emergent constraints. One example is shown here, from a 2009 study, Boé et al., which looked at the trend over the past 30 years or so, and then at how much ice actually remains over this 20-year period here. They find a relationship between those two, and then you can just look at what the real trend was and get an estimate of what the future might look like; the basic recipe is sketched below. Now, one issue with these kinds of estimates is of course that there are issues with them as well; I won't go into detail about those here.
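That recipe is simple enough to sketch: regress, across models, the future quantity of interest against the historical trend, then read the fit off at the observed trend. The numbers below are invented for illustration; they are not from Boé et al.

```python
import numpy as np

# One point per model (invented): historical 30-year sea-ice trend
# (million km^2 per year) vs. remaining future ice (million km^2).
hist_trend = np.array([-0.02, -0.05, -0.08, -0.03, -0.10, -0.06])
future_ice = np.array([3.1, 2.2, 1.0, 2.8, 0.6, 1.9])

# Cross-model regression: future ice as a linear function of the past trend.
slope, intercept = np.polyfit(hist_trend, future_ice, 1)

observed_trend = -0.07  # assumed observational estimate
constrained = slope * observed_trend + intercept
print(constrained)  # the observationally constrained estimate of future ice
```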
But for these forecasts to be robust, it's really important that the models don't all give the same simulations; a bit of model diversity is really crucial for getting these emergent constraints. And it's sometimes a bit worrying when you look at the huge list of CMIP5 and CMIP6 models and then go into the detail of the individual components: I think we sometimes have a false sense of how much diversity there really is in our climate-model zoo, since for individual components there are often only two or three around that basically everybody in the world is using.

But to me, the probably most promising way forward really is to understand things. And I haven't talked much about Antarctic sea ice yet. Antarctic sea ice, as I guess we all know, has been very slightly increasing over the past decades. Some people say it's a significant increase; I have some issues with the term 'significance', because it assumes all sorts of Gaussian, uncorrelated distributions when people do these kinds of studies. So let's just say it's increasing, and models don't capture this increase. But if you look at the time series here, from our CMIP5 simulations, well, there are periods where Antarctic sea ice is increasing and periods where it is decreasing. Maybe this is shown even better here, which is a cumulative density function. We did a hundred-member ensemble with the fully coupled MPI-ESM-LR, and this just shows what percentage of simulations has a trend smaller than a certain number (the computation behind such a curve is sketched at the end of this part). The observed trend over the past decades in winter is around here: around 20% of our simulations show an even larger increase, and 80% show a smaller one. And that was basically the message I had at the beginning: we can't really tell much about model quality just by looking at trends.

So the thing we really need to do in order to tell whether a model is good is to understand processes, to understand why a model differs from observations. If we look into more detail, especially in the Antarctic, it's important to look at regional patterns. We see in the observations this large increase here in the Ross Sea, which we don't really see in our model simulations. So something clearly is wrong: even if we match the overall evolution, the regional patterns don't match at all. And if we look into further detail, at the surface pressure patterns, it turns out that in reality, or at least in the reanalysis, the surface pressure in this region decreased quite strongly over the past decades. So we got more offshore winds here, and these offshore winds just blew the ice out over the ocean, and we didn't get this in other regions. And because we have this asymmetry, we get an overall net increase, whereas in our simulation everything is very zonally symmetric. So clearly the sea-ice differences we have between model and reality are to a large degree driven by the fact that we don't get the surface pressure right in our model. And this then allows us to fix things, to make things better, to get an improved sea-ice evolution in our model.
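Coming back for a moment to that cumulative distribution: the computation behind it just asks where the observed trend falls within the ensemble's trend distribution. A minimal sketch, with invented numbers rather than the MPI-ESM-LR output:

```python
import numpy as np

rng = np.random.default_rng(4)

# Trend of each ensemble member, plus the observed trend (invented numbers).
member_trends = rng.normal(0.005, 0.01, 100)
observed_trend = 0.012

# Fraction of members with a smaller trend: the empirical CDF at the observation.
frac_smaller = (member_trends < observed_trend).mean()
print(frac_smaller)  # a value near 0.8 would match "80% show a smaller increase"
```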
So, to come to the end, I think there are a few ways forward. Model convergence is not necessarily desirable, since it might undermine emergent constraints, which rely on model diversity. The other really important thing is that understanding and evaluating processes behind metrics will really help us make progress; a lot of the studies we're seeing are just diagnostic, and they don't really help us understand things. A third thing that I think will be helpful is to simply appreciate the fact that we can't answer certain questions precisely. We shouldn't give the public the impression that we can tell them exactly when Arctic sea ice will be gone, and once we are honest and open about that, it really takes a lot of pressure off us. Right now people think everything we do must be super policy-relevant, and I don't think that's true; not all our science must be directly policy-relevant. I think curiosity is really nothing to be ashamed of, and I think we can be a bit more upfront about getting that across, possibly also to funding agencies, even though I appreciate that that will be very hard.

Okay, so, conclusions. The observational record is only one realization of an infinite number of possible trajectories, and hence agreement of a simulation with observations does not necessarily mean that that simulation is particularly good. Model evaluation and improvement, in my opinion, is most appropriately achieved through understanding, for example understanding of key processes; I'm not even sure we always need new model studies, we should really aim at understanding the ones we have. And finally, I think communication in both directions between models and observations is key for success, and that is fostered for example by this workshop, which I'm really grateful for. I'm very grateful to the organizers, and thank you all very much for your attention.

[Question from the audience] Thanks for an interesting talk. I definitely agree with your third point, that we need to understand the processes much better. But just on the constraint issue: I wondered whether anybody has done a simple constraint where you look at the amount of ice that we have now, because it looks to me like the models that have the most ice take longest to melt it. Has anybody tried that?

[Reply] Yes, there are a couple of studies, for example by François Massonnet, who looked in more detail at whether, if you have very little ice now, in reality and in the model, it's more likely that that model will capture the real evolution, because some models just have absolutely excessive amounts of sea ice and will not get right what happens in the future. The thing I just want to warn against is that this doesn't say anything about the quality of the model; it just says that maybe those models will help us more to understand how the future will look.

[Comment from the audience] Hi, so perhaps I should just say something briefly about the IPCC assessment of sea-ice extent, which takes Arctic September sea-ice extent. The sub-selection of the models was based on the trends, on the Massonnet study, on the mean amount of ice, and on the seasonal cycle, and it took a huge amount of discussion amongst the various sea-ice groups in different chapters to come to the point at which we could sub-select some of the CMIP5 models in order to make our assessment of projections. To me, this was one of the triumphs of my time in the IPCC: that we managed to get one projection variable in the whole report
where we sub-selected the models. So I think it's right to be critical of that, but I think we also need to think about how we can do that for other variables. Do you want to respond to that quickly?

[Reply] Yeah, I think maybe we can take that up in the discussion session. It's just that, personally, I'm not a very big fan of sub-selecting models, because I really think we don't understand well enough why they differ. It's one thing to use emergent constraints and say there is a robust physical relationship between one metric and another, but to say these models are good and those are bad, I do have some issues with that. But I'd be very happy to discuss that further in detail.