Thank you very much, and thanks for the invitation to come and speak. In the spirit that these opening talks are a bit of a review, I thought I'd go back and start with one of the first papers that I know of, anyway, which discusses decadal climate variability a little bit. This is Kincer from 1933, which has a whole series of long temperature records stretching from the 1890s up to 1933. It clearly shows there was a trend at the time, but also lots of variability as well. Particularly in Greenwich, for example, you can see the decadal variability. This also highlights that the tropical regions, on the right-hand side, tend to have less noise, less variability, so even at the time they knew that the tropics were less variable than the extratropics.

We've also been simulating climate variability for a long time. This is from the first IPCC assessment report in 1990. It's a personal communication; you probably wouldn't get away with that anymore. This is a control simulation from one of the first GCMs, in 1990, and you can very clearly see the decadal variability in global mean temperature. Now, it would be really nice if we could go back and rerun this with radiative forcing turned on rather than as a control simulation, and we can do that. This is what would happen, we think. This is trying to make the point that you can see pauses, surges, all kinds of variability superimposed on top of a long-term trend. Even back then, we knew that we should expect to see periods of rapid warming and periods of slower warming.

How does all this relate to local scales? No-one experiences global temperature directly; we experience temperature changes locally. This is one example showing global temperatures in red and central England temperature, where we have very good records stretching back a long way, in black. Obviously central England has a lot more noise.
We have very variable weather in the UK, but you can still see the fingerprint of global temperatures on the UK temperatures. You can see the familiar warming during the early 20th century, the cooling or flat period during the 50s, 60s and 70s, and then the rapid warming at the end. You can see that fingerprint of global temperatures at the local scale. And as the correlations listed at the top here show, as soon as you start smoothing the data a little bit, the correlations become extremely high.

That's temperature; it's not true for rainfall. Summer rainfall, for example, which is something we might want to predict, is far less correlated with global temperatures, as you might expect; the correlations here are basically zero. You might say there's something on multi-decadal timescales, but it's much weaker than for temperature. So global temperature doesn't tell us everything; there's a lot of variability on regional scales.

Just to continue the history a little: one of the first studies, I think, that I found anyway, which discusses how variability might affect future trends, is from a CLIVAR document. CLIVAR has been thinking about this for a very long time; this is CLIVAR Exchanges in 2001, Sutton and colleagues. The observations are shown in white, and this is a small ensemble, but at the time it was a large ensemble, of three simulations for the next 20 years or so, showing that there was a long-term trend but that it interacted with variability, so that very different trajectories were possible in the future. This type of activity, back in the early 2000s, is what motivated a lot of the decadal prediction work, I think.

Of course, now we can run much larger ensembles. This is one example from Clara Deser's work with the CCSM large ensemble, over Europe again.
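That point about correlations rising once you smooth can be sketched with synthetic data. The sketch below is purely illustrative, with made-up trend and noise amplitudes, not the actual CET or global records:

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2020)

# Toy series: the "local" record is the "global" signal plus large weather noise
global_t = 0.01 * (years - years[0]) + 0.1 * rng.standard_normal(years.size)
local_t = global_t + 0.5 * rng.standard_normal(years.size)

def smooth(x, w):
    """Running mean of width w years (valid part only)."""
    return np.convolve(x, np.ones(w) / w, mode="valid")

# Correlation rises sharply once the high-frequency noise is averaged out
for w in (1, 5, 11):
    r = np.corrcoef(smooth(global_t, w), smooth(local_t, w))[0, 1]
    print(f"{w:2d}-year smoothing: r = {r:.2f}")
```

Replacing `local_t` with a series that shares no signal with `global_t` stands in for the rainfall case: no amount of smoothing creates a correlation that isn't there.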
Most of us are probably quite familiar with this work: they ran a very large 40-member ensemble forward from modern times. This example is European summer temperatures. The top left shows the average trend over 55 years in this large 40-member ensemble; that's the mean. But if you look at the individual simulations, you can see a warmest and a coolest simulation showing very different patterns of trend. This is a 55-year trend, and you can see very large differences in what might happen. These ensemble members were all started from the same initial condition, just with small perturbations to the atmosphere. This is essentially the butterfly effect, the chaos, causing these divergences in the simulations. That's quite a large change just from that small perturbation in the initial conditions.

Just to highlight that a little more, here are the time series from certain points in this ensemble. The red line shows the warmest member and the blue line shows the coolest member, and you can see there's quite a dramatic difference between the ensemble members, just from a small perturbation to the initial conditions. For Paris, or France, or Oslo in the example shown here, you can see very different 55-year trends. Whereas for the global temperature trend on the bottom, they're much closer together, but not identical.

This raises a lot of questions. Is this true in other models? This is one model they did this in; is this model realistic, and will this happen in other models? And what happens if you perturb the ocean initial conditions as well as the atmosphere? We decided to test some of this in quite an idealised situation. We used the FAMOUS global climate model, which is a very coarse model, which means it's very fast, which means we can run lots and lots of ensemble members very quickly. We used a more idealised setup: a 1% per year increase in carbon dioxide for 140 years.
We have a long control run, and we picked at random one particular initial condition from that control run and ran 100 members, each initialised with a very tiny perturbation to otherwise identical initial conditions. You can see the ensemble for European winter temperatures as one example on the top right. Again, the differences in trends are extraordinary: there's one simulation which shows a very rapid warming, and another which shows a very rapid cooling over 30 years. The bottom right shows a histogram of 30-year trends. Again, you can see members with a strong cooling and members with a strong warming; there's a very, very wide range.

To explore what the ocean does, we also ran what we call a macro ensemble, where we took 30 different members from 30 different coupled initial conditions, well separated in the control run, to try and understand what effect that would have. You might imagine, of course, that if you build more uncertainty into the initial conditions, then you're going to amplify the spread even more, and that's what we see. Again, we have the same two diagrams as before, for European winter 30-year trends. The top right shows the time series, the middle right shows the micro histogram, and the bottom shows the macro histogram, where you're also taking into account the ocean initial conditions. The spread is a lot bigger, but the mean is also very different: in the macro case, you're seeing a lot more warm members than in the micro case. So it turns out that in the random selection for our micro case we happened to pick a particularly unusual ocean state, and I'll come onto that in a moment.

If we now look at the average trend, in the same way Clara showed in her work, the left column shows the micro ensemble and the middle column shows the macro ensemble.
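The micro/macro distinction is generic to chaotic systems, so it can be illustrated without a GCM. The sketch below uses the Lorenz-63 equations as a toy stand-in (this is not the FAMOUS setup, just an analogue): the micro ensemble applies tiny perturbations to one state, while the macro ensemble harvests well-separated states from a long control run.

```python
import numpy as np

def lorenz_step(s, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 system."""
    x, y, z = s
    return s + dt * np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(s, n):
    for _ in range(n):
        s = lorenz_step(s)
    return s

rng = np.random.default_rng(1)

# "Control run": spin up onto the attractor, then save a long trajectory
s = run(np.array([1.0, 1.0, 1.0]), 2000)
control = np.empty((3000, 3))
for i in range(3000):
    s = lorenz_step(s)
    control[i] = s

# Micro ensemble: one state, tiny perturbations, all run forward the same length
base = control[0]
micro = np.array([run(base + 1e-10 * rng.standard_normal(3), 2000) for _ in range(30)])

# Macro ensemble: 30 well-separated control-run states, run forward the same way
macro = np.array([run(control[100 * i], 2000) for i in range(30)])

print("micro spread in x:", micro[:, 0].std())
print("macro spread in x:", macro[:, 0].std())
```

At this lead time the macro spread dominates; run the members long enough and the micro spread catches up as the initial-condition memory is lost, which is the convergence behaviour seen in the GCM ensembles.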
In the micro ensemble, you actually see an average cooling over some parts of Western Europe, whereas you don't see that in the macro case, which samples the ocean initial conditions. If you look at the maximum and minimum ensemble members, you can get almost any trend you like, from plus three degrees to minus three degrees over 50 years; those are the individual members. You can also look at the maximum and minimum trends selected at each grid point independently. That saturates the colour scale: you can get any result you like just from perturbing the initial conditions very slightly. You can also do this with rainfall, and again you can see a very strong wetting or a very strong drying depending on the ensemble member you look at.

What's going on in this particular ensemble? If we look at the time series of European winter temperatures and compare the two ensembles, the micro ensemble is doing something a little bit strange, whereas the macro ensemble shows, as you might expect, just a more linear warming, because it's a straight 1% per year increase in CO2. What's causing this effect in the micro case is obviously some predictability and some memory in the ocean initial conditions. It turns out that for the ocean state we picked, the AMOC was in a particularly predictable state: all of the ensemble members had a very strong strengthening of the AMOC followed by quite a strong weakening, and you can see that reflected in the European temperatures. So there is some predictability here, but it's obviously going to be important to sample the ocean initial conditions when you're designing these ensembles, as well as just perturbing the atmosphere.
Just one final example of what effect that has. If we look at the probability of a cooling trend over the first 20 years of the simulations, when we're accounting for the ocean initial condition uncertainty we see quite a broad, light blue colour over much of the extratropics, whereas in the micro case, which is this very specific ocean initial condition, we see a very strong probability of a cooling over certain regions in the Southern Ocean and the North Atlantic. So again this highlights the differences between these ensembles and the need to sample different ocean states. As you go further ahead in time they do gradually get closer together, and there's less chance of seeing a cooling trend, but it takes 50 years for these patterns to converge.

One other example: in this very large ensemble we have lots and lots of members, so we can find lots of members which have zero trend over, say, a 15-year period, even when the CO2 is increasing. All four of these members have exactly zero trend over 15 years, and these are the spatial patterns of trend over those 15 years. You can see that the first two are almost exactly opposite, so there are all kinds of different types of variability which can offset the warming in this particular model. I should say this particular model does have a slightly enthusiastic variability; it's probably slightly too high compared to the real world, but the principle is there, and you see lots of different types of hiatus possible in this model. I think that would be interesting to explore a little bit more.
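Counting how often internal variability masks the forced warming is straightforward once you have an ensemble of time series. A hedged sketch with a synthetic ensemble follows: AR(1) variability on top of a linear forced trend, with all numbers illustrative rather than taken from FAMOUS.

```python
import numpy as np

rng = np.random.default_rng(0)
n_members, n_years, window = 200, 50, 15
forced = 0.02          # degC/yr forced warming (illustrative)
phi, sig = 0.6, 0.15   # AR(1) persistence and noise amplitude (illustrative)

# Common forced trend plus independent AR(1) internal variability per member
t = np.arange(n_years, dtype=float)
noise = np.zeros((n_members, n_years))
for k in range(1, n_years):
    noise[:, k] = phi * noise[:, k - 1] + sig * rng.standard_normal(n_members)
ensemble = forced * t + noise

# Least-squares 15-year trend for every member and every start year
trends = np.array([
    [np.polyfit(t[s:s + window], m[s:s + window], 1)[0]
     for s in range(n_years - window)]
    for m in ensemble
])

print("fraction of 15-year trends that cool:", (trends < 0).mean())
```

The same `trends` array lets you pick out near-zero-trend windows, the analogue of the hiatus members shown in the maps.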
What does global temperature do in this particular model? This is the histogram of all the possible trends in global temperature over 15-year periods, and 1.2% of them have a cooling trend, for example. If you then subsample, following that 1% or so that are cool, and look at what happens in the following 15 years, you see the distribution has shifted slightly to the right. You might expect this to happen; it's regression to the mean, if you like. After there's been a pause, you actually see a higher chance of a surge in the following 15 years. That's maybe not a surprise.

I'm going to finish with the point that Gavin brought up earlier, in a study on a slightly different topic. It comes down to how we compare our models and our observations. The top panel shows quite a familiar type of diagram comparing the observations of global mean temperature with the CMIP5 ensemble. Over the recent period you see that the observations are at the lower end of the ensemble range; the red line shows the normal projections from the CMIP5 ensemble. But of course, when we compare with the observations we're not comparing apples with apples, because in the observations we are using air temperatures over land but SSTs, sea surface temperatures, over the ocean, and it turns out that the SSTs warm slightly slower than the air temperatures. In the models we're just sampling air temperature everywhere, and this small effect actually makes quite a difference to the comparison; that's the difference between the red line and the blue line. So if we treat the models as observations, subsample the simulations where we have observations, and account for the observation type, so that we blend the simulated sea surface temperatures with the simulated air temperatures over
land, it makes a difference: it nudges the models down from the red line to the blue line. If you then also account for the updated forcings that Gavin described earlier, in the same comparison in the bottom panel, then as Gavin said, updating the forcings nudges the models down towards the observations. And if you now look at the blue line in the bottom panel, which accounts for the updated forcings and uses blended temperatures, the simulated SSTs over the ocean and the simulated air temperatures over the land, you see that the blue line is now much, much closer to the observations. So it turns out that how we do this comparison is actually extremely subtle in some of the details: we have to account for the variability, we have to account for the forcings, we have to account for the fact that we don't observe everywhere, and that we're not necessarily comparing like with like all the time.

So in summary: the climate, and certainly our simulated climate, exhibits substantial natural variability in temperature and rainfall, and also in sea ice (I haven't talked about that, but it's also true). Our models show a very large diversity in these characteristics, which we've talked about briefly earlier as well, and we need to better understand those differences if we're to robustly attribute what's going on in the observations. These large ensembles, which more and more groups are running now, are very valuable tools to explore possible outcomes at regional scales. We need to think about how we communicate and visualise this variability; we need to communicate to the public that we don't necessarily expect temperatures to go up all the time. And we do need to be very careful, when we perform these comparisons between models and observations, that we're comparing like with like, otherwise we might come to the wrong conclusion. There's also a poster on a similar topic discussing the sensitivity to reference periods, which is also
again quite subtle but also quite interesting. Thank you.
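As a closing footnote, the blended-versus-air-temperature point can be made with simple area-weighted averages. All the numbers below are hypothetical placeholders, chosen only to show the direction of the effect, not values from CMIP5:

```python
land_frac = 0.29        # approximate land fraction of the globe
tas_trend_land = 0.25   # hypothetical model air-temperature trend over land (degC/decade)
tas_trend_ocean = 0.18  # hypothetical air-temperature trend over ocean (degC/decade)
sst_trend_ocean = 0.16  # hypothetical SST trend; SSTs warm a bit slower than the air

# "Model" global trend as usually reported: air temperature everywhere
tas_only = land_frac * tas_trend_land + (1 - land_frac) * tas_trend_ocean

# Blended trend, built the way the observations are: air over land, SST over ocean
blended = land_frac * tas_trend_land + (1 - land_frac) * sst_trend_ocean

print(f"air-temperature-only: {tas_only:.3f} degC/decade")
print(f"blended (obs-like):   {blended:.3f} degC/decade")
```

Because the SST trend is the slower of the two, the blended (observation-like) model trend always comes out a little lower than the pure air-temperature trend, which is the red-to-blue shift in the comparison panels.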