It should go now. It looks like it's coming down. Yeah, this is also a connection, right? I don't know if that one goes there. Let me try it with, sorry, I don't have an HDMI port on this side of the Mac. It's because this thing keeps slipping; it hasn't got a screw. OK, nobody breathe. I'll move. If we have an earthquake, it's going to go down again, I'm sure. OK. But I do have to press this. So, I want to look at S2S skill improvement. First, though, I asked Fred whether you'd already had an advert, because I want to make a couple of advertisements about our exchange programs, and Fred said, which exchange program? So this is for Fred, OK? A little advert. Just in case you haven't already seen it on our website, we have a number of opportunities for scientists from the South, from developing countries, to come and visit us, and not just on these shorter workshops, which are rather intense, as you've probably experienced these two weeks. For example, we have an associate program, which you might have seen: if you get selected, it's open to junior and also senior scientists, and it allows you to visit us three times in a period of six years, funded by ICTP, normally with visits of around two months, with some flexibility. Then the STEP program: if you have PhD students with a local registration in your country, but you feel that their research area overlaps with the expertise here, you can apply to the STEP sandwich program, which means joint supervision and a six-month visit per year for the three years of the PhD. And we also have the diploma, which is what's been keeping me away from you a lot of the time these two weeks, because I've had quite a lot of teaching on it: a one-year intensive, master's-like, pre-PhD program, with fully funded participation for 10 students. That's the first advertisement; details are on the website.
The second one is that if you've enjoyed what you've been doing these two weeks, please consider participating in the sub-seasonal to seasonal prediction session being organized at the next EGU, which is in April of next year. Two deadlines: the 1st of December is when you need to submit your abstract if you want to apply for travel support as a junior scientist, and otherwise there's a 10th of January deadline for plain abstract submission if you're self-supporting. OK, that's the end of the publicity. So, you've seen this kind of thing already in terms of the spatial scales and the temporal scales of various phenomena in the atmosphere, so I'm not going to dwell on it too much, just as an introduction. When we're looking at hours, we're thinking about nowcasting. When I want to cycle home, I look at the local radar, and if it's working, it tells me when I can leave work and arrive home dry. Not so important here as it was in the UK, of course. Then we have forecasts in the medium range, out to two weeks or so, looking at synoptic scales. But this talk is basically going to be looking at the sub-seasonal and seasonal scales. These are the kinds of time scales you've been looking at, and you've been hearing a lot about the NAO and the MJO and ENSO on those time scales, which we want to be predicting with these systems. And at the top there is decadal: in CMIP6 we'll be having a vastly expanded range of experiments looking at initialized decadal prediction, which is also an area that Fred has worked on. So we're focusing on these two. We talked a little bit last night at dinner about these kinds of graphs. The sub-seasonal, of course, is slotting in between these two. And actually, I looked again, and in fact I wasn't quite right: this is another version of that, where we have the forecast lead time.
Now you've all been using the S2S database, so you're aware of what the lead time is referring to: the number of days after the forecast has been initialized and let go. So on this graph we have the forecast skill. This blue line represents the skill dropping off in, should we say, the deterministic range. And then we have another couple of lines, for the sub-seasonal that's slotting in here, and the seasonal. This graph used to actually start with these lines at zero, ramping up and then ramping down; that's what used to particularly upset me. So already this graph introduces nicely what I want to talk about today. We have to think about why, for example, this purple line for the seasonal forecast would lie above the blue line at day 40 lead time. Because I'm teaching a lot on the diploma, today I'm in interactive mode. So you're not getting your lunch until you shout out answers; if you sit there quietly, you won't be eating. So, from the participants: why could you expect the purple line here to have higher skill than the high-resolution deterministic? This is the highest resolution, this is lower resolution, and this is lower resolution still. So it's got a lower resolution; why should it be better at these ranges than the deterministic system? None of the lecturers are allowed to respond. Any ideas? How can it actually have this skill? How can it be jumping up here compared to that? For example, if we look at the slide here, it says sub-seasonal forecast predictability comes from monitoring the Madden-Julian oscillation, land surface data, and other sources. You've been hearing all about that in the previous talk and earlier in the week. Seasonal forecast predictability comes from sea surface temperature and the ENSO state, El Niño and La Niña. But why would, for example, the seasonal forecast be better at ENSO than the short-range deterministic? Any ideas?
They're controlled more by the boundary conditions that we're prescribing than by the initial conditions. OK, but what is it about the boundary conditions that this system does well that that one doesn't? OK, so this one is coupled to an ocean model; maybe the short-range doesn't have an ocean model. So you have to think about the reasons when you look at these kinds of charts, and when you make these charts. For the sub-seasonal, for example, it mentions the Madden-Julian oscillation. Well, if the MJO skill in the model is mostly determined by the convection scheme, and the convection scheme is the same in both of these, why would you expect the skill to jump up? If the interaction with the sea surface temperature is playing a strong role, then you might think there would be a difference, for example, due to the coupling. So these jumps can only really be achieved, should we say, if you have different physics in that system, or a different way of initializing that takes advantage of a data stream that's not in the other system, and so on. It's not just the case that, yes, the MJO gives you predictability maybe up to week three or so; you have to think about why you would have that in one system and not the other. So that kind of motivates the talk, and it also slots in nicely with last night's dinner conversation. OK. Can you say something about a different way of doing the modeling? I might be confused, but I feel that you're talking a lot about resolution and the ECMWF approach. Yes, this is purely dynamical, yeah. Other models might be a bit different. So sub-seasonal forecasting isn't necessarily done the way ECMWF does it; this might apply differently to other models that are not ECMWF's. That's right, but I mean, the argument still holds.
If you were, for example, to have a statistical system that takes advantage of aspects of the MJO to get this gain, that's still a different representation of the physics that's given you the gain associated with that phenomenon. I see what you're saying, but I think the point still holds. But this talk is going to be ECMWF-focused, so I just wanted to quickly remind you about the system; I know I sketched this on the blackboard last week. I'm starting from the old system, as it was basically a decade ago when I was still there, where we had a deterministic run at the highest resolution that went out for the next 10 days, supplemented by 51 forecasts in an ensemble to sample the uncertainty. You remember that cloud of points I showed on the blackboard with the evolution of the forecast. These, of course, are run at a lower resolution, due to the cost of running 51 forecasts out to a longer period of 15 days. And then down at the bottom here we had the seasonal system. Franco was the head of the seasonal forecasting section that was developing that system. When I was there it was still System 3, which became operational near the beginning of that period. This is run at an even lower resolution, because it goes out to seven months in advance, to make that affordable, and it's extended four times a year out to 13 months, one year ahead. And then Frédéric Vitart basically slotted what we called at the time the monthly system into the space between the two, with forecasts going out to 32 days, initialized once per week, every Thursday, just in time for the weekend, to plan your weekend activities. And I think now basically all the S2S systems, I hear, are starting to converge on Thursday start dates if they're doing burst starts once a week. And these had hindcasts spanning 18 years, so you know what they are, for the calibration, with five members.
Then at a later period, the EPS and the S2S system, the monthly system, were basically combined. The forecasts were extended, run twice a week, out to 48 days, with a resolution change partway through the forecast, and with 20 hindcast years and 12... no, I've got that wrong, it's 11 now: 11 members, 46 days. Almost there; that's my 5% uncertainty by now. This talk's all about uncertainty anyway, so you always have to put an O in curly brackets around my numbers in talks. So, we want to assess the S2S model skill. Just to remind us, the hindcast's primary function, of course, is to perform bias correction and output calibration, but it's also extremely useful for looking at skill over the period of the hindcast. A lot of you have been basing your projects on the hindcasts in the S2S database, and I'm sure it's been keeping you very busy; lots of data there to look at. The advantage, of course, is that you've got this long period of time that's sampled, but your hindcast ensemble is smaller. And just to remind you as well, I know you've probably seen this already, so I won't spend too much time on it: the S2S database is actually split between these two ways of doing the hindcasts. With on-the-fly hindcasts, every time you have a start date, the 10th of a month, for example, you run hindcasts for the previous 20 years, starting on the 10th of the month, so the hindcast actually matches the start date. With fixed hindcasts, you run a set of hindcasts once and they stay fixed in time; you don't rerun them. For the seasonal forecast system, that's what they do: the hindcasts are basically fixed, because of the expense; you can't really afford to rerun them. So you don't have the match in dates, but you can perhaps afford a larger ensemble size, because you're only running these once, whereas you do tend to try and squeeze things when you're running on the fly.
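As a small sketch of the on-the-fly approach just described: each real-time start date is re-run for the same calendar day in each of the previous 20 years. This is only an illustration of the bookkeeping, with function names of my own; a real 29 February start would need special handling.

```python
from datetime import date

def on_the_fly_hindcast_starts(realtime_start: date, n_years: int = 20) -> list:
    """On-the-fly hindcasts re-run the same calendar start date (e.g. the
    10th of the month) for each of the previous n_years, so every hindcast
    start matches the real-time start date exactly.
    (Sketch only: a 29 February start date would need special handling.)"""
    return [realtime_start.replace(year=realtime_start.year - k)
            for k in range(1, n_years + 1)]

starts = on_the_fly_hindcast_starts(date(2014, 10, 10))
```

Fixed hindcasts, by contrast, would just be a constant list of start dates computed once and never regenerated.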
So the Met Office system, for example, just to contrast, I don't know, have you seen this plot already in the workshop? Because I haven't seen it. It's just a contrast with another system: the Met Office, rather than starting once or twice a week, actually run lagged ensembles. Every day they run four forecasts, some shorter, some longer, and then they accumulate those over time as lagged starts. So if you have four per day, then in a week you will have 28 forecasts. The advantage of that is a smooth evolution of lead time, rather than jumps twice a week when you reinitialize your forecast. But it's not clear which is necessarily best: on the Thursday when you run the ECMWF forecast, you've got a larger ensemble of fresher forecasts. OK. So should we expect an improvement of S2S systems, of the monthly system, over the seasonal forecast systems? Well, I did a little test of this in a paper that was actually about malaria prediction. We just looked at the precipitation, and we only looked at 12 start dates, so you have to take these plots with a pinch of salt. What I'm showing on the left is basically the whole month, day 1 to 32, of the monthly forecast, and I'm comparing it to the seasonal forecast. I took the first start date in each month, to try and minimize the difference in the start dates. This is just the correlation of the ensemble means; it's not a probabilistic measure, it's just the correlation. And then on the right-hand side, for those 12 start dates, I looked at the difference between the monthly system and the seasonal system. Red is showing an improvement in the correlation, with the colors going from the yellows, which are basically 0.02 in the correlation, up to about 0.2. So where do these gains come from, then? We've talked about some things already.
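The lagged-ensemble arithmetic above is simple enough to write down explicitly; a sketch using the numbers quoted in the talk (four starts per day, pooled over a week), with function names of my own:

```python
def lagged_ensemble_size(members_per_day: int, window_days: int) -> int:
    """Total ensemble size when the small daily bursts started over the
    last window_days days are pooled into one lagged ensemble."""
    return members_per_day * window_days

def mean_member_age(window_days: int) -> float:
    """Average 'staleness' in days of a member in the lagged ensemble,
    assuming one burst per day at ages 0, 1, ..., window_days - 1."""
    return sum(range(window_days)) / window_days

size = lagged_ensemble_size(4, 7)  # four forecasts per day over a week -> 28
age = mean_member_age(7)           # members are 3 days old on average
```

The mean member age is the price you pay for the larger pooled ensemble: a burst system's members are all fresh on the start day, but a lagged system's are, on average, a few days stale.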
Where would you expect these to come from? Any ideas? Why would the monthly, the extended EPS, be better, or why could it be better, than the seasonal system? We've seen these different systems here; there's one thing that stands out on this slide. Resolution, very good. So the system's running at a higher resolution. You would hope that would give you some benefit, because it's certainly costing you in terms of CPU time. So we expect the resolution to help us. Now, I'm going to bundle all of those effects together, because it's not just resolution; there are also things like the actual setup, maybe the way the ocean's initialized, all those kinds of things, which don't necessarily change from model cycle to model cycle. Here's a clue for the next thing. So I'm going to call those "setup", which includes resolution, but might also include other things, such as differences in the way the system is initialized, and so on. And those differences might not necessarily be beneficial: because you're running the monthly system very frequently, on the fly, some of these setup differences might be there to make the system more efficient and faster to run, and they might actually be detrimental. So some might be advantageous and some detrimental; I'm going to call all of those "setup". What's the second reason we might have a difference? I just mentioned it, any ideas? When you're looking at the S2S database, you have a kind of version date, yes? And that's because one thing I didn't mention on this slide is that the seasonal system, because it's so expensive, and because you have these fixed hindcasts as a result, usually stays fixed as a framework for many years, on the order of five. How long was the system operational before being replaced, six years? Six years, OK. So that's a typical turnover time.
So seasonal forecast systems are a little bit like the models used for the IPCC, for the climate change process: they tend to have a lifetime of around five or six years, a bit like the lifetime of a car model, whereas the monthly system or the EPS, just like the high-resolution system, is more on the ball. That gets all the updates, the new headlights on the car, the, should we say, facelifts of the model changes. It's updated more often. Why? Because you want it to take advantage of new satellites that are launched, and maybe there are new physics developments you want to put in there, and you don't want to wait five years to get those benefits. So these systems are normally updated at ECMWF between roughly once and three times a year. It really depends; there was once a cycle of perhaps a year and a half, when they were going for a resolution change and focusing on that. But in general these systems have a shelf life of around six months. It's not that they're completely rewritten from scratch: sometimes the updates are very minor, perhaps a few tweaks to parameter settings; sometimes they can be extremely major, a whole new convection scheme going in, for example, or big changes to the radiation or the way the data assimilation works. So I'm going to call these "cycle differences". And these are distinct from the setup differences, because when you change cycles it's not necessarily the case that you change the resolution; the different resolutions stay fixed over cycles. The cycle differences basically take into account new model physics, data assimilation changes, and so on. What else is there? Something else that's also on this slide. Why would you expect this system to beat this one? Let's say you want to know what the weather is going to be this weekend, and I gave you a forecast last Friday.
And you've got two choices now. I can give you the latest EPS forecast that was just run on Thursday. Today's Thursday, isn't it? I always lose track. So I can give you the forecast run today for this weekend's weather, or I can give you the latest up-to-date seasonal forecast, which started on the 1st of October, for this weekend's weather. Which one would you prefer? The latest one, the EPS. Because the seasonal system starts on the first of each month and then runs on through, but the extended-range EPS, the monthly system, I'm going to keep calling it the monthly, I know you don't call it that anymore, but for me it's still the monthly, I'm old and slow to change. So each Thursday and each Monday you've got these forecasts running forwards, so they're fresher. If you want to look at, say, week three to four, the forecast is much fresher and newer, especially towards the end of the month; at the beginning of the month there's not much difference between the two. Now, you might say, well, that's unfair, the seasonal forecast started earlier. But if you're a user, that's the information you have on the table, yes? So I'm going to call this the "lead time advantage". If I race him on a running track but give him a 10-minute head start, that's what I'm referring to here: he goes off, I have to wait for 10 minutes, and then I go. That's the difference in the forecast start time. So, last night this is as far as my talk got. We have the lead time advantage, the model physics, and the setup. Now, what I want to do is try and separate these. You can imagine you've got lots of things going on; this is a kind of sketch. Imagine this axis is each Thursday, and this is the skill gain, with the dashed line being no skill gain. Through the month, your lead time advantage is increasing, and then at the beginning of the next month you lose that advantage again.
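The lead time advantage can be made precise with a small sketch: for a day you care about, compare the freshest Monday/Thursday extended-range start with the seasonal start on the 1st of the month. The start-day conventions are from the talk; the function names are mine.

```python
from datetime import date, timedelta

def latest_start_on_or_before(day: date, start_weekdays: set) -> date:
    """Most recent start date falling on one of start_weekdays
    (Monday=0 ... Sunday=6), on or before the given day."""
    d = day
    while d.weekday() not in start_weekdays:
        d -= timedelta(days=1)
    return d

def lead_time_advantage(day: date) -> int:
    """Days by which the freshest Mon/Thu extended-range start beats the
    seasonal start on the 1st of the month, for a given target day.
    (Very early in the month this can even be negative, before the first
    Monday or Thursday has come around.)"""
    eps_start = latest_start_on_or_before(day, {0, 3})  # Monday, Thursday
    seasonal_start = day.replace(day=1)
    return (eps_start - seasonal_start).days
```

As in the sketch on the slide, the advantage grows through the month and resets when the next seasonal forecast starts.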
And then it goes up, and then you lose the advantage again. But over time, you expect the monthly system to drift higher in its skill gain, because the model is being updated all the time; it's getting better at the way it uses the data and so on. So you might expect that, as the cycles change, you gradually improve, plus there's an offset you might expect due to the setup difference. So how do we pull those apart? Well, we could try some controlled experiments. Normally, as scientists, when we want to investigate something, how does changing parameter A affect my outcome, we make a controlled experiment where we don't change A, then run exactly the same experiment where we do change A, and compare the two, over lots of cycles, with lots of experiments. We could do that, but over all the different cycles and all the different setups we're talking about running an awful lot of forecasts; it's just not really computationally feasible. Well, what about using the database that's already there? We've got on the order of 10^5 forecasts if we look at a six- or seven-year period. They all have different start dates and different cycles, but can we try to simply pull them apart to separate these effects? So this is what I've tried to do with this comparison. I've used a period from 2008 until 2014. It's prior to the S2S database, and it's a long period during which there wasn't a resolution change in the system, so the setup difference will be constant through that period. I'm using five members from the hindcasts and the 19 years, 18 plus 1, right through the period, even though the later period actually had a larger hindcast ensemble, because I don't want to bias the statistics towards one period over another by having different ensemble sizes. So I always just take the first five members of the perturbed ensemble.
Because it's a small ensemble, I'm simply looking at the correlation of the ensemble mean. And I'm also making an assumption, as a caveat, that my differing hindcast periods are not going to affect my statistics. So there's a caveat in this analysis, because of course the early forecasts have hindcasts running from 1990 to 2008, and the later ones from 1996 to 2014. Luckily, I'm not adding in or leaving out any major ENSO events in that, purely by luck. But it's a caveat, and I need to think about how I might test it, perhaps by reducing down to the common period, just as a small test. But you don't want to do that too much, because then you're cutting out years. And I'm making a like-for-like comparison. So if this is my EPS forecast on the top: because your exercises in week one were looking at week three to four, I said, OK, I'll do the same. So I set my script up to extract day 15 to 28, just like you did last week in the lab, and I take exactly those same days from the most recent seasonal forecast. The red arrow on the bottom is showing you the lead time advantage: the difference between the two start dates, in days of the month. I lump the 31st of the month in with the 30th, because there are just a few 31sts and it's a very small sample. So what does the skill look like when I look at this week three to four correlation for System 4? Now remember, this is not week three to four of the seasonal forecast itself; it's those same calendar weeks, taken from the seasonal system. So when I look at, for example, the 1st of October run, if there was a monthly forecast starting on the 3rd, I'm picking up that forecast's week three to four period; and then a week later there's maybe a monthly forecast starting on the 10th, and I'm picking up its week three to four period, and so on, all through the hindcast.
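In code, the like-for-like comparison amounts to averaging the same day-15-to-28 window from both systems and correlating the ensemble mean with the verifying data over the hindcast years. A minimal sketch with synthetic data; the shapes and names are my own, not from any ECMWF tool.

```python
import numpy as np

def window_mean(daily: np.ndarray, first_day: int = 15, last_day: int = 28) -> np.ndarray:
    """Average daily forecast fields over days first_day..last_day
    (1-based, inclusive), e.g. the week 3-4 window used in the lab."""
    return daily[..., first_day - 1:last_day].mean(axis=-1)

def ensemble_mean_correlation(fcst: np.ndarray, obs: np.ndarray) -> float:
    """Correlation over hindcast years of the ensemble-mean forecast with
    the verifying series. fcst has shape (years, members); obs (years,)."""
    return float(np.corrcoef(fcst.mean(axis=1), obs)[0, 1])

# Synthetic 19-year, 5-member hindcast that tracks a trend plus noise.
rng = np.random.default_rng(0)
obs = np.arange(19.0)
fcst = obs[:, None] + rng.normal(scale=1.0, size=(19, 5))
r = ensemble_mean_correlation(fcst, obs)
```

Computing the same quantity from both the monthly and the seasonal system, over matched calendar windows, and differencing gives the gain maps shown on the slides.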
And so you can see the kind of plot that I'm sure you've already seen this week, with a very high correlation over the ENSO region and in the tropics, and much lower over the land masses. OK. If we look at the seasonal, sorry, yes? Is this two meter temperature? Yeah. Did I forget to say that? That's quite important. Yes, this is two meter temperature. This is what happens when you write your slides right before the talk; I did want to add that there. I'm looking at two meter temperature because, there was a lot of downloading, I only downloaded precipitation and two meter temperature at the beginning, and precipitation already has a much lower skill in week three to four, so I decided to go with T2M, which is also a parameter that's very important for impacts. Apologies for that; it's T2M. Thank you. Whenever Franco speaks, I always get this kind of sudden panic attack that I've missed something completely obvious and that there's something really wrong with the slide. I don't know what it is. You're fine. And this is the week three to four correlation for the monthly system. If we take the difference, you can see that the biggest gain is essentially in the extra-tropics. And again, I've used a slightly nonlinear scale, so we're looking at an increase in correlation, with the oranges, of about 0.1 to 0.2. Now, I've only got a few more slides, and I'm going to try and break this up; the analysis, because it was done this morning, is a little bit crude. I'm going to break it up into these three contributions. And I thought it would be fun to have some audience participation, rather than me just giving you the answer before lunch. So we're going to have a survey. Remember: setup, cycle differences in the model physics, and lead time advantage. Does everybody understand these three differences?
And we're going to look at two areas: an average over the tropics, which I think I defined as 20S to 20N, and the northern hemisphere extra-tropics, which was 30N up to the pole, right around the zonal band. I want you to think about it for a second, rank these three effects, 1, 2, and 3, and we'll have a show of hands for the two regions. So have a little think. You've got your resolution increases in the EPS, OK? Then you have the cycle differences, the model physics, all those convection scheme and radiation changes. And then we have the lead time advantage. So, let's put it like this: setup, model physics, and lead time advantage. How many of you would rank the setup as top, with the model resolution giving you the biggest gain? I'm going to look. If you don't put a hand up at some point, you don't get your lunch, OK? Our meal vouchers are electronic now; I just press a button, and immediately it's not valid anymore. I'm telling you now, you'll be up there in the queue, and then, sorry, that's 500 euros, OK? So we have one for the setup and resolution. What about the model physics? Who thinks the model physics is the most important in the tropics? One, two, three, four, five, six, seven, eight, nine, 10. Interesting. What about the lead time gain? So you've got an average lead time gain of about 15 days. One, two, three. There are a lot of people who didn't put a hand up. OK. Now let's do the same for the extra-tropics. Franco didn't put a hand up. No, no, no, now I want votes from the lecturers as well, especially. What about you three as well? This is the model physics. Physics? Well, you said the tropics lead time is 10 to 15 days. No, it's 15.
You've got 30 days in a month, and if you assume that Thursday falls randomly through the month, which I think it roughly does, I haven't actually worked out the harmonics of seven against 365, you've got roughly the same sample size through the month. So assume your average lead time advantage is roughly 14 days. OK, so we have votes for lead time. Me too, yeah. Lead time. You just said that because Franco said that. Yeah, I just said that. And we had model physics at number two. OK, interesting. What about the extra-tropics, then? Setup, the resolution increases? Nobody at all. That's interesting, because of all of these, this is the one where you're spending the most, on always updating the resolution. What about model physics? Three. And lead time? Everybody else. So let's have a look, shall we. And as I said, I want to improve the way this analysis is done. First of all, I'm going to show an animation, I forgot I'd popped this in, just to show you how this builds up with lead time; it doesn't have the lead time in the title, unfortunately. You can see a little bit of jumping around in it, which is probably some of the sampling on the days of the month. But let's have a look. OK, so this is the lead time advantage plot first. On the x-axis is the extra lead time I've gained by running my forecast more frequently; the y-axis is the skill gain. There's a little bit of scatter, as you'd expect, but you can see a strong trend as a function of lead time. This is for the tropics. The first thing we notice is that when we don't have any lead time advantage, the skill gain is near zero. We also see that it's still not saturated by the end of month one, and that's good, because it shows we've still got skill in the system.
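A crude way to put a number on a curve like this, in line with the read-off-the-graph estimates later in the talk, is a straight-line fit of skill gain against lead time advantage. The numbers below are synthetic, purely for illustration:

```python
import numpy as np

def fit_lead_time_gain(lead_days: np.ndarray, corr_gain: np.ndarray):
    """Regress correlation gain on lead time advantage. The slope times
    the average lead (~14 days) estimates the lead-time contribution;
    the intercept is the gain left at zero extra lead (setup + cycle)."""
    slope, intercept = np.polyfit(lead_days, corr_gain, 1)
    return slope, intercept

lead = np.array([0.0, 7.0, 14.0, 21.0, 28.0])
gain = 0.001 + 0.002 * lead            # idealized, roughly linear tropics-like curve
slope, intercept = fit_lead_time_gain(lead, gain)
avg_lead_contribution = slope * 14.0   # gain at the mean lead time advantage
```

For the northern hemisphere, where the curve saturates, a straight line is obviously too crude; there you would want to evaluate the fitted or smoothed curve at the mean lead instead.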
If both systems were basically at climatology towards the end of the month, then you would expect this to saturate, but of course you've still got skill in week three to four, so it's not saturating. And the range of this gain in the tropics is about 0.06. Now I've got two other regions here: the northern hemisphere, and Africa on the right. In the northern hemisphere you see the same thing, but with a stronger nonlinearity. I was thinking about that this morning; I was presuming it's tropics to extra-tropics communication. What do you think? You think it's just the predictability dropping off? Yeah, actually, probably, yes: the predictability dropping off, like you say, saturating. Sorry, this is a little bit small: the northern hemisphere range goes from 0 to 0.1, so we've actually got a bigger gain there compared to the tropics. Like I said, it was a bit of a mad dash; I'd like to put them all on the same plot. So the tropics was basically linear like this; the northern hemisphere gains more but then saturates; and on the right-hand side we have Africa, which again is quite linear, with a range similar to the tropics of about 0.06. Interestingly, over Africa you can see quite a negative offset for the very first day. I don't know, Franco, have you got any ideas why that might be over Africa? Could it be something to do with the way the soil moisture is initialized in the two systems over land? Because you don't see that elsewhere. Well, yeah, you are comparing System 4 and the monthly system, and the land surface on the model grid differs. Right. Yes. So it could be something to dig deeper into. I've got a feeling it's something to do with the land initialization, and the most likely candidate that came to my mind this morning was the soil moisture initialization. But anyway, I thought that was quite interesting.
But it's only for the very first day, the zero lead, and then the lead time advantage basically regains that. Now, this is the model physics advantage. Excuse the scales, because I tried to plot these quickly, all on the same scale, but I think it's fairly clear; this is 0.04, 0.08. I just say cycle number here, but I label with the cycle above each one. So these are the model physics cycles. For this six-year period we basically had 10 cycles, and the blue box here marks the System 4 cycle. So at this particular period, around 2010, the two systems were actually using the same cycle, whereas prior to this the monthly system was on an older cycle, and after it, on a newer cycle. And what you see in the tropics is that it's just bumping around, without any kind of clear gain, and then you see a jump up with cycle 37R3. That jump is about 0.03. There were basically two changes: there were some tweaks to the cloud microphysics, but I think this is probably associated with the way the convection scheme defined its entrainment in the updraft mixing; there's a paper by Peter Bechtold and colleagues. Say again? No, exactly. So if you remember the setup at the beginning of the experiment, I chose this period precisely because there were no setup changes in terms of the resolution. Now, one point you might be thinking about is the initialization: there might have been tweaks to some of that, but I'm counting those as physics changes. What if the data assimilation changes, for the soil moisture? Yeah, basically, there is a little bit of murkiness in how you define these. But in terms of the key one, resolution, it's fixed through that period. Yes, the 4D-Var changes are all in the cycle differences.
So 4D-Var is changing very frequently, and the data assimilation can't really be separated from the model physics, because you've got the outer loops, which are nonlinear and so on, so the assimilation system is dependent on the model physics. So any changes to the initialization are essentially in there? Yes, and in fact this part of the explanation is a little bit fuzzy: those changes really should be counted in here. It's a good point, thank you. But in terms of the resolution, there are no changes. Just after this period there was a resolution update, so you would expect a step there, but I thought that confused the picture somewhat. So you can see that there's a jump of about 0.03, but there's no systematic gain over time; you just get certain cycles that jump up. Perhaps not quite what you would expect. In the Northern Hemisphere and Africa, I was actually quite surprised: there's not really any systematic change across the cycles. Are you surprised? No? This is 2-metre temperature, I know; that's why I specified it here. Because of course, the metrics used to assess the short range and the medium range are often upper-air quantities, such as the geopotential height anomaly correlation or winds at certain levels, so this surface field is less directly targeted. But I was a little bit surprised there wasn't anything, at least at the short leads. No, not at all? I would have thought there would be. I mean, there has been all this talk about tropical-extratropical teleconnections; I thought that was the whole idea of this workshop. So if Lara is saying that doesn't work at all, then to be honest, you can all go home now and forget about tomorrow's talks. I was naively thinking that all aspects of the physics would slowly improve over time with these changes. Well, there you have it.
Then you can close down the physical aspects section as well. No, joking, joking. I must admit, I was a little bit surprised: I wasn't expecting a strong influence, but I thought there might be something in there, and I didn't see anything. Cycle 35R2 jumps out; I remember that was an absolute turkey at the time, if you remember. So it does kind of correlate with what you'd expect. OK, so how do the contributions compare? I want to do this more robustly, but I'm finishing off now because I'm running over time. This was that lead time advantage plot, and I'm doing something really crude here to estimate these things from it, so I'm not worried about the absolute values. You can see that in the tropics, the lead time advantage gain at the average lead time, assuming the start days fall equally through the month, is on the order of 0.03. For the model cycles, we had about a 0.03 model cycle advantage as well, so it's actually fairly comparable to the lead time advantage. I was surprised, because I thought the lead time would still dominate; for me, that was a little bit of a surprise. And we can actually pull out what we expect the setup changes to contribute. If we look at this particular cycle, the total is about 0.055; like I said, I'm only reading it from the graph. But this has an average lead time advantage of about two weeks, because the days are spread over the month, so we have to remove the lead time advantage from it to get the setup advantage, which is on the order of 0.025. So it seems that in the tropics, averaged across this analysis of all the 30,000 forecasts, the three aspects contribute fairly equally to the skill gain. That's not an outcome I was necessarily expecting. So what about the extratropics? Well, we've seen already that the model cycle really isn't doing very much at all there.
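The back-of-envelope decomposition described above can be written down directly; a minimal sketch using the rounded numbers read off the graphs (all values are approximate, as stated in the talk):

```python
# Back-of-envelope decomposition for the tropics, values read off the plots.
lead_advantage = 0.03    # gain at the average lead time, start days spread over the month
cycle_advantage = 0.03   # jump associated with the 37R3 physics cycle
total_advantage = 0.055  # overall gain read from the graph for this cycle

# The setup advantage is whatever remains once the average lead-time
# advantage is removed from the total.
setup_advantage = total_advantage - lead_advantage  # ~0.025
```

With all three contributions near 0.03, the "fairly equal" conclusion for the tropics follows immediately.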
And we can see that the lead advantage is slightly larger than the setup advantage: about 0.13 here versus 0.07 there. So again they're a similar order of magnitude, but the Northern Hemisphere analysis seems to show that the lead time slightly wins out. So again, I was slightly surprised at those two being fairly similar. That's what I've done for this small analysis. Like I said, the comparisons at the end are a little bit crude, but I've essentially sub-sampled and combined 30,000 forecasts to try to pull apart these three contributions. In the tropics, over this six-year period I've chosen, where the resolution is not changing, all three are fairly equal; whereas in the Northern Hemisphere extratropics, lead and setup dominate, with lead slightly larger. The next step is a slightly more robust attribution: fitting a spline through that lead time curve and using it to subtract the lead gain for each individual forecast, and then perhaps extending the analysis to winds as well. Anyway, that's what I've done. I'll finish there. Thank you.
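As a sketch of that planned correction: fit a smooth curve to skill gain versus lead-time advantage and subtract it from each forecast's gain, leaving the residual cycle-plus-setup contribution. A cubic polynomial stands in here for the spline mentioned in the talk, and the function name and synthetic saturating curve are my own assumptions.

```python
import numpy as np

def remove_lead_gain(lead_gain_days, skill_diff, deg=3):
    """Fit a smooth curve to skill gain vs lead-time advantage and
    subtract it, leaving the residual (cycle + setup) contribution.
    A cubic polynomial is used as a simple stand-in for a spline."""
    lead_gain_days = np.asarray(lead_gain_days, dtype=float)
    skill_diff = np.asarray(skill_diff, dtype=float)
    coeffs = np.polyfit(lead_gain_days, skill_diff, deg)
    fitted = np.polyval(coeffs, lead_gain_days)
    return skill_diff - fitted, coeffs

# Synthetic example: a saturating lead-gain curve plus a constant offset,
# loosely shaped like the Northern Hemisphere curve in the talk.
lead = np.linspace(0, 30, 200)
diff = 0.06 * (1 - np.exp(-lead / 10)) + 0.02
residual, _ = remove_lead_gain(lead, diff)
```

Because the fit includes a constant term, the residuals average to zero by construction; per-forecast residuals can then be re-grouped by cycle or setup period for the more robust attribution.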