So, Hannah and also Rachel Lowe are co-authoring a book chapter on S2S health applications at the moment, so I wanted to show a little bit of their material as well on heat waves, but forgive me if I'm not that up to speed on that particular subject. Okay, so I thought I would start by just focusing our minds on the definition of early warning systems. This is a slide from the World Health Organisation in 2008 where they were really outlining that these are designed to be systems that basically give prior warning of events. Now, this seems really obvious to us because we're all working in meteorology and atmosphere and so on; we're used to forecasting. But on the health side of things, for most, should we say, health implications, most health outcomes, very much the status of how people work is in terms of just surveillance and reaction. So for many diseases and many health outcomes, the whole mindset and paradigm of actually predicting events in advance and moving your resources into place in advance is actually quite new, and you'll find when you speak to many health workers and so on that it's a very new concept for them. So what I'm going to talk about is, again, why we might want to use S2S, sub-seasonal to seasonal, should we say, systems, which had a very nice introduction from Charles. And I want to just briefly go through two case studies on heat waves and malaria. So I'm going to have to keep an eye on the time, because I have a few slides on both of those. OK. So Charles very nicely introduced this already; I left this in for completeness about the different time scales and, should we say, lead times of the sub-seasonal.
So I can skip safely over that; you know now what S2S systems actually address. But I thought it might be helpful just to leave this graphical overview of where the S2S system fits into the forecasting pie, using ECMWF as an example again; it's the system I'm most familiar with. So, of course, they originally had their high-resolution deterministic runs. Then these were supplemented at one point by the ensemble runs, which went out to 15 days and had many members so you could assess the uncertainty of the forecast. And then, as Franco introduced earlier on in the school, these were supplemented after the late 90s by basically a seasonal system that ran each month out to seven months and four times a year was extended to 13 months. And then in the late 2000s, Frédéric Vitart set up basically a monthly system that sat in the middle of these two; it filled that gap between the two. It ran at that time out to 32 days, and it was running once per week, every Thursday, so you could plan your weekend barbecue and so on. It fitted nicely at that time. So that was an intermediate resolution between the EPS and high resolution and the seasonal. In the meantime, that's essentially been integrated into the ensemble prediction system. So, like the seasonal system is extended out to 13 months, we saw from Charles's table that the S2S system is actually an extension of the EPS out to 48 days, running basically twice a week now, every Monday and Thursday. Just on Charles's table, there was a mention of the hindcasts; to emphasise, one of the key differences between these is that the hindcast suite, running for the same date over the previous 20 years in order to be able to calibrate the model, is done, as we say, on the fly. So every time you make a forecast, there's a set of hindcasts, whereas the seasonal system at the moment is using a fixed hindcast period.
I don't remember from the System 5 talk you gave whether that's still going to be the case for System 5. It's still the fixed period. So, just to emphasise that: if these are your operational forecasts now, in red, you have a set of hindcasts that basically move with the system. That's needed if the model version is being updated frequently. Whereas the System 4 and System 5 models tend to be fixed at a certain model version for a longer period of time and can then use a fixed hindcast suite, which of course is cheaper because you only run it once. That's the main reason why these are fixed. Not all systems do that; I'm going to skip over this very quickly, but for example the Met Office system has lagged starts, and they actually run the hindcast suite for both the seasonal and the S2S system. Just to emphasise the difference. So why might you use S2S? Well, to give you an example, this is just for the operational forecasts in 2012, looking at the whole hindcast suite, and it's an extremely simple statistic. I'm just taking the correlation of the week 1 to 4 temperatures of the ensemble mean. So there's no probabilistic verification here; it's just looking, for the whole of 2012, at what the mean skill over that year is in the S2S system. On the right-hand side is the gain in that correlation compared to the same days in the seasonal forecast, and this is for the first Thursday of each month. So why would you get a gain in skill compared to the seasonal system? Well, there are three key reasons. One is the lead time advantage: even though I'm picking the first Thursday, it's not going to be the first of the month, and the seasonal forecast starts on the first of the month. So you have a lead time advantage. That's one of the reasons, and the key reason why you'd want to use S2S in health applications: you simply have more frequent updates through the month.
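The skill statistic described here, correlating the ensemble-mean week 1 to 4 temperature with the verifying value over all start dates, can be sketched roughly like this. The arrays below are synthetic stand-ins, not real forecast data:

```python
import numpy as np

def ensemble_mean_correlation(forecasts, verification):
    """Correlation of the ensemble-mean forecast with its verification.

    forecasts:    (n_starts, n_members) week 1-4 mean temperatures
    verification: (n_starts,) observed or analysed values
    """
    ens_mean = forecasts.mean(axis=1)             # collapse the ensemble
    return np.corrcoef(ens_mean, verification)[0, 1]

# Synthetic illustration: 52 weekly starts, 11 members, noisy forecasts
rng = np.random.default_rng(0)
truth = rng.normal(size=52)
fcst = truth[:, None] + rng.normal(scale=0.5, size=(52, 11))
skill = ensemble_mean_correlation(fcst, truth)
```

Computing this same number for S2S and seasonal starts valid on the same dates would give the kind of correlation gain shown on the right-hand side of the slide.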
So you can slot those in at the start of your, shall we say, climate information. But also recall that the ECMWF sub-seasonal system is being updated with each new operational version. So once, twice, three times a year there's a new forecast system with new physics upgrades and so on that are incorporated into the monthly system, whereas you have to wait a little longer to actually get those benefits in the seasonal system. And also the framework: for example, the S2S, as I just pointed out, is a higher-resolution system, so you might have some advantages there. So at the moment I'm trying to split those three effects up. I wanted to show this because variations of this plot are often shown when people talk about S2S; this is from my colleagues at IRI, and I do think actually it's a little bit misleading. So you often see this kind of plot where you say, well, this is the weather forecast range, this is lead time, here the seasonal, and this is the S2S, and then this is some kind of schematic of skill. And then they point out that you have, for example, the Madden-Julian Oscillation giving you predictability on these kinds of time scales and so on, and you see this kind of jump. And I feel this is a little bit misleading, and that's why I wanted to point it out: you would only expect to have an advantage over, for example, the short-range high-resolution system if there were an aspect of the physics represented in this system that was particularly tuned or added to represent these effects. To give you an example, you might expect the seasonal forecast system to have an advantage over the deterministic, if that were to run out to two months, because it's coupled to the ocean, while the very short-range deterministic, if I'm not wrong and it hasn't changed in the last couple of years, still basically has fixed SSTs.
So you are modelling a process that's evolving in time over these time scales that's not in this system. Over here, for example, yes, the Madden-Julian Oscillation might give you predictability in the tropics at a two-month time scale, but it's not that this system has a convection scheme that's tuned to improve the Madden-Julian representation and this one doesn't; they use the same convection scheme setup. So this system here should represent the Madden-Julian Oscillation equally well, if it's not actually better because of the higher resolution. So I just wanted to emphasise that you have to take these schematics with a little bit of a pinch of salt; there are several variations of these. Anyway, that's my introduction. Now I want to talk about health outcomes; for applications in health, this list could go on forever. Climate of course impacts health through nutrition, heat waves, weather extremes, meningococcal meningitis, where there's a group working on the MERIT project in Dijon, cholera, vector-borne diseases; there's a whole long list here. There's not even room to fit them on the slide. I'm just going to introduce two examples: heat waves and malaria. To point out that although we're talking about S2S, so two weeks to basically eight weeks lead time in terms of the meteorology, when it comes to the health application that lead time will vary tremendously depending on the kind of application. So for example, for something like heat waves or weather extremes, the impact is almost immediate. If we talk about vector-borne disease, we'll see that there's a kind of spin-up time in the system: a two-month weather forecast might actually give you information out to four months due to the inherent lag in the system. So the health, should we say, advance warning really would depend, and how that fits into the decision process really would depend on the disease.
So there's a study, for example, on Rift Valley fever that shows that even an accurate three-month forecast wouldn't be that much use, because the main intervention in terms of vaccination needs longer advance warning: there are not the stocks, the vaccine has a short shelf life, access to the field in the rainy season is difficult, and so on. So how is climate actually used in, should we say, health applications? Well, so far in terms of forecasting, as I mentioned, not that often. It's often used, for example, in mapping. So if we take malaria again as an example, this map on the right is indicating countries, in green or red, that incorporate malaria risk in terms of a mean risk map within their country. So they know, for example, where malaria incidence might be high or low, using climate to basically help them in that assessment when they may not necessarily have good local district data. That was from a paper by [inaudible]. In terms of actually forecasting, while there are several approaches, and I'm going to go through a couple of examples as I said, it's very rare to see these operationalised. So most of the examples you'll find are essentially pilot or demonstration projects, as will be the two examples I show today. Climate can be either used directly, as in the case of heat waves, or it might be used to drive simple statistical models or simple derived indices of health outcomes, or perhaps to drive more complex dynamical models. I want to emphasise at the end of the talk that mapping this model outcome onto a decision entry point is extremely challenging. So the first case study will be on heat waves, which at the moment, as you see, are very, very topical, but of course they've always been around. There's a lot of talk now about actually trying to attribute heat wave probabilities to, for example, global warming.
There's a lot of good work going on in Exeter and so on, actually trying to attribute global warming percentages to heat waves, but we're concentrating on S2S timescales. So first of all, heat stress is more than just temperature. There are a lot of factors at play: clothing affects your exposure, there's the humidity and so on. And so people have actually built complex models to try and represent this. So there is a model that was derived and is sometimes used in Germany that tries to account for air temperature, radiation, humidity and wind. They actually have a clothing model, an exposure model, and they try to work out, for example, for different occupations, if you're a traffic policeman standing outside in the street compared to somebody working in a factory and so on, what the differences may be in the actual heat stress in different occupations and exposure locations. But of course these kinds of models need a lot of small-scale information; for a general index this kind of complexity is extremely difficult. So usually heat wave prediction relies on very simple indices. Now, there's a growing array of these indices; you'll find some of them are just based on temperatures. I don't expect you to read the table, this is just one example. There's a whole host of these types of papers in the literature that inter-compare four or ten or fifty of them. There was even one I saw, I couldn't find it again yesterday, I wish I'd noted it down. There are variations of heat indices, and often they'll inter-compare them, trying to evaluate them, for example, against mortality data for a particular location. Usually they come to the conclusion that a modification of one of the indices is actually the best, and so they add a new one to the pile. So I'm going to go through some of the aspects of these, but they're normally based on some combination of humidity and temperature that's integrated in time. I'll explain that now.
So a lot of them are basically some kind of variation on a feels-like temperature. They will take a temperature and a relative humidity, maybe incorporate wind, and try to derive basically a feels-like exposure index. Now, the actual details change in terms of the time over which this index is integrated before a warning is actually issued. The other problem is, even if you derive one of these, it is very location-specific. So if you actually look at the statistical association between mortality and temperature in somewhere like London, you'll find that the mortality starts to basically curve upwards once you go above essentially this line here, I think this is the upper 10th percentile or the 5th percentile, around 22 degrees, which incidentally happens also to be the maximum temperature that my mother can actually endure before she starts continually complaining. So I think this graph is exactly correct. On the other hand, if you go to somewhere like Taiwan or Trieste, this threshold where mortality starts to kick upwards is in the low 30s. And again, that would probably be similar somewhere in this climate here. So there's a lot of adaptation and also filtering. And again, it shows the same thing in Bangladesh. In addition to this, there's also short-term acclimatisation. So for example, in Germany's heat warning system, they actually use a running mean, looking at the anomaly compared to the last 30 days. And this is to account for short-term acclimatisation: in a sudden heat wave right at the beginning of the year, our bodies are not actually used to it, we haven't undergone acclimatisation, and it's actually more dangerous than towards the end of the season. So this is showing how the thresholds for the extreme, strong, moderate and slight heat wave warnings actually evolve over the year, going from January through to the following January. And that's to account for local acclimatisation.
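The running-mean acclimatisation idea described here, comparing today's temperature against a trailing 30-day mean, can be sketched like this. The 5 degree warning threshold below is a hypothetical number for illustration, not the German system's actual setting:

```python
import numpy as np

def acclimatisation_anomaly(temps, window=30):
    """Anomaly of each day's temperature relative to the trailing
    `window`-day mean: a crude stand-in for short-term acclimatisation.
    The first `window` days have no baseline and are left as NaN."""
    temps = np.asarray(temps, dtype=float)
    anoms = np.full(temps.shape, np.nan)
    for i in range(window, len(temps)):
        anoms[i] = temps[i] - temps[i - window:i].mean()
    return anoms

def heat_warning_days(temps, threshold=5.0, window=30):
    """Indices of days whose anomaly exceeds a (hypothetical) threshold."""
    anoms = acclimatisation_anomaly(temps, window)
    return np.where(anoms > threshold)[0]
```

An early-season spike stands out strongly against its cool trailing baseline, which is exactly the effect the acclimatisation adjustment is after.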
So you can compare, for example, to climatology in that particular month, or you can use this running mean. The other aspect that's often tweaked in all of these different indices is the accumulation. So we're a little bit like plants in this respect. Groundnuts, for example, can withstand 35 degrees for a day or two, but if you have it for five days, then they're in trouble, especially if it's the flowering season. So we're a little bit like groundnuts in some respects: if you have an integration of a high temperature over time, then there's a much larger impact. So you can see here, this is actually mortality data compared to T-max, and you can see that this event here is not actually hotter than earlier events, but the duration is much longer, and that then means that, sadly, there's a much larger impact. So a lot of the tweaking of these indices comes down also to the temporal integration of the exceedance of a threshold. OK. So again, for completeness, I left this table in there. It's showing a number of countries, going all the way from Belgium right through to the USA at the bottom here, and it's showing that they mostly use simple indices, most of them based just on temperature. Some use other variables, which are highlighted here, such as humidity and wind speed and so on, to incorporate those into the feels-like temperature. And most places actually have some kind of aspect of the duration. But you can see that there's a different index in nearly every country in terms of this exceedance. OK. So an example heat wave system from France is using the three-day running average of minimum and maximum temperatures for each region. There's a seven-day warning system that's been set up by Georgia Tech in India. OK. But nearly all of these indices are at short-range, shall we say, time scales; they're basically using the deterministic forecasts, and this is one example where it actually gets out to seven days.
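Putting the two ingredients together, a running mean and a minimum duration, a toy detector in the spirit of the French three-day running average index might look like this. The threshold and duration values are illustrative only:

```python
import numpy as np

def heatwave_events(tmax, threshold, min_days=3):
    """Return (start, length) of runs where the 3-day running mean of tmax
    stays above `threshold` for at least `min_days` consecutive days.
    Indices refer to the start day of each 3-day running-mean window."""
    tmax = np.asarray(tmax, dtype=float)
    run3 = np.convolve(tmax, np.ones(3) / 3, mode="valid")  # 3-day running mean
    hot = run3 > threshold
    events, i = [], 0
    while i < len(hot):
        if hot[i]:
            j = i
            while j < len(hot) and hot[j]:
                j += 1                      # extend the hot run
            if j - i >= min_days:
                events.append((i, j - i))   # long enough to count
            i = j
        else:
            i += 1
    return events
```

The duration filter is what separates an isolated hot day, like a day or two of 35 degrees for the groundnuts, from a sustained event with a much larger impact.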
Nearly all of them are just for the next two, three, four days, using short-range forecasts. So there's been a recent example, the only example so far that's actually specifically using S2S, which was looking at the prediction of the 2003 extreme event in Europe. That was, I think, the only time, when I was living in the UK, that I actually had to sleep in my garden; it was that hot, incredibly. These days I'm kind of used to that temperature, but I think in Reading we got up to about 34, 34.5 at the time. And so Rachel and her collaborators were actually looking at using the S2S forecasts from ECMWF to see if they could extend that, shall we say, warning of the event in August out to longer lead times. So they found that, basically using the single daily S2S values, they had an indication of the event out to about day 10, but they concluded that when you get to day 15 and day 18, that particular event, just a single event, was not really predictable. That's the only example so far in the literature of actually using these S2S systems, and it's just a single event, using the S2S system just on a daily time step. So what we've got to start thinking about is how to adapt the way we use these, shall we say, systems, integrated over longer time steps in a kind of, shall we say, dovetailed manner, where we're not just simply trying to extend the use of the forecast as we would a deterministic system, and how that fits into the decision process. Because if you actually look, this was an example from the Red Cross of their own heat health decision-making process across timescales; the IRI are very fond of this 'Ready, Set, Go' way of considering the use of short-range, S2S and seasonal forecasts. And if you look at the actual decisions that these forecasts would tie into, then you'll find that the short range is in place in many of these countries.
So this part of the Ready, Set, Go is already operationalised in terms of using short-range forecasts to predict these indices and set warnings, and prepare for, for example, increased power demand due to increased cooling demand and so on. But when you look at the decisions that are made over longer timescales, they don't fit clearly into the way that we're using this daily output from the S2S systems. I wanted to quickly run through another case study which I'm a little bit more familiar with, on the malaria side of things. Early warning in malaria actually goes back a surprisingly long way. After the epidemic in 1908, there was a lot of interest in India in actually trying to set up prediction systems. And so there was a simple statistical model published by Gill, it's not Adrian Gill, in the 1920s, which incorporated climate into a simple index. And it was actually used operationally: India had operational predictions of malaria for the season ahead right through from the 1920s to the 1940s. And then, basically, the whole interest in early warning and health in malaria just waned through the 50s and 60s, because suddenly the world switched to elimination. That was all based on the fact that we had DDT. So in the 20s to 40s, control was much more in terms of water management, drainage schemes, draining swamps and so on. So it was really control of breeding sites, screening of huts and houses, of course, where that could be afforded, based on knowledge of the vector system. Malaria basically rebounded when the elimination efforts petered out in the 1960s. And then research interest accelerated, particularly after the ENSO-related outbreaks in the highlands in 98, 99. So the last 15, 20 years have seen a lot of interest in early warning systems in malaria. How does climate actually impact malaria? I just included this cartoon to quickly show you. The basic life cycle of the vector: the female emerges from the pupa.
She needs a blood meal to get basically the proteins to develop her eggs. She lays the eggs and they develop, new adults emerge and so on. So basically the development time of the larvae is dependent on water temperature. The egg development in the female is also dependent on temperature. If it gets too hot, the mortality rate of both the adult and the larval stage increases. When she takes the blood meal, if she's biting someone who's infected with the parasite, she can acquire that parasite. It then undergoes a life cycle in the vector before she can pass it on to somebody else. And that life cycle, that incubation period, the sporogonic cycle as it's known, is also highly temperature dependent. So the warmer it is, the faster the cycle spins, but if it gets too warm, the mortality of the vector increases. So you get a sweet spot, so to speak, in transmission. Just to give an example, this is that sporogonic cycle length as a function of temperature, so you can see it's dropping as the system gets faster, but the mortality rate also increases, so the survivability drops off as a function of temperature. Putting those two effects together, you get this kind of behaviour where the probability of a vector acquiring the parasite and passing it on peaks in the low 30s. The exact nature of that curve really depends on the details, particularly of the larval cycle; this is just two effects. So you can see that, as well as rainfall, which provides the breeding sites for the vector, if you're somewhere here on the curve, an increase in temperature can spin up this cycle and increase the intensity of the transmission, or actually reduce it. So temperature and precipitation are going to be the two key variables that we want to predict. This is just to show: this is the rainfall and this is malaria transmission, and you can see that there's a lag of one to two months, and that lag length will also depend on temperature as well.
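The two competing temperature effects described above can be sketched with two standard empirical formulae: a Detinova-style degree-day model for the sporogonic cycle length, and a Martens-style temperature-dependent daily survival. The constants are the commonly quoted ones for P. falciparum; treat this as an illustration of the shape of the curve, not the speaker's actual model:

```python
import numpy as np

DD, T_MIN = 111.0, 16.0   # degree-day constants often quoted for P. falciparum

def sporogonic_days(t):
    """Degree-day sporogonic cycle length: DD degree-days above T_MIN."""
    return np.inf if t <= T_MIN else DD / (t - T_MIN)

def daily_survival(t):
    """Martens-style daily vector survival; drops off at high temperature."""
    ad = -4.4 + 1.31 * t - 0.03 * t * t
    return 0.0 if ad <= 0 else float(np.exp(-1.0 / ad))

def p_infectious(t):
    """Probability a vector survives the full sporogonic cycle at temperature t."""
    n = sporogonic_days(t)
    return 0.0 if np.isinf(n) else daily_survival(t) ** n

temps = np.arange(18.0, 40.0, 0.5)
peak_temp = temps[np.argmax([p_infectious(t) for t in temps])]  # lands in the low 30s
```

The shorter cycle at warm temperatures pushes the probability up; the falling survival pulls it down, and their product gives the sweet spot mentioned in the talk.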
So if you just monitor rainfall and the intensity of rainfall, if you assume that rainfall relates to breeding sites, and it's not a linear relationship, you might have some chance of predicting the intensity of the season just by monitoring rainfall. Adding S2S systems in there, you hope, having the predictability and the skill out to week three, week four, you might add another month to the lead time available, which was shown in this cartoon by DaSilva and colleagues back in 2004. It's saying: if you have case surveillance, you only know about an outbreak in a highland area after it's occurred. If you can actually monitor rainfall, you have some one to two months possible lead time, and then seasonal climate forecasts in that context could actually extend that lead time. The relationship is very non-linear due to flushing effects, but I'm going to basically skip over those. Again, I talked about decision entry points, how we actually fight malaria. There's a whole host of, should we say, adaptation and short-timescale interventions that one can apply, so for example land management, health care infrastructure and so on. That was the emphasis in the early part of the 20th century. Everything switched to basically DDT in the 50s and 60s. At the moment, the emphasis is on bed net distribution and indoor residual spraying, which is less invasive than spraying of breeding sites and more effective. So these are decisions, particularly the indoor residual spraying, where you might want to adapt the timing of spraying, because it normally only lasts about a season, somewhere like the north of Ghana where the onset can change quite considerably and so on. I'm going to skip over this.
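The one-to-two-month rainfall lead mentioned above can be estimated directly from the two monthly series by scanning lagged correlations; a minimal sketch, with synthetic series standing in for real surveillance data:

```python
import numpy as np

def best_lag(rainfall, cases, max_lag=4):
    """Lag (in months) at which rainfall correlates best with later cases."""
    corrs = {}
    for lag in range(max_lag + 1):
        r = rainfall[:-lag] if lag else rainfall   # rainfall leads by `lag`
        corrs[lag] = np.corrcoef(r, cases[lag:])[0, 1]
    return max(corrs, key=corrs.get), corrs

# Synthetic check: cases follow rainfall with a two-month delay plus noise
rng = np.random.default_rng(1)
rain = rng.gamma(2.0, 50.0, size=60)
cases = np.concatenate([rain[:2], rain[:-2]]) + rng.normal(0.0, 5.0, size=60)
```

With real data the same scan gives the empirical lag at which rainfall information is most useful, which is the extra lead time the cartoon is pointing at.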
So, early examples of early warning systems: well, one of the canonical studies was actually in 2006 by Madeleine Thomson and colleagues, also from ECMWF, and many others as well, using DEMETER forecasts to drive a simple statistical model for whole-country malaria in Botswana. It's a very famous example. If you actually look at the paper, though, it only uses validation against malaria cases for two separate years. So it's massively cited, but the main message in that paper is that the rainfall was predictable in Botswana up to four months ahead; the actual validation against the case data was extremely limited. It's a highly cited paper, but it doesn't have a multi-year validation against cases. The Liverpool Malaria Model has also been used to look at the potential predictability in Africa and India, and if anyone is interested I can show some of those results offline or over coffee. Rainfall has also been used to drive simple compartmental models which have been calibrated for certain areas; for example, there's a good example in India by Laneri et al. So there are very few examples of dynamical model systems that are actually evaluated against case data. So, just to show you one very quick example of what we tried to do with a dynamical model, evaluating it against real case data over basically Uganda. What we did was we set up the dynamical model, calibrated using the S2S system for the first month, because at this time the lead times were actually shorter, before it was extended to 48 days, and then we supplemented that with System 4 for months 2 to 4. And we were initialising the malaria forecast from, should we say, an analysis of the malaria conditions, actually using ERA-Interim, the reanalysis, to drive the vector model, because you don't have real-time observations of entomological conditions and so on. Let me skip over that to just show this.
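A Botswana-style statistical model of the kind just described is, at its simplest, a regression of seasonal incidence anomalies on seasonal rainfall, with forecast rainfall then fed through the fitted line. This is a hypothetical sketch; the function names and numbers are mine, not from the paper:

```python
import numpy as np

def fit_rainfall_model(seasonal_rain, log_case_anomaly):
    """Least-squares fit: log incidence anomaly ~ seasonal rainfall total."""
    slope, intercept = np.polyfit(seasonal_rain, log_case_anomaly, 1)
    return slope, intercept

def predict_anomaly(seasonal_rain, slope, intercept):
    """Apply the fitted line to observed or forecast rainfall totals."""
    return slope * np.asarray(seasonal_rain) + intercept
```

The appeal is transparency: the whole model is two numbers. The cost, as noted above, is that with only a handful of overlapping years of case data, validating even this is hard.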
So this is basically the lead 1-4 skill, and this pie chart is just showing red where temperature is skillful, green malaria and blue rainfall, and then you have the intersections. So you can see at one month lead time we have quite a lot of areas. I'm only showing areas where malaria is epidemic, so it's not every-year high transmission; it's the areas where there's a lot of variability from year to year. You'll find that most of the areas are either white at lead 1, which means temperature and rainfall both contributing to the malaria predictability, or they are yellow, which means that the rainfall skill has actually dropped off, as we saw in the previous talk, but the temperature variability, remember that curve, is still giving you some skill in the malaria. As we move to the right, the plots turn green, which means the malaria predictions are still skillful, this is against the reanalysis of the malaria, until they turn black. So the use of the initialisation and the S2S basically pushes us forward: rather than having just a one-month lead time, we can get out to months 2-3. And my VPN has dropped off. So then, on to quickly showing, in the last couple of minutes: we've made a considerable effort to use both confirmed sentinel site data at six sites in Uganda as well as district data to try and evaluate the system. This is work still in progress that we're writing up now, and you'll see straight away one of the problems with the sentinel site data: the data sets are very short, so we only have a span of about five or six years, depending on the site. So what we're looking at here is, each point is the forecast that would have been available one, two, three or four months ahead.
So we found that the system, for example this is Jinja, this is Kanungu, I'm not going to go through all of the sites, but the system is able to basically predict the year-to-year, season-to-season anomalies in the actual transmission. OK, so we're taking out the annual cycle here, but sometimes there's, should we say, a shift, so we're looking at the skill now. The same for the district data: you'll find that often, in some districts, it does a very good job, and then you can go to the district right next door, which is pretty much the same altitude, same climate, same rainfall, and you can get things like this, where there's just no correspondence whatsoever. So this is again highlighting the difficulties of using health data sets. I used to worry at ECMWF about biases of 5% in radiosonde humidity; when you start to work with health data, those were really the good old days compared to this. So you get very speckled maps when you look at skill as a function of the district, as I said. So I'm going to skip over these because I'm running out of time. We have tried to do a simple economic assessment. I think I'm in my last minute; no, I'm out, yeah. So basically we try to turn it into a hit-miss, cost-loss analysis, and you get this kind of plot; we've done this for basically all of the different sites, where this is our cost-loss ratio, the cost of actually making an intervention compared to the loss if you don't make the intervention. So up here you don't have an economic benefit, because the cost of intervention is so high that the forecast needs to be very accurate to have an economic benefit; down here the cost of intervention is so low you would simply always intervene, so again the forecast has to be extremely accurate to add value. And so we've been working to try and use these, because we feel that this perhaps helps to map the forecast performance a little bit better onto a decision point. But it's still very difficult: how do you actually assign these costs, especially when you're talking
about a disease that has implications of mortality? I mean, it's extremely difficult. You can put in a bed net cost, or screening costs, loss of productivity, to do these kinds of economic assessments, but the error bars are extremely large. And of course, I'm going to finish almost here, you've got this problem that when you have decisions, decisions are not necessarily made on a yes-no, black-and-white basis. We're very used to showing things like Brier skill scores, and that's where we finish, but when it really comes to decisions, you have lots of other factors that come into play. I mean, a famous example: I remember in Geneva there was a flash flood, a warning wasn't given, and there was a lot of clamour against the local council in Geneva, and now every time there's a cloud anywhere on the horizon they give a flash flood warning, because they don't want to get blamed. But on the other hand, I remember down in Giulianova one year they gave a warning of very poor weather, and then the council got sued because the tourists didn't turn up; they changed their holiday plans because, coming by car, they're very flexible at the last minute. And so all these kinds of decisions, I don't know, I don't want to sound too flippant, but when you're trying to consider where your forecast entry point comes in, all of these aspects have to be basically taken into account. I don't have time to talk about the genetic algorithm development, so I'm just going to finish there. I'll probably just leave this up. I've hinted at some of these aspects, particularly the difficulty in evaluating these early warning systems. I think it's quite exciting at this time, particularly with Copernicus moving towards open access to these seasonal forecasts; S2S gives us the ability to use this massive, brilliant dataset of hindcasts to actually test out our ideas, and then maybe possibly go to the [inaudible] and say, look what we can do in Uganda with this system. I think it's really
amazing. We need to be able to use this information as well to, shall we say, justify resource allocation; even in the malaria elimination phase you can still use climate information and forecasts to help you gauge how your elimination effort is actually going. I will just finish off there; we can talk about it afterwards. On the publicity side: some of the work that's been shown here is contributing to a new book on S2S, which should appear next year, edited by Andrew Robertson and Frédéric Vitart at ECMWF. And also, just to highlight, if you're interested in the S2S system and time scales, there are a couple of workshops coming up: one on S2S and teleconnections that Frédéric is also organising in October, and I think the application is still open until next month. There's one in Rwanda, so if anybody here is from that region in East Africa, it's in Kigali, so it's running there. And it's possible, it's not confirmed yet, there may be one next year in Paraguay as well, so our South American colleagues in the room might be interested in keeping an eye open for that. So I'll stop there; sorry I've overrun slightly.