OK, so I will give this introduction to the sub-seasonal to seasonal prediction project. I'm Frédéric Vitart. I work at the European Centre for Medium-Range Weather Forecasts (ECMWF), and, as Adrian said, I have been working there for quite a long time, 17 years. When I started there, I worked on seasonal forecasting; I did my PhD on tropical cyclones in seasonal forecasting systems, and I spent a few years working on seasonal forecasting. At that time, the Centre was doing medium-range forecasts and seasonal forecasts, and there was a need to fill the gap between the two, which is really the pre-history of S2S. That is when we started to design a system for sub-seasonal prediction, which became operational in 2004. Since 2004, I have been working on this system and on predictability at this time range. Over recent years, I have been asked to co-chair a new project for WWRP, which involves both WWRP and WCRP; I am co-chairing it with Andrew Robertson. I don't know if you are all familiar with the structure of WMO research, but there are two big research programmes. One is the World Weather Research Programme (WWRP), which usually deals more with NWP, that is, short and medium-range forecasts. The other is the World Climate Research Programme (WCRP), which is more on the side of seasonal forecasting and climate. This new prediction project makes a link between the two, which is why we are under both WMO WWRP and WMO WCRP. As Adrian mentioned earlier, I will be here for one week, so if you have any questions during this week, I will be happy to answer them. The following week, Andrew Robertson, the co-chair, will be here to answer all your questions regarding S2S. This presentation will have two parts.
In the first part, I will go into why we want a sub-seasonal prediction project: why was this project started, and what are its justifications? In the second part, I will go into the project itself. The time range S2S is targeting is more than two weeks, so beyond the medium range, but less than a season: between two weeks and about 90 days. The term S2S is a bit confusing because of the word "seasonal": we do not go into the seasonal range proper; we focus on the first season. As for the type of phenomena we are targeting: synoptic systems act more on days to weeks. The phenomena we are really targeting are intraseasonal oscillations such as the Madden-Julian Oscillation, teleconnections, stationary Rossby waves, blocking, and monsoons, which fit into this time scale; slower phenomena belong more to the seasonal time range. First of all, there is a very large demand for forecasts at this time range for applications, and quite a large number of the applications that currently require seasonal forecasts are also interested in forecasts at this time range. One is early warning of the likelihood of severe, high-impact weather (droughts, flooding, windstorms) to help protect life and property, and for humanitarian planning and response to disasters. Another is agriculture, particularly in developing countries, for example for wheat and rice production. Another is disease planning and control: good examples are malaria and dengue, which are sensitive to precipitation and temperature, so forecasts at this time range would be quite useful; and meningitis, which is very sensitive to winds. And there is hydrology: river flow, flood prediction, hydroelectric power generation, and reservoir management, for example.
So there is a large category of applications that really request forecasts at this time range. What is interesting is that weather and climate forecasts can span a continuum of time scales: seasonal, sub-seasonal and medium-range forecasts can all be used together, because all of them are useful to applications in different ways. For example, in agriculture, a seasonal forecast can be very important to inform a farmer's crop choice: with a seasonal forecast, he can decide which crop to plant for the next season. A sub-seasonal forecast can then help him decide on irrigation scheduling and on pesticide and fertiliser application. Together they make the crop calendar dynamic. In situations where seasonal forecasts are already in use, sub-seasonal forecasts can provide more frequent updates, for example of end-of-season crop yields. And there are situations where sub-seasonal forecasts may be especially important: those where seasonal predictability is weak and seasonal forecasts do not have much skill, such as the Indian summer monsoon, where sub-seasonal forecasts can be useful to predict the onset of the monsoon, and also dry and wet periods within the monsoon season. Here is an example of the use of forecasts across multiple timescales: the "Ready-Set-Go" system from the Red Cross and IRI. "Ready" corresponds more to the seasonal forecast: a long-range forecast with which you can start to train volunteers and sensitize communities. Then you go to "Set", which is where sub-seasonal (S2S) lies: you start to alert volunteers and warn communities, because you have a better idea of the event coming.
"Go" is where you have the short and medium-range forecast telling you the big event will happen: you activate volunteers, distribute instructions to the community, and evacuate if needed. So sub-seasonal forecasts can be very useful in this sort of multiple-timescale framework for making certain types of decisions. Here is an example that Andrew Robertson gave me. He was involved with the Indonesian meteorological service in case studies of rice planting in Indramayu, on the island of Java. The magenta columns represent the annual cycle of precipitation, with its maximum in December-January, and a dry season around July, August, September. The rainy season is long enough to allow two crop seasons: one from November to April, and a second from April to August. During normal years you have two crop seasons; for example in 1999-2000 (the yellow line) the first season peaked in December and the second in May. But in 1997-98, an El Niño year, the start of the rainy season was delayed by one month, which means that the first rice season started one month late; that was fine, the first rice season was fine. But the second rice season was also pushed back by one month and ended up in the very dry season, so the second season was a bit of a disaster. This is where sub-seasonal forecasts can be very useful: to give more up-to-date information on the onset of the rainy season and on how it will evolve. OK, so still on the application side: what applications usually ask for is the availability of very long hindcast histories, which are needed to develop statistical regression models.
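Since long hindcast histories are stressed here as the raw material for statistical models, below is a minimal sketch, on synthetic data (not from any real S2S model), of how a hindcast archive can be used to fit a linear calibration that removes a model's bias before real-time forecasts are issued:

```python
import numpy as np

def fit_calibration(reforecasts, observations):
    """Least-squares slope/intercept mapping reforecasts onto observations,
    the simplest statistical model one can build from a hindcast history."""
    slope, intercept = np.polyfit(reforecasts, observations, deg=1)
    return slope, intercept

# Synthetic hindcast archive: a model with a +2 warm bias and inflated variance.
rng = np.random.default_rng(0)
truth = rng.normal(0.0, 1.0, size=500)                     # "observed" anomalies
hindcast = 1.5 * truth + 2.0 + rng.normal(0.0, 0.3, 500)   # biased model reforecasts

slope, intercept = fit_calibration(hindcast, truth)
corrected = slope * hindcast + intercept                   # calibrated forecasts

raw_bias = float(np.mean(hindcast - truth))                # large positive bias
corrected_bias = float(np.mean(corrected - truth))         # ~0 by construction
```

The point is simply that the regression coefficients can only be estimated reliably when the hindcast history is long, which is exactly the application requirement described above.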
It is also important to have a very long history of forecasts for skill estimation, which is crucial for applications: users want to know how good the model is, or whether it has any skill at all, before using it. Applications also ask for the availability of daily data, especially for key variables like precipitation, near-surface temperature and wind speed. And of course they would like easy data access. OK, so that was the application point of view; now the scientific point of view. The main question is: can we deliver skilful, useful forecasts at this time range? This is a very interesting time range because, as I said, it links the weather and the climate worlds. But for a very long time the answer to this question was "probably not"; this time range was considered very difficult. One reason people thought it was so difficult is that it is not as well justified physically as medium-range forecasting and seasonal forecasting. Medium-range forecasting is typically an atmospheric initial-condition problem: the task is to get the best possible estimate of the initial conditions from observations, through data assimilation, and then to use a model to propagate those initial conditions in time, to predict their evolution. And according to Edward Lorenz, the atmosphere keeps the memory of its initial conditions for about two weeks. This type of predictability is sometimes called predictability of the first kind; that is medium-range forecasting. Seasonal forecasting is based on a completely different physical justification: from the atmospheric point of view, seasonal forecasting is a boundary-condition problem.
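As a toy illustration of skill estimation from a forecast history, the anomaly correlation coefficient (ACC) can be computed lead time by lead time. Everything here is synthetic, and the error-growth model is an invented assumption purely for illustration:

```python
import numpy as np

def anomaly_correlation(forecast, verification):
    """Anomaly correlation coefficient between forecast and verifying
    anomalies, the standard deterministic skill measure."""
    f = np.asarray(forecast) - np.mean(forecast)
    v = np.asarray(verification) - np.mean(verification)
    return float(np.sum(f * v) / np.sqrt(np.sum(f**2) * np.sum(v**2)))

rng = np.random.default_rng(0)
truth = rng.normal(size=2000)        # synthetic "observed" anomalies
leads = [1, 5, 10, 20]
acc_by_lead = {}
for lead in leads:
    # hypothetical forecast whose error grows linearly with lead time
    forecast = truth + rng.normal(scale=0.2 * lead, size=truth.size)
    acc_by_lead[lead] = anomaly_correlation(forecast, truth)
```

With a long enough verification sample, the decrease of ACC with lead time can be estimated with small sampling uncertainty, which is why long reforecast sets matter.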
Here, the predictability at the seasonal time scale comes from the fact that the boundary conditions (the sea surface temperature, the land surface, the sea ice, the snow) evolve on a much slower time scale than the atmosphere. So we can predict sea surface temperature anomalies months in advance; we can predict an El Niño six months in advance. And if we can predict the sea surface temperature six months in advance, we can say something about the climate in six months' time. That is seasonal forecasting, which is sometimes referred to as predictability of the second kind. Sub-seasonal prediction sits in between. For this range, which starts after two weeks, it was considered that the time range is too long for the atmosphere to still have a memory of its initial conditions, and too short for the boundary conditions to have evolved enough to give you skill beyond simple persistence. That is why some people called it the "predictability desert", and there has not been much activity in this time range. If you go to the library, you find piles of books on medium-range forecasting and on seasonal forecasting, but very little work has been done on this intermediate time range. Things have changed quite a bit over the last 20 years, in particular the last decade, thanks to the identification of sources of predictability at this time range. There is sea surface temperature, which can vary at this time range. There are land surface conditions: snow, soil moisture. There is the Madden-Julian Oscillation; you will hear a lot about the MJO over the next two weeks, because it is probably recognized as the main source of predictability at this time range. There is stratospheric variability. There are atmospheric dynamical processes like Rossby wave propagation and weather regimes. And there is sea ice cover, although it is difficult here to initialize a model with the observed sea ice thickness.
So there are several sources of predictability, which means we may be able to say something about the weather between two weeks and a season. Now I will go briefly over these sources of predictability; you will have lectures on each of them over the next two weeks, in more detail. First, the impact of soil moisture. This example is from a paper by Koster et al. in GRL in 2011, taken from a multi-model experiment called GLACE-2. Several models (from ECMWF and NCEP, for example) were initialized in two ways, in two streams. In the first stream, the models were initialized with the best available estimate of soil moisture: realistic soil moisture initial conditions. In the second set of experiments, the exact same models were run with the same initial conditions, except that the soil moisture initial conditions were scrambled: they took the soil moisture initial conditions from different years, so they were effectively wrong. The figure shows the difference in potential predictability of precipitation and 2-metre temperature between the correct and the incorrect initialization of soil moisture. Yellow-red means a positive impact; blue means a negative impact. For precipitation, there is not much impact, except maybe over the Great Plains in the U.S. for days 16 to 30 and even up to days 46 to 60 (I forgot to mention that the models were integrated for 60 days). But for 2-metre temperature there is quite a strong impact on potential predictability, particularly over some regions: North America, South America, part of South Africa, and part of Europe.
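The "potential predictability" compared in GLACE-type experiments can be illustrated as a signal-to-total variance ratio across an ensemble. The land signal, member counts and noise levels below are all invented for illustration; this is not the actual GLACE-2 diagnostic code:

```python
import numpy as np

def potential_predictability(ens):
    """Fraction of total variance explained by the ensemble-mean signal;
    ens has shape (n_cases, n_members)."""
    return float(ens.mean(axis=1).var() / ens.var())

rng = np.random.default_rng(1)
n_cases, n_members = 500, 10
land_signal = rng.normal(size=(n_cases, 1))   # invented slow soil-moisture signal

# "Realistic" initialization: all members share the same land signal.
realistic = land_signal + rng.normal(size=(n_cases, n_members))
# "Scrambled" initialization: each member sees an unrelated land state,
# mimicking soil moisture taken from the wrong year.
scrambled = rng.normal(size=(n_cases, n_members)) + rng.normal(size=(n_cases, n_members))

pp_realistic = potential_predictability(realistic)
pp_scrambled = potential_predictability(scrambled)
```

When members share the coherent land signal, the ensemble mean retains that signal and the ratio is high; scrambling the land state destroys it, which is the contrast the maps in the paper show geographically.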
For some other regions, like North Africa, there is not much impact. And this impact can be seen up to day 60, so soil moisture can be quite an important source of predictability at this time range. A second source of predictability, on which you will have a lecture next week, is the impact of sudden stratospheric warmings. For those of you not familiar with them: these are events which happen roughly once, sometimes twice, a year, always in wintertime. What happens is that the strong vortex above the North Pole splits. This is the time evolution of one sudden stratospheric warming event, which took place in 1979; it was the first one observed by satellite. You can see the low-pressure system splitting dramatically, to be replaced by high pressure. During such an event the temperature at the top of the pole can warm by 20 to 30 degrees, and the winds, which are usually counterclockwise, change sign and can rotate clockwise. So it is a very dramatic event, which can take about a week or two to happen. The first question people asked is: this is such a dramatic event in the stratosphere, there must be an impact on the troposphere. There was a famous paper by Baldwin and Dunkerton in Science in 2001, where they made a composite of 18 of those events, which are also called weak vortex events. They composited the Arctic Oscillation (annular mode) index, though it is much the same if you take geopotential anomalies. You see this very strong anomaly in the stratosphere, around 30 kilometres, and a sort of downward propagation to the lower troposphere, with an impact in the lower troposphere up to about 60 days. This shows that these stratospheric events can really impact the weather up to 60 days, which is exactly the S2S time range. Another source of predictability, probably the most important in the tropics, is the MJO.
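The compositing technique behind such weak-vortex studies is simple to sketch: average the anomaly evolution over a fixed window following each event date. The event dates and the decaying "response" below are synthetic, not the actual Baldwin-Dunkerton data:

```python
import numpy as np

def composite(series, event_starts, window):
    """Average anomaly evolution over `window` days after each event date."""
    segments = [series[i:i + window] for i in event_starts]
    return np.mean(segments, axis=0)

rng = np.random.default_rng(3)
n_days, window = 3000, 60
anomalies = rng.normal(scale=0.3, size=n_days)     # background daily noise
event_starts = list(range(50, 2700, 150))          # 18 synthetic events, as in the paper
response = np.exp(-np.arange(window) / 20.0)       # invented decaying response
for i in event_starts:
    anomalies[i:i + window] += response

comp = composite(anomalies, event_starts, window)  # noise averages out, signal remains
```

Averaging over many events suppresses the unrelated day-to-day noise, which is how a downward-propagating signal lasting tens of days can be isolated from noisy tropospheric data.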
Hai Lin will give quite a detailed presentation about the MJO this week. Just briefly: the MJO is an eastward-propagating disturbance, starting in the Indian Ocean, propagating eastward, crossing the Maritime Continent and stopping at the dateline. The convection stops at the dateline, but the MJO signal can continue in the upper troposphere and can do several loops around the globe. It is a 40-50-day oscillation, which of course has a huge impact in the tropics: it is a major source of predictability and of variability at the sub-seasonal scale there. But it can also impact the extratropics through Rossby wave propagation (Hai Lin will discuss that in more detail), affecting for example the NAO, which can in turn affect the MJO. So it has a quasi-global impact, and for sub-seasonal prediction it is probably one of the most important phenomena we want to simulate correctly in our models. Finally, another motivation for looking at sub-seasonal prediction was the fact that medium-range forecasts have improved greatly over the last decades. This is the evolution of a medium-range forecast skill score at ECMWF from 1998 until 2015: the forecast day at which the anomaly correlation drops to 0.8 (80%). There has been a gain of about one day of predictive skill per decade. In some years, such as the truly remarkable winter of 2009-2010, the day at which the anomaly correlation dropped to 0.6 went well beyond day 10. So a sense developed that there is real skill beyond day 10, and that we should start to look beyond the regular 10-day time range. Because of this physical justification for S2S and all this progress in skill scores, there was a recommendation from WMO CAS, the Commission for Atmospheric Sciences, one of the very high-level commissions at WMO.
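The skill measure described here, the forecast day at which the anomaly correlation falls below a threshold, can be sketched as follows; the exponential decay of skill is an idealization, not ECMWF's actual curve:

```python
import numpy as np

def skill_horizon(lead_days, acc, threshold=0.8):
    """First forecast day at which the anomaly correlation falls below
    `threshold`, linearly interpolated between bracketing days."""
    for d in range(1, len(acc)):
        if acc[d] < threshold <= acc[d - 1]:
            frac = (acc[d - 1] - threshold) / (acc[d - 1] - acc[d])
            return float(lead_days[d - 1] + frac * (lead_days[d] - lead_days[d - 1]))
    return None   # skill never drops below the threshold in this range

days = np.arange(0, 11)
acc = np.exp(-days / 9.0)              # idealized decay of skill with lead time
horizon = skill_horizon(days, acc)     # crossing of the 0.8 level, ~day 2 here
```

Tracking this crossing day year after year is what produces statements like "one day of skill gained per decade".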
CAS recommended in November 2009 that WCRP and WWRP set up S2S. This recommendation was followed in 2010 by a workshop sponsored by WWRP, WCRP and THORPEX, which took place in Exeter. One of the main recommendations of this workshop was that a planning group for S2S should be formed, with the mission of writing an implementation plan for the S2S project. So in September 2011 the S2S planning group was formed and held its first meeting. In June 2012 S2S was approved by the WMO Executive Council. In 2013 we had the second planning group meeting, and the implementation plan was finalized in March 2013. That was the start of S2S, with an opening ceremony on Jeju Island at the S2S launch workshop in November 2013, and the first international conference at NCEP in February 2014. I don't know if you are familiar with WWRP, but it is a research programme which, as I mentioned earlier, deals more with short and medium-range forecasts. One of the key activities of WWRP was a programme called THORPEX, which started in 2003 and finished in 2013: a 10-year project. When THORPEX stopped, its legacy was carried on by three legacy projects. One of them is PPP, the Polar Prediction Project; another is HIWeather, High Impact Weather. The Polar Prediction Project looks at polar weather from hours to seasons. HIWeather focuses on high-impact weather events at shorter range, over a few days, and more over urban areas. And S2S completes this sort of puzzle by looking at the time range between two weeks and a season. There are of course connections between the various projects; between PPP and S2S, for example, the prediction of sea ice.
The impact of sea ice on the S2S time range is a topic common to both. As for high-impact weather, S2S is concerned with it too: S2S can produce early warnings of some of those very severe weather events. So those are the three projects. The other two are WWRP-only projects; S2S is the only one under both WWRP and WCRP. So what is the mission of S2S? The main mission is to improve forecast skill and understanding on the sub-seasonal to seasonal time scale, with special emphasis on high-impact weather events; to promote its uptake by operational centres and exploitation by the application community; and to capitalize on the expertise of the weather and climate research communities to address issues of importance to the Global Framework for Climate Services. This is a five-year project that started in 2013, so we are already two years in. We have a project office hosted at KMA, the Korea Meteorological Administration, and NIMR, the research branch of KMA, on Jeju Island. The project is financed by a trust fund, with contributions from Australia, the USA and the UK; it is not much money, but enough to organize the steering group meetings and maybe one or two workshops. As for membership: we have two co-chairs, Andrew and myself; about nine members coming from various operational centres; people from universities; some ex-officio members; and liaison members from other working groups, who help us interact with those groups. The main plot for this morning is this one, which shows the structure of the S2S prediction project. The project is organized around six main sub-projects, each with a group leader and a team. The first one is verification and products.
One is on extreme weather. One is on Africa. We have a sub-project on monsoons, one on the Madden-Julian Oscillation, and one on teleconnections, which is the most recent: we added it about one year ago. Then we have cross-cutting activities: research issues relevant to all six sub-projects, for example predictability, teleconnections, ocean-atmosphere coupling, scale interactions, physical processes. Then there are modelling issues, also relevant to all of them. What is the best way to initialize the models? This includes, for example, the issue of coupled data assimilation for ocean-atmosphere initialization. What is the best strategy to generate ensembles? Some centres initialize in burst mode (we will go into that in more detail later this week): you run a large number of ensemble members a few times a week. Others use a lagged-ensemble approach: you run smaller ensembles more frequently and then combine them. What is the impact of resolution on forecast skill? How important is it? Here we mean the horizontal but also the vertical resolution of the atmosphere, and the resolution of the ocean as well as the atmosphere. Ocean-atmosphere coupling: when does it start to matter? Systematic errors: how do they evolve, and can we reduce them? And what is the value of multi-model combination? A lot of work has shown that multi-model combination can be beneficial for seasonal forecasting: it can produce more skilful, more reliable seasonal forecasts. For medium-range forecasting, work done with TIGGE showed that some single models can still beat the multi-model ensemble. It is not clear whether the multi-model ensemble is really valuable for this time range.
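The lagged-ensemble strategy mentioned above can be sketched as pooling members from several recent starts at the matching lead time. The dictionary layout, member counts and lag window below are hypothetical, for illustration only:

```python
import numpy as np

def lagged_ensemble(runs_by_start, valid_day, max_lag):
    """Pool members from all starts within `max_lag` days of the most
    recent start whose lead times still cover `valid_day`.
    runs_by_start: {start_day: array of shape (n_members, n_leads)}."""
    newest = max(runs_by_start)
    pooled = []
    for start, members in runs_by_start.items():
        lead = valid_day - start
        if newest - start <= max_lag and 0 <= lead < members.shape[1]:
            pooled.extend(members[:, lead])   # all members at the matching lead
    return np.array(pooled)

rng = np.random.default_rng(4)
# hypothetical setup: 4 members started each day, each run out to 10 lead days
runs = {day: rng.normal(size=(4, 10)) for day in range(4)}
ens = lagged_ensemble(runs, valid_day=5, max_lag=2)   # 3 starts x 4 members
```

The trade-off is visible in the code: widening `max_lag` grows the ensemble, but the extra members come from older starts and are therefore at longer, less skilful lead times.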
For sub-seasonal, in between, it would be interesting to see whether multi-model combination really brings something or not. And there is a need for applications: one important goal of S2S is also to promote the use of sub-seasonal forecasts for applications. We are doing that partly in liaison with SERA, the WWRP working group on Societal and Economic Research Applications. To address these questions, most of the sub-projects rely on the S2S database, which is a bit the foundation of this project: a collection of sub-seasonal forecasts from various operational centres. We hope this database will help address some of those scientific questions: the role of initialization, ensemble generation, resolution. I will come to that soon, but first I want to go into a bit more detail on each sub-project. First, the Africa sub-project. This is the most user-oriented, most application-oriented sub-project. Its main goal is to develop skilful forecasts on the S2S time scale over Africa, and to encourage uptake by national meteorological services and stakeholder groups. The objectives are to assess the performance of forecasts from 5 to 40 days ahead using the S2S forecast archive, with a focus on rain-day frequency, heavy rainfall events, dry spells, and monsoon onset and cessation dates; to develop metrics that are useful for farmers and other stakeholder communities; and to improve understanding of the climate modes that drive sub-seasonal variability in Africa and their representation in models. The Africa sub-project will work within the framework that came out of the 2013 Africa Climate Conference, CR4D (Climate Research for Development); an S2S activity is envisaged to be one of the first CR4D pilot activities. The MJO sub-project works in collaboration with the WGNE MJO Task Force, and is focused on one specific issue with the MJO.
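Two of the Africa-oriented metrics mentioned, rain-day frequency and dry-spell length, are easy to define precisely. The 1 mm/day wet-day threshold below is a common convention, not a value prescribed by the sub-project:

```python
import numpy as np

def rain_day_frequency(precip, threshold=1.0):
    """Fraction of days with rainfall at or above `threshold` mm/day."""
    return float(np.mean(np.asarray(precip) >= threshold))

def longest_dry_spell(precip, threshold=1.0):
    """Length of the longest run of consecutive days below `threshold`."""
    longest = current = 0
    for p in precip:
        current = current + 1 if p < threshold else 0
        longest = max(longest, current)
    return longest

# ten days of made-up daily rainfall (mm/day)
precip = np.array([0.0, 5.2, 0.1, 0.0, 0.0, 12.4, 3.3, 0.0, 0.4, 0.0])
freq = rain_day_frequency(precip)     # 3 wet days out of 10
spell = longest_dry_spell(precip)     # longest dry run is 3 days
```

Metrics like these can be computed from both the forecasts and the observations, so forecast quality can be assessed directly on the quantities farmers care about rather than on raw daily rainfall.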
As we will see later this week, most models have a problem propagating the MJO across the Maritime Continent: the MJO tends to weaken and die there too often in models. It also happens in the real world that some MJO events die over the Maritime Continent, but not as frequently as in models. So the goal of this sub-project is to try to understand the role of the Maritime Continent in the propagation of the MJO, and how it interacts with that propagation. The major objectives are to assess the fidelity of current models in simulating and predicting the MJO over the Maritime Continent across time scales, and to determine the role of multi-scale interactions, of topography and land-sea contrast, and of ocean-land-atmosphere coupling across the Maritime Continent. This will use the S2S database and also a field experiment, the Years of the Maritime Continent, a multinational effort which will take place in 2017, to address some of those objectives. And we will have a workshop on 11-14 April 2016 in Singapore between S2S and the MJO Task Force. Another sub-project is extreme weather. The major objectives are to evaluate the predictive skill and predictability of weather regimes and of extreme events such as droughts, floods, heat and cold waves; to assess the benefit of multi-model forecasting for extreme events; to improve our understanding of the modulation of extreme weather by climate modes; and to look at sub-seasonal prediction of tropical cyclones. A big part of it is also case studies of events with strong societal impacts. One of these case studies is the March 2013 cold wave over Europe. We are now also looking at two other case studies: Tropical Cyclone Pam, the strongest in the Southern Hemisphere last season, and the modulation of precipitation variability over the US West Coast during the strong 2015 El Niño event. Then there is the monsoons sub-project.
So here "monsoon" should have an "s": it is not only the Indian monsoon, but also the East Asian monsoon, the Australian monsoon, the American monsoons. The main goal is to develop a set of scientifically and societally relevant intraseasonal forecast products and metrics applicable to all the major monsoon systems, and also case studies of monsoon onsets. The plan is to use the S2S and CHFP databases to assess the skill of the various monsoons, and a compilation of observed monsoon onset dates has already been produced and is available from the S2S website. The verification sub-project: its major goal is to recommend verification metrics and data sets for assessing the forecast quality of S2S forecasts, and to provide guidance for a potential centralized effort compiling the forecast quality of the different S2S systems. Issues to address are the evaluation of current verification practice in S2S, the verification of user-relevant quantities, the provision of guidance on minimum standards, the promotion of centralized S2S verification efforts, and the evaluation of the benefit of the multi-model approach. Finally, teleconnections between the mid-latitudes and the tropics. This is a new sub-project, led by Cristiana Stan and Hai Lin. Its major objectives are to better understand tropical-extratropical interaction pathways on the S2S time scale, to identify periods and regions of increased predictability (what we call "forecasts of opportunity"), and to improve sub-seasonal forecasts of weather and climate for applications. Issues to be addressed are to understand the physical mechanisms of tropical-extratropical interactions, to develop new comprehensive estimates of tropical diabatic heating, and to identify the main model errors associated with teleconnections.
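As one example of the kind of probabilistic metric a verification effort might recommend, here is the Brier score and its skill score against a climatological-frequency reference; the probabilities and outcomes are invented:

```python
import numpy as np

def brier_score(prob_forecasts, outcomes):
    """Brier score for probability forecasts of a binary event
    (0 = perfect); outcomes are 0/1."""
    p = np.asarray(prob_forecasts, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

def brier_skill_score(prob_forecasts, outcomes):
    """Skill relative to always forecasting the climatological frequency."""
    o = np.asarray(outcomes, dtype=float)
    reference = brier_score(np.full_like(o, o.mean()), o)
    return 1.0 - brier_score(prob_forecasts, outcomes) / reference

probs = [0.9, 0.7, 0.2, 0.1, 0.8, 0.3]   # forecast probabilities of the event
obs   = [1,   1,   0,   0,   1,   0]     # whether the event occurred
bs = brier_score(probs, obs)
bss = brier_skill_score(probs, obs)
```

Scoring against a climatological reference matters at the S2S range, where beating persistence and climatology, rather than achieving deterministic accuracy, is the realistic goal.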
And there are plans for a sort of virtual field campaign for this sub-project, called the Year of Tropics-Midlatitude Interactions and Teleconnections, which would take place from mid-2017 to mid-2019. It is still at the planning stage; it has not been formally accepted yet. The goal will be a better understanding and prediction of tropical-extratropical interaction pathways on sub-seasonal time scales. OK? So one big role of S2S is also to organize conferences, education and outreach, and we already have quite a series of workshops. In November 2013 we had our first workshop on Jeju Island, and an AGU session. In February 2014 we had our first international conference at NCEP. In 2014 we had a session at the WWRP Open Science Conference. In June 2015 we had our second workshop, dedicated to monsoons, organized at Jeju by KMA. Then there was a training course, organized last year with APCC in Busan. A few weeks ago, last November, we had another workshop at ECMWF on sub-seasonal predictability. And now we have the training course here at ICTP. In December we have an AGU session, and the next workshop will be in April 2016: it will be about the Maritime Continent, focusing really on that topic. Before I go to the database, I just wanted to mention that we have a web page where you can find all the information on S2S, at http://s2sprediction.net. You can get the latest news there; for example, the re-forecast data now available from ECMWF. All documents about S2S are available. You can also find information about the sub-projects, including the details of each sub-project's science plan, and each sub-project has a wiki page. So you are invited, if you have any comments or want more information about the sub-project activities, to send an email to the sub-project leaders.
The purpose of those sub-projects is also to enable large community efforts. So if you are interested in one of the six topics I mentioned, please go there and contact the chairs; there are group mailing lists for future activities. There is also a menu for the database, which I will come to later today, plus pages for meetings, lists of people, and various links. OK, now to the database. The last part of this talk will be about the S2S database, which, as I said, is the foundation of the S2S project and its first deliverable. First of all, this database is modelled on TIGGE. How many of you are familiar with the TIGGE database? One, two, three, four; so, not many. There will be a presentation on Thursday or Friday, I think, describing TIGGE in more detail, so I will not say too much about it here; just that S2S follows in the steps of TIGGE, a database focused on medium-range forecasts, up to 15 days, and extends the approach toward 45 to 60 days. It is a "database of opportunity". There are two types of databases. Many of you will be familiar with DEMETER or ENSEMBLES for seasonal forecasting: those are databases where the framework was decided first, and then the centres were contacted: "we need you to provide forecasts with, for example, 10-member ensembles starting on the first of each month". All the details of the database were designed first, and then the centres provided the forecasts, so each centre had to run specific integrations for the database. A problem with this type of database is that it gets old very quickly, because the models keep evolving.
And the models in 2015 are much more skillful than the models were in 2003 or so, when ENSEMBLES or DEMETER were set up. So what we do here is to produce what we call a database of opportunity. We don't ask the centres to do specific runs; we simply ask all the operational centres to provide us with what they are already running and producing for their operational forecasts. Which means that, as we will see later, it is a somewhat heterogeneous database: different centres follow different protocols, whereas databases like DEMETER or ENSEMBLES are very homogeneous, all following the same protocol. So this database includes daily real-time forecasts, and also reforecasts, which are necessary to calibrate the real-time forecasts and to assess the skill of the models. It's a research database, which means that we do not provide true real-time data: the data we provide are three weeks behind real time, so that they cannot be used for really operational forecasts. TIGGE is two days behind real time, for those of you who are familiar with it. Also, although the models are very different, they are all archived on the same grid, a 1.5 by 1.5 degree grid. This is the grid on which they are archived, not the grid on which they are run. Most models run at a much higher resolution, but the archive contains only data on the 1.5 by 1.5 degree grid, which is roughly the grid you get from ERA-Interim for verification. As for the variables, we archive about 80 of them, including ocean variables, stratospheric levels, moisture and temperature; we will go into more detail just after. The data are archived in GRIB2, but a NetCDF conversion is planned. There is already a way to convert the data to NetCDF; it's a rather quick-and-dirty way, but work is in progress to have a much more compliant NetCDF version. The database opened in May 2015, and currently the data of seven models are available online. We will go into more detail about that today.
Now the list of variables archived. We archive upper-level fields on a daily basis; most data are archived on a daily basis, while some are archived four times a day: maximum and minimum two-metre temperature and total precipitation are archived four times a day, six-hourly. You will find U, V, T and geopotential height on 10 pressure levels up to 10 hPa, specific humidity up to 200 hPa, vertical velocity at 500 hPa, and PV at 320 K. Single-level fields are archived either as instantaneous fields, such as mean sea-level pressure, surface pressure, the land-sea mask and orography, or as accumulated fields, such as precipitation, snowfall and so on, plus the fluxes. And some variables are archived as daily means, the average of the four six-hourly values, such as CAPE, skin temperature, two-metre temperature, snow albedo and so on. So this is the list of variables you can find in this database, and as I said, we are working to extend this list. We actually plan to have vertical velocity on all 10 pressure levels, and we also plan to add more ocean subsurface variables, like mixed-layer depth, surface currents, and heat content in the top 300 metres. Now, the contributing centres to the S2S database. We have 11 data providers: NCEP; Environment Canada; UKMO; ECMWF; Météo-France; CNR-ISAC from Italy; the Hydrometeorological Centre of Russia; CMA from China; KMA from Korea; JMA from Japan; and the Bureau of Meteorology. And we have two archiving centres, ECMWF and CMA, both of which have opened their data portals. I will go into that in more detail later on, but this is a view of the data portal at ECMWF, where, on this web page, you can select which model you want (we have the seven models here), you can choose the date of the forecast or reforecast you want, or you can select a month, for example all the reforecasts for January of a given year.
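Among the variables just listed, the daily-mean fields are simply the average of the four six-hourly values of each day. As a minimal sketch of that averaging (the array names and the tiny grid are my own illustration, not the archive's actual layout):

```python
import numpy as np

# Hypothetical 6-hourly skin-temperature series: 8 time steps (2 days,
# 4 steps per day) on a tiny 2 x 3 lat-lon grid, in kelvin.
six_hourly = np.arange(8 * 2 * 3, dtype=float).reshape(8, 2, 3) + 280.0

# Group the time axis into days of four 6-hourly steps, then average each day.
n_days = six_hourly.shape[0] // 4
daily_mean = six_hourly.reshape(n_days, 4, *six_hourly.shape[1:]).mean(axis=1)

print(daily_mean.shape)  # one field per day: (2, 2, 3)
```

The same reshape-and-average pattern works for any variable archived four times a day.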
And then you can select the steps and then the parameters, and you get your data automatically. But I will go into more detail about this later today; there will be a session with Adrian on this. Another point I want to make about the S2S database is that the production centres provide S2S with near-real-time forecast data, and the data portal, as I mentioned, is three weeks behind real time. That's for the research and applications community. But in parallel, for those of you interested in real-time forecasts, there is another activity taking place, organized by the WMO Lead Centre on sub-seasonal to seasonal forecasts, which is KMA: the plan is to take a subset of the data in real time and to try to produce some multi-model combinations. It's currently really in testing; it's not in production yet. But if successful, the idea is to provide WMO users and met services with near-real-time sub-seasonal to seasonal forecasts. And the S2S database will be useful as a way to allow this to happen. OK. What is the status of the database? Access opened to researchers on the 6th of May. Currently we have data from seven data providers: ECMWF, NCEP, JMA, the Bureau of Meteorology, CMA, Météo-France and HMCR. We should very soon have the one from Italy. We have near-real-time data from the 3rd of January 2015 onwards from these centres, and reforecasts are also available from all those centres. As for usage: it opened six months ago and we already have about 200 users, over 30,000 requests, 25 terabytes. So people really are using this database. The plan was to have all eleven providers by the end of 2015; that may not be possible, we will probably have two or three more, and the rest early in January. I think most of the ones which are not there yet are in very good shape now.
The goal is also to have new ocean variables, and to have a NetCDF conversion at some point next year. OK. So here is an example of S2S usage per country. So far we have users from 42 countries. The majority of them come from Europe, the US and China, but I think it's quite nice to see some usage from Indonesia, from Iran, from Kenya, Vietnam and so on. So there seems to be quite a large number of users from other countries; some of them may actually be here in this room. OK. So this plot, which you may see several times over the next two weeks, shows the details of the models which are part of the S2S database. We have here the 11 models, from ECMWF, the Met Office, NCEP, Canada, and so on. And this is the time range. We archive forecasts only up to day 60. Not all models go to day 60: some stop at day 32, for instance, or day 34. But all of them go out at least one month, and some of them go out to month two. In terms of resolution, there is a lot of difference between models. Some are quite high resolution, like ECMWF and UKMO, at about 50-kilometre resolution; some other models have a much lower resolution, the lowest being BOM, at about 250 kilometres. But all of them are archived on the 1.5 by 1.5 degree grid, except those whose native resolution is lower than 1.5 by 1.5. Then ensemble size and frequency, and that's where there is a lot of difference. This plot, I think, illustrates very well a certain, let's say, diversity between centres on what the best strategy is for this time range. If you look at TIGGE, for medium-range forecasts, the protocol is about the same for all models: all models start two or four times a day and run for 15 days, so it's a very homogeneous database. For seasonal forecasting, if you look at databases like ENSEMBLES or the CHFP, for instance, all models start on the first of the month.
Whereas here, you have a mixture of various strategies, because some models here provide sub-seasonal forecasts from their seasonal forecasting system: their sub-seasonal forecast is simply their seasonal forecasting system for the first 60 days, with more ensemble members or run more frequently. Whereas for other centres, like ECMWF, the sub-seasonal forecasts are provided by an extension of the medium-range forecasting system. So you have a mixture of different strategies, and several classes of models. First, those which are run on a weekly basis, like ECMWF, twice a week, with a large ensemble size; those are burst ensembles, where you produce a forecast less often but with a very large number of ensemble members. Then you have those produced on a daily basis, like the UK Met Office, with a small ensemble size: more frequently, but with a smaller ensemble. For the Met Office, for instance, the sub-seasonal forecasts are produced by a lagged combination of the past week's starts: seven days times four members gives 28 ensemble members. And then you have Météo-France, which is once a month, because it is basically a seasonal forecasting system. For the reforecasts, as you can see, there are also a lot of different strategies. Some reforecast sets are fixed: those are for models like NCEP's, which use a fixed version of their model for several years, and they produce their reforecasts once and for all before the start of operations. So for as long as this version, CFS version 2, is used, the same set of reforecasts will be used. On the other hand, you have other centres, like ECMWF or Environment Canada, where the model version changes continually, several times per year, and the reforecasts, which have to be consistent with the real-time forecasts, have to be produced on the fly.
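The lagged-ensemble idea mentioned above, pooling the past week's daily starts into one larger ensemble valid for the same target period, can be sketched very simply (the numbers below are synthetic; only the 4-members-per-day times 7-days structure comes from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical: one forecast value per member (e.g. a weekly-mean T2m
# anomaly at a point). Four members are started each day.
members_per_start = 4
n_starts = 7  # pool the starts from the past seven days

per_start = [rng.normal(loc=1.0, scale=0.5, size=members_per_start)
             for _ in range(n_starts)]

# The lagged ensemble simply pools all members valid for the same target.
lagged_ensemble = np.concatenate(per_start)

print(lagged_ensemble.size)        # 7 days x 4 members = 28 members
print(round(lagged_ensemble.mean(), 2))
```

The trade-off against a burst ensemble is that the older starts carry slightly less skill for the common target period, in exchange for a forecast being available every day.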
So each week you produce the reforecasts that will be used to calibrate your forthcoming real-time forecasts; those are called on-the-fly reforecasts. And here the reforecast length can also vary a lot. Some have a fairly short reforecast period, like UKMO, and some a much longer one, like BOM, 1981 to 2013. Here again, the reforecast frequency can be either monthly or daily, and the reforecast ensemble size can also vary, from sometimes only one ensemble member up to 33 for BOM. So you see it's very heterogeneous, but there are still enough commonalities between those models to be able to look at multi-model forecasting, for instance. For instance, practically all those models, except maybe Météo-France, which is once a month, can be used to produce a real-time forecast every Thursday. And here is an example of a multi-model forecast of two-metre temperature anomalies. Here we look at the time range of days 19 to 25, and the forecast start is Thursday, 11 June 2015. The top left panel is the verification for this week, taken from ERA-Interim: we take the anomalies as ERA-Interim for 2015 minus the past period, 1999 to 2010. So we get this anomaly, where red means a warm temperature anomaly and blue means a cold temperature anomaly. This example was very important for Europe, because we had the heat wave during these last weeks of June, which we can see here. And here are the forecasts from NCEP, BOM, JMA, ECMWF, and CMA, starting from 11 June 2015, for this time range, days 19 to 25. These maps show anomalies computed as the ensemble mean of the real-time forecast minus the ensemble mean of the model's reforecasts at the same time range. Once again, red means a warm anomaly, blue means a cold anomaly. And the white areas represent areas where there is no significant difference between the distribution of the real-time forecast and the distribution of the model reforecasts.
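The anomaly maps just described can be sketched in a few lines: the plotted field is the real-time ensemble mean minus the ensemble mean of the model's own reforecast climatology at the same lead time, with non-significant points masked out (shown white). The talk does not say which significance test is used, so the Welch t statistic below is my own assumption, and all the data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical data: 50 real-time members and 220 reforecast members
# (e.g. 20 years x 11 members) of week-3 T2m on a 4 x 5 grid.
realtime   = rng.normal(loc=1.0, scale=1.0, size=(50, 4, 5))
reforecast = rng.normal(loc=0.0, scale=1.0, size=(220, 4, 5))

# Anomaly: real-time ensemble mean minus the model's own reforecast
# climatology ensemble mean at the same lead time.
anomaly = realtime.mean(axis=0) - reforecast.mean(axis=0)

# Assumed significance test (Welch two-sample t statistic, |t| > 2 kept).
n1, n2 = realtime.shape[0], reforecast.shape[0]
var1 = realtime.var(axis=0, ddof=1)
var2 = reforecast.var(axis=0, ddof=1)
t_stat = anomaly / np.sqrt(var1 / n1 + var2 / n2)
significant = np.abs(t_stat) > 2.0  # non-significant points would be white

print(anomaly.shape, significant.shape)
```

Computing each model's anomaly against its own reforecast climatology, rather than against a common analysis climatology, is what removes each model's mean bias before the multi-model comparison.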
So for this event we can compare the five different models. There are some commonalities: all of them have this signal over the east Pacific, some stronger than others. But not many of them actually captured this warm anomaly, which was the main interest here. NCEP didn't get it, BOM is not very well located, CMA didn't get it, ECMWF seems to have got it somehow, and JMA didn't really get it. Now I'm showing another example, which is the very cold spring of 2013 over Europe. In the UK, it was the second coldest March since 1910, and it was associated with a negative phase of the North Atlantic Oscillation. So here, again, we look at four models from S2S. This time I don't put the names of the models; it doesn't really matter. The top left panel once again represents the anomalies from ERA-Interim, and we see this very strong cold wave over the north of Asia, which extends over western Europe and also part of North America, and a warm anomaly over these other areas. And these are the forecasts, this time for the range of days 26 to 32, so fairly far out, week four over Europe. Once again we compute the anomalies from each model based on its own climatology. And we can see that most models actually predicted some of the patterns fairly well at this time range, days 26 to 32. All of them basically predicted a cold anomaly over Asia. The forecast for western Europe was a bit more mixed: Model 1 predicted it quite well, Model 2 did not extend it far enough to the west, Model 4 actually got a warm anomaly, and Model 3 got a neutral signal. So that is the type of product you can already produce from this database, which is something we will train on, probably tomorrow and during the rest of the week. It's already a nice way to look at case studies for each of your countries; that's the first step.
Or if you had a very big event in your country, you can see how the various models predicted it one, two, three, four weeks in advance, compare between the models, and see whether a multi-model combination would have been successful or not. Here is another case study, because we can do something more sophisticated than just looking at temperature anomalies: we can also track tropical cyclones explicitly. The study here was to track the tropical cyclones in the S2S database in five different models: NCEP, ECMWF, the Bureau of Meteorology, JMA and CMA. So, as I said, last March there was a very strong tropical cyclone called Pam, which had a devastating impact on the islands of Vanuatu. The goal of this case study, from the extremes subproject of S2S, was to see if the models give you an indication of an increased probability of an extreme event one, two, three, four weeks in advance. For that, we tracked tropical cyclones in those five different models, and once the tropical cyclones have been tracked in all ensemble members, we can compute the density of tropical cyclones in the real-time forecasts and in the reforecasts and compute the anomaly. So blue-green means there is a reduced risk of a tropical cyclone strike, and red means there is an increased probability of a tropical cyclone strike within 300 kilometres. These forecasts are for days 19 to 25, and days 12 to 18 for the forecast issued a week later, and this is a multi-model combination of those five models. The dot represents the position of Vanuatu, and it shows that for this time range, three weeks in advance, the models predicted an increased risk of tropical cyclones here, which actually coincided with an active MJO phase over the west Pacific, which can generate twin tropical cyclones.
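Once the cyclone tracks have been extracted from every ensemble member, a strike probability like the one shown here can be computed as the fraction of members with at least one track point within 300 km of a location. A self-contained sketch (the tracking itself is a separate step not shown here, the toy tracks are invented, and Vanuatu's coordinates are approximate):

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between points given in degrees."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def strike_probability(tracks, lat0, lon0, radius_km=300.0):
    """Fraction of ensemble members with any track point within radius_km."""
    hits = 0
    for track in tracks:               # one (lat, lon) array per member
        lats, lons = track[:, 0], track[:, 1]
        if haversine_km(lats, lons, lat0, lon0).min() <= radius_km:
            hits += 1
    return hits / len(tracks)

# Toy tracks for 4 members; Vanuatu is near (-17.7, 168.3).
tracks = [
    np.array([[-14.0, 167.0], [-16.5, 168.0], [-18.0, 168.5]]),  # passes close
    np.array([[-13.0, 160.0], [-15.0, 161.0], [-17.0, 162.0]]),  # stays west
    np.array([[-15.0, 168.0], [-17.5, 168.3]]),                  # passes close
    np.array([[-10.0, 175.0], [-12.0, 176.0]]),                  # stays away
]
print(strike_probability(tracks, -17.7, 168.3))  # 2 of 4 members -> 0.5
```

The anomaly shown on the maps would then be this probability from the real-time forecast minus the corresponding climatological probability from the reforecasts.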
So here we can see that the models actually predicted an increased risk of a landfall over Vanuatu three to four weeks in advance, and two weeks in advance there was an even higher risk, although the maximum was a bit too far to the west. But this shows that already at this time range there is some predictability, thanks partly to the MJO. So we are monitoring, we are producing various anomaly products from the S2S database at ECMWF. So far they are not open to the public, but we plan to make them available. For each case study we produce anomalies for all the models available, for various types of anomalies, several parameters, and several products like anomaly maps, Hovmöller diagrams, and MJO diagrams too. We have also started to do verification, for example Madden-Julian Oscillation (MJO) skill scores from various models, from JMA, BoM, and looking at the multi-model combination, which here, for those five models, does not actually beat the best of the models. So this is one case where the multi-model combination doesn't really improve the skill. We can also look at sources of predictability, like sudden stratospheric warmings. This was done for January to May 2015, showing that some models are able to predict an index of sudden stratospheric warming up to 25-30 days ahead, which I think is quite impressive and shows that we may be able, at this time range, to get skill from the stratosphere. And finally, we also did some work on the clustering of weather regimes over Europe, the North Atlantic weather regimes: the four main ones, NAO positive and negative, European blocking and the Atlantic ridge. And we have already looked at the skill of some of those models in predicting those four regimes.
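A word on how the MJO skill scores mentioned a moment ago are typically computed: a standard metric is the bivariate correlation between the forecast and observed (RMM1, RMM2) index pairs, with skill often considered useful while it stays above about 0.5. The talk does not state the exact metric used for these S2S scores, so this is an assumption, and the series below are synthetic:

```python
import numpy as np

def bivariate_correlation(obs, fcst):
    """Bivariate correlation between observed and forecast (RMM1, RMM2) pairs.

    obs, fcst: arrays of shape (n_times, 2), one (RMM1, RMM2) pair per time.
    """
    num = np.sum(obs[:, 0] * fcst[:, 0] + obs[:, 1] * fcst[:, 1])
    den = np.sqrt(np.sum(obs ** 2)) * np.sqrt(np.sum(fcst ** 2))
    return num / den

rng = np.random.default_rng(2)
obs = rng.normal(size=(100, 2))                     # synthetic "observed" RMM
perfect = bivariate_correlation(obs, obs)           # identical series -> 1.0
noisy = bivariate_correlation(obs, obs + 0.5 * rng.normal(size=(100, 2)))

print(round(perfect, 6))   # 1.0
print(0.0 < noisy < 1.0)
```

Evaluating this correlation as a function of lead time, per model, is what produces the kind of skill comparison described here, including the check of whether the multi-model combination beats the best single model.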
One of the main conclusions of this intercomparison was that, in general, ECMWF was outperforming the other models, but also that NAO+ and NAO- are generally more predictable than the other weather regimes, like the Atlantic ridge. And finally, and then I will stop soon, we looked at the regime transitions, which are very important: it is not only a matter of predicting the occurrence of one regime, but also of seeing whether the model is able to predict well the transition from one regime to another, from blocking to NAO+, for instance, blocking to NAO-, NAO+ to blocking, NAO- to blocking. This shows the transition frequency in the model, which is the shaded area, versus the analysis, which is the other bars. And it shows that the models actually do a fairly good job of reproducing the climatology, that is, the number of cases where you go from one transition to another. And when you look at the skill in predicting the transition from one regime to another, we see that most models also have some good skill up to days 15 to 21. So, conclusions. Sub-seasonal is a very important time range, which links the weather and climate communities. S2S is one of the three post-THORPEX legacy projects. What S2S can do for you, and for us, is to provide a framework to facilitate international collaborations: there is the database, there are workshops and coordinated experiments, it can influence funding agencies, we hope it can help draw more research into this field, and it supports the training and promotion of early-career scientists. Do you have any questions?