The first lecture will be held by Frédéric Vitart. Frédéric has a scientific background in weather and climate and over 20 years of expertise in operational probabilistic forecasting and numerical model development at ECMWF, the European Centre for Medium-Range Weather Forecasts, which provides arguably the best weather forecasts in the world. He is also the co-chair of the S2S working group under the World Meteorological Organization. Frédéric, please go ahead, share your screen, and welcome. Okay, I'll share my screen now. Yes, please. Okay, can you see it? Yes, we can; go full screen. Thank you very much for this invitation; it is a pleasure to give a talk at the ASP. I will give this talk on subseasonal prediction. The main goal of this talk is to set the scene, so it will be a relatively general overview; we will of course have much more detail on the individual components in the next talks. So the question is: what is subseasonal prediction? There have been several definitions of what we call S2S. Sometimes it refers to the time range between two weeks and up to a decade. Here we will take the most restrictive definition of S2S, which is also the original one: the time range between two weeks and a season, that is, the time range between weather forecasting and seasonal forecasting. If we look at weather forecasting on one side and climate prediction on the other, the two have a lot in common: for the atmosphere we use the same type of model, solving the same laws of physics. But there are some significant differences. Numerical weather prediction started in the 1950s. This is a picture of the team behind the first successful numerical weather forecast, which was run on the ENIAC in Princeton; you have von Neumann, the second from the left, and Jule Charney here.
So weather forecasting has quite a long history, and it is essentially an initial condition problem. The issue is to obtain the best possible estimate of the initial conditions, assimilating as many observations as possible, and then to use a numerical model to propagate those initial conditions forward in time to predict the evolution. The models used operationally are relatively high-resolution global models: the state-of-the-art models in operational centres have a resolution of about 10 kilometres, and regional models have much higher resolution, down to a few hundred metres. They usually have low complexity, by which I mean that the boundary forcings, such as SSTs and sea ice, are often simply persisted; at this time range it is assumed that those variables do not vary enough to have a significant impact on the weather. The goal is to predict the weather at a given location, for a given time, as precisely as possible. Following Lorenz's work, predictability at this time range has a limit of about two weeks, due to the fact that small perturbations in the initial conditions can give a completely different solution. Seasonal and climate prediction, on the other hand, also started in the early 1950s, with Norman Phillips. Here the philosophy is different: it is more of a boundary condition problem, sometimes referred to as predictability of the second kind. The boundary conditions, such as the SSTs, sea ice, and land surface, vary much more slowly than the atmosphere and have much longer predictability: for example, we can predict El Niño anomalies more than six months in advance, sometimes up to a year.
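The sensitivity to initial conditions mentioned above can be illustrated with the classic Lorenz (1963) toy system. This is a minimal sketch, not part of the lecture: a perturbation of one millionth in one variable grows until the two "forecasts" are as far apart as the attractor allows.

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz (1963) system."""
    x, y, z = state
    deriv = np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return state + dt * deriv

def run(state, n_steps=5000):
    for _ in range(n_steps):
        state = lorenz63_step(state)
    return state

# Two "forecasts" from almost identical initial conditions.
control = np.array([1.0, 1.0, 1.0])
perturbed = control + np.array([1e-6, 0.0, 0.0])
a, b = run(control), run(perturbed)
print("separation after 5000 steps:", np.linalg.norm(a - b))
```

This is exactly why ensembles (many runs from perturbed initial conditions) are needed: beyond some lead time, a single deterministic trajectory carries little information.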
So if we can predict six months in advance what the state of the SSTs in the Pacific will be, then we can predict six months in advance their impact on the climate. The complexity of these models is usually much higher than in the medium range, because we need to predict the evolution of those boundary conditions: we need a coupled system, with an ocean model and a sea-ice model, and if we go even further in time, to climate prediction, we need to predict the vegetation, the glaciers, and so on. These runs are much longer than weather forecasts and have much higher complexity, so they usually have much lower resolution than weather forecasting models, typically between 100 and 150 km. And at this time range we do not, of course, predict the occurrence of specific weather at a specific location; we look at the prediction of future changes in the climate's probability distribution over a large averaging period. S2S sits in between those two, and one challenge, of course, is that it has to combine elements of both time ranges. One issue, if I come back to this plot, is that it is a time range at which the skill of weather forecasts has become very low, and a time range often considered too short for the boundary conditions to evolve enough to bring skill beyond persistence. So for a long period of time this time range was ignored and considered a sort of predictability desert. It is only over the last 20 to 30 years that there has been renewed interest in it, so the history of S2S is much, much shorter than that of weather, seasonal, and climate prediction. This interest was triggered by the discovery of several sources of predictability: it was found that, after all, it was probably not such a predictability desert as we thought.
And this is the same plot as shown previously, with several sources of predictability at this time range: you have the MJO, the stratospheric polar vortex, ENSO, and the land surface, which can also play a role. The main sources of predictability include the Madden-Julian Oscillation, soil moisture, stratospheric initial conditions, Rossby waves, SSTs, sea ice, and maybe aerosols. I will now describe two of them in a little more detail; we will of course have much more detail in the lectures later today and this week. The first one is the Madden-Julian Oscillation. This is probably one of the most important sources of predictability at this time scale, one of the pillars of predictability for S2S. The MJO is characterised by convection which forms over the Indian Ocean and then moves eastward at a speed of four to eight metres per second to reach the dateline after about 40 to 50 days. Here we have the different phases of the MJO, taken five days apart, and we see the convection moving eastward, crossing the Maritime Continent and going up to the dateline. The MJO then continues its propagation around the globe, but in the upper-level winds. The MJO has of course a very important impact in the regions where the convection is propagating, along the tropics and over the Maritime Continent. It was also found to have a very strong impact on the Asian and Australian monsoons, on the extratropics through Rossby wave propagation, and on tropical cyclone activity. So the MJO actually has a very large impact which affects most of the Earth. And since we can predict the MJO more than two weeks in advance, we can also predict its impacts more than two weeks in advance, which makes it an important source of predictability beyond two weeks.
Another source of predictability is the stratosphere, through events called weak vortex events. Here is an example of one of those events, which took place in January 2021; these events take place about once a year at most. This shows the state of the stratosphere on 26 December 2020, which is close to the normal state, with this big polar vortex in the stratosphere above the pole. About ten days later, we see this polar vortex strongly disrupted, and the temperature above the Arctic can increase by about 40 degrees during this period. So this is a very spectacular event which, as I said, happens less than about once a year, but which has quite a significant impact on the weather in the 30 days that follow. Here is a composite of many of those events, which shows that in the following 30 days you tend to get warmer temperatures over southern Europe and cold temperatures over northern Eurasia and close to the eastern US. So if we know that a stratospheric warming is forming, then we can say something about what the weather may look like over the next 30 days. S2S prediction is challenging because, unlike at short range, where you expect skill every day (there are of course times when the forecast can be badly wrong, but overall there is always skill), at the S2S time scale the skill is very low most of the time, but there are times when you can expect the skill to be much higher than normal, due to the occurrence of those sources of predictability. This generates what we call windows of opportunity for forecast skill. So for example, when you have a strong MJO, or when a weak vortex event is happening, you can expect the predictability to be higher in the following weeks.
So here is an example of the prediction skill of models: the week-3 prediction skill of the Northern Annular Mode at 1000 hPa in the S2S models. This panel shows the skill at week 3 for neutral stratospheric vortex conditions, which is the situation most of the time; we can see that the skill is relatively low. But when there is a weak stratospheric vortex event in the initial conditions, you can see that practically all models have higher skill following these weak vortex conditions. Similar results are found following strong vortex conditions. For the MJO it is similar: if you have an MJO in the initial conditions, the models tend to be more skilful than when there is no MJO. Identifying those windows of opportunity is one of the main challenges for users of S2S forecasts, to better understand when they can trust those forecasts. Another reason why we do subseasonal prediction, which was already mentioned earlier, is the need from applications which would benefit from information at this time scale. This plot shows, for various applications, the different types of decisions that would benefit from information at different time scales. If we take agriculture, for instance, seasonal forecasting would be important for the choice of seeds, but S2S prediction can be useful for scheduling planting, irrigation, and the application of plant nutrients. For maritime planning, this time range can be useful to design ship routing. So there is quite a large demand for prediction at this time range, and quite a need for skilful and reliable forecasts between two weeks and a season. So can we deliver this type of forecast? First of all, how are S2S forecasts produced? This figure is from the ECMWF S2S forecasting system, but it is similar in other centres.
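The conditional-skill idea behind these windows of opportunity can be sketched with synthetic numbers (everything below is made up for illustration, not real forecast data): skill is measured separately for start dates with and without an active source of predictability in the initial conditions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic illustration only: week-3 "forecasts" whose error is smaller
# when a source of predictability (an active MJO, a weak vortex event...)
# is present at the start date. The 30% window frequency and the noise
# levels are made-up numbers.
n = 400
window = rng.random(n) < 0.3                # start dates inside a "window"
obs = rng.standard_normal(n)                # verifying observations
noise = np.where(window, 0.5, 2.0)          # smaller error inside windows
fcst = obs + noise * rng.standard_normal(n)

def corr(x, y):
    """Anomaly correlation between forecasts and observations."""
    return float(np.corrcoef(x, y)[0, 1])

print("skill inside windows of opportunity:", corr(fcst[window], obs[window]))
print("skill outside windows:              ", corr(fcst[~window], obs[~window]))
```

Stratifying the verification by initial state, as in the Northern Annular Mode example above, is exactly this kind of conditional computation.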
There are basically three parts. One is the initialisation part, where we collect as many observations as we can get access to: this includes satellite data, and also buoys, airplanes, and so on, and for the ocean we also collect all types of observations, from ship data to buoys. Using those observations, we then use a data assimilation system, which makes use of a model, to create the initial state of the model. This gives us the best estimate of the current state of the atmosphere, one that is consistent between the variables and on the same grid as the model. Then we run the model many times, to sample the uncertainty in the initial conditions: as I said earlier, following Lorenz's work, tiny perturbations of the initial conditions can lead to different solutions. Typically we run the coupled ocean-atmosphere system about 50 times, for a period of about 46 days. The third stage is forecast calibration and the creation of products. One question is what the appropriate type of product is for this time range. Once again, we are between medium-range and seasonal forecasting, and these have very different types of products: for short- and medium-range forecasting, the charts often show the evolution of temperature on a daily basis at a single point, while for seasonal forecasts we usually show the probability of being below normal or above normal; that is the typical chart for seasonal forecasting. S2S forecasting is in between: the typical products also look like those of seasonal forecasts, probabilities of being in the upper or lower tercile, but for shorter averaging periods, weekly or sometimes biweekly. So what is the skill at this time range? Here is an example from the ECMWF model.
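The tercile-probability products described above boil down to counting ensemble members. A minimal sketch, with hypothetical member values; in a real system the tercile bounds would come from the model's own reforecast climatology.

```python
import numpy as np

def tercile_probabilities(members, lower, upper):
    """Fraction of ensemble members below / within / above the
    climatological terciles -- the typical S2S probabilistic product."""
    members = np.asarray(members, dtype=float)
    p_below = float(np.mean(members < lower))
    p_above = float(np.mean(members > upper))
    return p_below, 1.0 - p_below - p_above, p_above

# Hypothetical weekly-mean temperature anomalies from a 10-member
# ensemble; the bounds +/-0.43 stand in for climatological terciles.
members = [0.8, 1.2, -0.3, 0.5, 2.1, 0.9, 1.5, -1.0, 0.2, 1.1]
p_b, p_n, p_a = tercile_probabilities(members, -0.43, 0.43)
print(round(p_b, 2), round(p_n, 2), round(p_a, 2))  # -> 0.1 0.2 0.7
```

So this hypothetical forecast would be shown as a 70% probability of an above-normal week.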
Here we look at ROC scores, a type of probability score; I have no time to go into detail. The main point is that the perfect score is 1, and the higher the better: a ROC area larger than 0.5 means that your model outperforms climatology, and less than 0.5 means you are worse than climatology. This shows the ROC area computed from many real-time forecasts from the ECMWF model, covering several years, for various lead times; we see that of course the skill diminishes week by week. The good news is that even at week 4 there is no blue area, which means that we are usually statistically better than climatology. An interesting point is that in the medium range the skill is usually higher in the extratropics than in the tropics: at shorter lead times it is more difficult to predict the weather in the tropics than in the extratropics, due to the difficulty of predicting convection. At seasonal time ranges, on the other hand, the skill is usually concentrated more in the tropics, and the skill in the extratropics is very low, close to climatology on average, although as I said there are windows of opportunity where the skill can be higher. Week-4 forecasts therefore behave much more like seasonal forecasts, while weeks 2 and 3 (days 12-18 and 19-25), where the skill is more uniform, really mark the transition between weather and the longer time scales: here we can see some skill in the tropics as well as in the extratropics. In terms of skill, are we filling the gap between weather and seasonal forecasting? We know that for weather forecasting there has been very steady progress; this shows the evolution of forecast skill since the 1980s, for day 3, day 5, day 7, and day 9.
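The ROC area used here can be computed, for a set of probabilistic forecasts of a binary event, with the rank-sum formulation. A small self-contained sketch with made-up forecast cases:

```python
import numpy as np

def roc_area(p_fcst, event):
    """ROC area via the rank-sum (Mann-Whitney) formulation: the
    probability that a randomly chosen event case received a higher
    forecast probability than a randomly chosen non-event case.
    1 = perfect discrimination, 0.5 = no better than climatology."""
    p_fcst = np.asarray(p_fcst, dtype=float)
    event = np.asarray(event, dtype=bool)
    hits, misses = p_fcst[event], p_fcst[~event]
    greater = np.sum(hits[:, None] > misses[None, :])
    ties = np.sum(hits[:, None] == misses[None, :])   # ties count half
    return (greater + 0.5 * ties) / (hits.size * misses.size)

# Made-up example: forecast probabilities of an upper-tercile event and
# whether the event actually occurred.
p = [0.9, 0.7, 0.6, 0.4, 0.3, 0.1]
occurred = [1, 1, 0, 1, 0, 0]
print(roc_area(p, occurred))  # 8 of the 9 event/non-event pairs are ranked correctly
```

Operational verification packages compute the same quantity from hit and false-alarm rates over probability thresholds; the pairwise form above is just the most transparent way to write it.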
So there has been progress of about one day of predictive skill per decade, which is quite impressive. Seasonal forecasting, on the other hand, has seen very little improvement over the last 20 years. And here is a figure showing, in the same way, the evolution of the skill for week-3 temperature over the northern extratropics over the last 15 years: quite a significant improvement, maybe not as steady as for medium-range forecasting, but significant, even when we verify against observations. So this shows that there has been some progress at this time range, in line with the progress we have seen in weather forecasting, which is another reason why there has been more interest in this time range over recent decades. And the skill can be high enough to produce useful forecasts for some applications. Here is an example for a super typhoon which hit the Philippines in December 2019. This is a medium-range forecast, which of course very clearly shows a strong tropical cyclone strike probability over the Philippines. And here is a week-4 forecast, issued in November 2019, which shows that for this specific case there was already quite a clear signal in the model that there would be a high probability of a tropical cyclone strike over the Philippines. So for civil protection this type of forecast could potentially be useful. It is of course only a single case; we would need much larger statistics. I have about 10 minutes left, so I will just go briefly over the tools available to better understand S2S prediction. Some of those tools are the databases that the WWRP/WCRP S2S project has set up.
The first is the S2S database, which contains the daily real-time forecasts and the reforecasts from 11 operational centres, produced on the same grid and in the same format, with about 80 variables available. This database is hosted at ECMWF and CMA and has been publicly available since 2015. So it is quite a huge amount of data; this plot shows all the data providers, of which there are currently 11. There is another database which we will also make use of over the next two weeks, SubX, which contains the real-time forecasts and reforecasts from several American models. The advantage of SubX is that the data is available in real time, whereas S2S has a three-week embargo, because it is meant to be a research database. These databases can be very useful to identify those windows of opportunity, to assess the skill of the models, and to get a better idea of how usable those forecasts can be for specific applications. I have no time to go into much detail, but this is a description of the 11 models in the S2S database, whose forecast lengths range from about 30 to 60 days. Some of them run on a daily basis with a small ensemble size, while other centres have a different strategy: they run the model much less frequently, on a weekly or twice-weekly basis, but with a much larger ensemble, like ECMWF. We also have the reforecasts. I do not have much time to go into these, just to mention that we produce reforecasts because at this time range model errors start to grow very quickly, and they can have an amplitude as large as the signal we want to predict. So we always produce reforecasts to have an estimate of the model climate, and then we can make a-posteriori corrections to the real-time forecasts.
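The a-posteriori correction based on reforecasts can be sketched as follows; the numbers are synthetic, and the drift is simply estimated as the lead-dependent mean error of the reforecasts against the verifying analysis.

```python
import numpy as np

def debias(real_time_fcst, reforecasts, verifying_analysis):
    """A-posteriori correction sketch: estimate the lead-dependent drift
    as mean(reforecast) - mean(analysis) over the reforecast period,
    then subtract it from the real-time forecast."""
    drift = reforecasts.mean(axis=0) - verifying_analysis.mean(axis=0)
    return np.asarray(real_time_fcst, dtype=float) - drift

# Synthetic numbers: 20 reforecast years x 4 weekly lead times, with a
# drift that grows with lead time, as model error typically does at S2S range.
rng = np.random.default_rng(1)
truth = rng.standard_normal((20, 4))
drift_true = np.array([0.1, 0.5, 1.0, 1.5])
reforecasts = truth + drift_true + 0.1 * rng.standard_normal((20, 4))

corrected = debias([1.0, 1.5, 2.0, 2.5], reforecasts, truth)
print(corrected)  # close to [0.9, 1.0, 1.0, 1.0]: the growing drift is removed
```

Real calibration can be more elaborate (quantile mapping, variance adjustment), but this mean-drift removal is the basic reason every S2S system carries a reforecast set alongside its real-time forecasts.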
The products we show from S2S forecasts usually do not come directly from the model real-time output; we need an a-posteriori correction to remove the systematic errors of the model. Regarding complexity, there have been different strategies: some centres, like the Met Office, use their seasonal forecasting system to produce S2S forecasts, while other centres, like JMA, use their model as an extension of their medium-range forecasts, in which case, as I said, the complexity of the model is lower. Most S2S models have a coupled ocean model, and a majority of them have an interactive sea-ice model, which is quite important, as you will see over the next two weeks: ocean coupling matters at this time range, and sea ice can also play an important role. These databases can also be useful for assessing the performance of the models in specific case studies, for example the Russian heat wave. This panel shows the two-metre temperature anomalies over the week of 1-7 August 2010: there was an anomaly of more than eight degrees, which is enormous for a weekly mean. And these are forecasts from about three weeks ahead from various models, showing the ensemble means; some of them captured quite a lot of the very strong anomaly, so this event had some predictability three weeks in advance. Finally, the S2S databases are also quite useful tools to assess the benefit of combining models. This plot shows the skill of a multi-model based on three models, showing that after two weeks the multi-model is significantly more skilful than the best single model; this is for precipitation.
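The simplest way to combine models, as in the multi-model results above, is to pool all members into one grand ensemble. A sketch with hypothetical member values:

```python
import numpy as np

def multimodel_probability(ensembles, threshold):
    """Pool the members of several models into one grand ensemble and
    compute an exceedance probability. Pooling is the simplest
    multi-model combination: partially independent model errors tend to
    compensate, which is why the multi-model often beats its best
    contributing model."""
    pooled = np.concatenate([np.asarray(e, dtype=float) for e in ensembles])
    return float(np.mean(pooled > threshold))

# Hypothetical week-3 precipitation anomalies (mm/day) from three models
# with different ensemble sizes.
model_a = [0.2, 0.5, -0.1, 0.8]
model_b = [0.4, 0.6, 0.1]
model_c = [-0.2, 0.3, 0.7, 0.9, 0.5]
print(multimodel_probability([model_a, model_b, model_c], 0.0))
```

More sophisticated combinations weight each model by its reforecast skill, but even this equal-weight pooling already captures much of the multi-model benefit.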
So one big challenge is to better understand the sources of predictability. We have identified some of them, but some may not yet have been identified, and a question coming up more frequently now is how the sources of predictability interact, for which I think we do not yet have a very clear picture, and how they interact with other time scales: for example, a recent paper showed that the impact of the MJO is not the same during El Niño and La Niña. Decadal variability must also have some impact on the sources of predictability and on the predictive skill at S2S time scales, as must global warming. Another important issue, in terms of modelling, is that predictability at this time range often comes from remote regions, so for a good S2S forecast you need three things: a good prediction of the source of predictability itself, let's say the MJO; a good representation of the teleconnections, that is, the impact from one region on another, for example how the MJO will impact the NAO, so this pathway must be correctly represented; and a good simulation of the impact, for instance the impact of the NAO over Europe. So it is a long chain of events, many things can go wrong at each step, and currently most models have a problem in at least one if not all of them. Model errors, and the initialisation shock, are really a big problem: some of those errors are linear, and so easier to remove, but some are nonlinear, which is much more problematic. Model complexity, as I said, is needed for S2S prediction, but an issue is that new components introduce new biases: for example, if you add a coupled ocean, you may improve the model in some sense, because you have a better representation of processes.
But you may introduce biases that can be very detrimental: for example, the location of the western boundary currents in coupled systems; the Gulf Stream is usually misplaced, and this can have an impact on the pathways of the teleconnections. And finally, there is a lot of ongoing work to understand how valuable these forecasts really are for end users. We have a general idea of the skill, but is that really useful for a farmer, or for ship routing? Can it really be used operationally by applications, and what are the main obstacles to such use? Then there are some future directions. One is ensemble size: there is a feeling now that the current ensemble sizes are not large enough to give a proper estimate of the probability distribution function, and some centres, like ECMWF, are planning to have 100 members instead of 51 to get a more accurate estimate of it. Coupled data assimilation, which was mentioned earlier, is also quite an important new avenue, to get more consistent initial conditions and to reduce the initialisation shock. Increasing model resolution towards the kilometre scale may also help to reduce errors: a lot of errors are due to model parameterisations, for example cloud parameterisations, and when we go to much finer scales the models start to resolve those processes directly. And, as was mentioned earlier, machine learning can be used for model improvement, for data assimilation, for parameterisations, but also for attribution and for calibration. I will stop here, just to say that subseasonal prediction is still in its infancy, with a very short history so far. It fills a gap between weather and climate forecasting, and its predictability comes from the atmospheric initial conditions as well as from the boundary conditions, so it is a mixture of a weather and a climate problem.
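The motivation for larger ensembles can be quantified with a back-of-the-envelope formula: assuming independent members, the sampling error of a probability estimated from an n-member ensemble is sqrt(p(1-p)/n). A quick sketch for a climatological tercile probability (p = 1/3):

```python
import math

def prob_sampling_error(p, n_members):
    """Standard error of a probability estimated from an n-member
    ensemble, assuming independent members: sqrt(p * (1 - p) / n)."""
    return math.sqrt(p * (1.0 - p) / n_members)

# Sampling error of a tercile probability (p = 1/3) for typical
# S2S ensemble sizes.
for n in (11, 51, 100):
    print(n, round(prob_sampling_error(1.0 / 3.0, n), 3))
```

Going from 51 to 100 members thus shrinks the sampling noise on a tercile probability from roughly 7 to 5 percentage points, which matters when the predictable signal itself is only of that order.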
S2S predictability is not constant in time: it depends strongly on the occurrence of sources of predictability. Forecast skill for weeks 3 and 4 is generally low on average, but models have improved over the past decades, and multi-model ensembles can produce more skilful forecasts, particularly for precipitation. And finally, databases such as SubX and S2S are valuable resources to evaluate the sources of predictability, to better understand the S2S models, and to assess the potential benefits and limitations of the use of S2S forecasts.