Yeah, thank you. We'll move to our next lecture, which will be given by Jadwiga (Yaga) Richter. Yaga is a scientist in the Climate and Global Dynamics Laboratory at NCAR, and she is also a member of the scientific organizing committee for this ASP colloquium. Yaga graduated from the University of Washington, studying gravity waves and their impacts on the climate system. She has been at NCAR ever since: she joined NCAR as an ASP postdoctoral fellow and has spent the majority of her scientific career there, with a brief stint as an education and outreach fellow at the University of Colorado. Yaga co-leads the Earth System Prediction Working Group for the CESM modeling system. Her expertise and scientific interests include gravity waves and their parameterizations in global climate models, middle-atmosphere dynamics, the QBO, whole-atmosphere climate modeling, S2S forecasting, and geoengineering. Yaga developed the first source-spectrum parameterization of convectively generated gravity waves in a climate model, and she has contributed to development of the NCAR Community Atmosphere Model and is currently contributing to DOE's Earth system model. Thank you, Yaga, for accepting our invitation to give a lecture.

Thank you for having me, and thanks, Amy, for giving a great introduction to the stratosphere. I will try not to repeat anything that Amy has said; I'll focus on the QBO, the quasi-biennial oscillation, and then on the experiments we've done with NCAR's models to try to address the role of the stratosphere in S2S prediction. I have one introductory slide summarizing what Amy talked about. Amy focused on the stratospheric polar vortex, and I will spend the introduction talking to you about the QBO, the quasi-biennial oscillation. This is an oscillation that is most visible in the stratospheric zonal-mean wind.
What I have here on the right is a time series from observations, and you see these alternating easterly and westerly winds in the tropics. You see that this oscillation is pretty regular, and regular also means very predictable. The period of the QBO ranges from about 24 to 32 months, so it is not exactly 28 months all the time, but as you will see later, the predictability of this oscillation is very good compared to other features in the stratosphere, for example sudden warmings, which can only be predicted about two weeks ahead of time.

Why do we care about the QBO? Because it has teleconnections to the troposphere. It also has impacts within the stratosphere, but for the purpose of this talk I'll focus on how it affects the troposphere, and there are three routes, shown in this nice summary figure from 2018. First there is the polar route: when you have alternating easterly and westerly winds in the stratosphere, they change the propagation of planetary-scale waves into the polar vortex. This is what we call the Holton-Tan mechanism: a stronger polar vortex occurs mostly under QBO West, where by QBO West we mean that at either 30 or 50 hPa the winds are westerly. When the winds are easterly at those altitudes, so around 30 to 50 hPa, the polar vortex tends to be weaker. And as you heard from Amy, this has implications for the predictability of the NAO. Then there is the tropical route. The oscillating winds in the lower stratosphere are associated with changes in temperature, and there is also an induced meridional circulation, and what we believe is that these changes in temperature near the tropopause affect the MJO and deep convection. And then there is the subtropical route.
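The QBO West / QBO East compositing used throughout the talk reduces to a sign test on the equatorial zonal-mean wind at a reference level. Here is a minimal sketch of that classification; the function name, the synthetic sine-wave "QBO," and the zero threshold are my own illustrative choices (some studies add a minimum-amplitude criterion), not the speaker's code.

```python
import numpy as np

def qbo_phase(u50, threshold=0.0):
    """Classify QBO phase from the 50 hPa (or 30 hPa) equatorial
    zonal-mean zonal wind in m/s: 'W' where westerly, 'E' otherwise."""
    u50 = np.asarray(u50, dtype=float)
    return np.where(u50 > threshold, "W", "E")

# Idealized QBO: a ~28-month sinusoid sampled monthly over 10 years.
months = np.arange(120)
u50 = 15.0 * np.sin(2 * np.pi * months / 28.0)
phases = qbo_phase(u50)
```

In practice one would feed in observed or reanalysis winds (e.g. a DJF mean per winter) rather than this idealized series.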
Again, when we change the meridional circulation, we change the medium through which the planetary-scale waves propagate, and then we can affect the subtropical jet.

So I'll talk a little bit about what we know about the QBO-NAO connection, and most of this evidence has been on seasonal timescales. Again, the QBO is very well predicted. The red line here shows the analysis over a few years, and the blue line shows the model prediction of the QBO from the UK Met Office system, and you see this is quite a good prediction: more or less they are getting the right phase of the QBO. There are composites on the right here of DJF sea level pressure, with the forecast on the left and the observations on the right. What you particularly see is this area of high sea level pressure just west of Europe, and this is captured in the forecast; however, the amplitude of the pattern is much weaker than observed. Here is another picture, from a study by Scaife et al. 2014. Again we are looking at anomalies of sea level pressure. On the left is the observational analysis showing this negative NAO pattern, and the seasonal prediction system does show a similar pattern, but much weaker in amplitude. A lot of the UK Met Office studies have really highlighted that, for seasonal prediction, the QBO has quite an influence at the surface.

Now for the QBO-MJO connection, the story has been a little more complicated. In observations there is a very strong relationship between the QBO and the MJO. What I'm showing in this top panel is the standard deviation of MJO-filtered OLR anomalies for all winters, focusing on DJF. Then you pick out the West QBO winters, that is, winters for which the 50 hPa winds are westerly.
It turns out that the MJO is then far less active, while for easterly QBO winters the MJO is more active. What these observational studies have found is that MJO convection is stronger, more organized, and propagates more slowly during easterly QBO. This is one of the things we don't fully understand, but the hypothesis is that the influence from the QBO comes via the temperature perturbations near the tropopause and in the upper troposphere.

Here is another QBO-MJO relationship, this one in S2S studies, by Abhik and Hendon, based on two Australian models, ACCESS-S1 and POAMA-2. They do find enhanced predictability of the MJO under QBO East conditions. In particular, you can see in these right panels, for strong MJO events, that under QBO East the predictability of the MJO extends out to 31 days in the ACCESS model and out to 30 days in POAMA-2, while it is only 10 days for QBO West in POAMA-2 and only 21 days in the ACCESS model. So in this modeling system there is quite a robust relationship in terms of enhanced predictability. However, from the same study, if you look at their comparison of the RMM percent amplitude difference between QBO East and QBO West in observations versus the models, you see that the amplitudes are underestimated: the red lines here are the models and the gray lines are observations. So the relationship is sort of there in their models, but it is much weaker than what we see in observations.

Hyemi Kim also took a look at this in other S2S models, examining the relationship of East QBO and West QBO to the RMM skill of the MJO. The top here is averaged over the entire season from October through March, and the bottom is for DJF only. Some of these are US subseasonal models, and there is also ECMWF. The first thing you see is that if you look at the entire season from October to March, the relationship is not even the same as in DJF.
For ECMWF, using an RMM skill threshold of 0.5, you get greater skill in East QBO than in West QBO, but the opposite if you look at the entire season. For some models there is a significant relationship: these green triangles indicate statistical significance, and you do see that West QBO gives a little higher skill than East QBO, though in that model the relationship is not significant in DJF. So again, the message is that there is a relationship, but it is mostly insignificant or somehow not the same as what we see in observations.

Okay, so I'm going to move on to the NCAR models and the stratosphere, and how we've addressed some of these questions, trying to find out what the role of the stratosphere is. NCAR is primarily a research organization, so we really focus on understanding the sources of predictability, and the main question we ask is how much improvement in S2S prediction can be gained from including a well-resolved stratosphere. NCAR's S2S systems are based on our Earth system model used for climate prediction. This model was not developed for S2S; it was developed for long-timescale simulations, the simulations that basically feed the IPCC projections. Between 2016 and 2020 we were looking at S2S with CESM1, the Community Earth System Model version 1 that was developed for the CMIP5 contribution to AR5. Since 2020 we have been using CESM2, and CESM2 with the whole-atmosphere model WACCM as the atmosphere component; I'll explain that in a little bit. That is the model being used for AR6.

Okay, so let's start with CESM1 and what we've done to isolate the role of the stratosphere. Our system includes the atmosphere and interactive land, ocean, and sea ice. Our basic CMIP-type model is a 30-layer model, which has a very poor representation of the stratosphere: the top is at about 2 hPa and there are very few levels in the stratosphere. That configuration does not have a QBO.
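The "RMM skill of 0.5" mentioned above refers to the standard MJO skill metric: the bivariate correlation between observed and forecast (RMM1, RMM2) index pairs, with the skill horizon usually taken as the lead time where it drops below 0.5. Here is a minimal sketch of that metric; the function name and the synthetic test series are my own.

```python
import numpy as np

def rmm_bivariate_correlation(rmm_obs, rmm_fc):
    """Bivariate correlation between observed and forecast RMM indices.

    rmm_obs, rmm_fc: arrays of shape (n_times, 2) holding (RMM1, RMM2)
    for a fixed forecast lead time.
    """
    a = np.asarray(rmm_obs, float)
    f = np.asarray(rmm_fc, float)
    num = np.sum(a[:, 0] * f[:, 0] + a[:, 1] * f[:, 1])
    den = np.sqrt(np.sum(a ** 2)) * np.sqrt(np.sum(f ** 2))
    return num / den

# A perfect forecast gives a correlation of exactly 1.0.
t = np.linspace(0.0, 6.0, 50)
obs = np.column_stack([np.sin(t), np.cos(t)])
```

Evaluating this at each lead day and finding where the curve crosses 0.5 reproduces the "skill out to N days" numbers quoted for the Australian systems.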
It also has a relatively poor representation of sudden stratospheric warmings, and the tropical winds are always easterly. We compare it to a model that is 50% more expensive, and keep that in mind, because it has 46 vertical levels. With the stratosphere actually resolved, we now have a QBO. It's not perfect: if you compare to observations there are deficiencies, for example it doesn't quite extend down to 100 hPa. But compared to what the low-top model has, it is significantly improved. I'm not showing this, but this model also has a much better climatology of sudden stratospheric warmings, and we know from Amy's work that that is really important for the coupling to the troposphere. So my expectation was that this model would outperform the basic model for subseasonal prediction.

We reran the entire SubX protocol with both systems: hindcasts from 1999 to 2015, 10 ensemble members, 45-day runs. We initialize from ERA-Interim, and we also initialize the land and ocean. So there is nothing different between these models in the troposphere; it is really an apples-to-apples comparison. You cannot say that the model physics was different and that is why the skill differs: really the only difference between these systems is initially in the stratosphere. So we compared stratospheric prediction between the systems, and as expected, for the QBO for example, the 46-level model does a lot better, especially during the QBO West phase, which the low-top model doesn't capture at all. However, note the scale on these plots: the correlation with observations is still very high, even at the end of the forecast period. The low-top model still has a correlation skill of about 0.87, which is really quite good.
Similarly for the polar vortex: we look at the anomaly correlation coefficient of the 10 hPa winds at 60°N, and as expected, the higher-top model with the better stratosphere predicts the polar vortex better, and the 30-level model a little worse. I don't have a figure, but there was no notable difference in prediction of SSWs, mainly because SSWs are rare: in this roughly 20-year record I think there are only 11 events, and our hindcasts were always started on Wednesdays, so it is also hard to get a clear picture because we are not starting the same number of days before each sudden warming. Nevertheless, from the statistics we had, we did not note a different skill in prediction of sudden warmings between these two systems.

Okay, so we have better stratospheric predictability; how much difference does it make to surface prediction? The answer, sadly, is very little. The good news is that the performance of our model is very good. What I'm showing here is surface temperature skill for different seasons, DJF and so on. These are the two NCAR systems, compared to other SubX models: here is NOAA's CFSv2, and there are FIM, GEOS, and other models. What you see is that overall the prediction skill of the NCAR models is quite high, which indicates that we are doing something right and that it is a good modeling system for S2S. But if we compare the bars for the 30-level model and the 46-level model, there are very small differences between them, and they are not statistically significant. So the overall prediction skill from the system that costs 50% more, because that is what we are paying for the stratosphere, is not translating into an improvement at the surface. And here is another skill-score diagram for the 46-level and 30-level models; what these bars show is the percent chance that the model will have a more skillful forecast than CFSv2, based on the Brier skill score.
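The Brier skill score used in that comparison is a standard probabilistic verification measure: the mean squared error of event probabilities against 0/1 outcomes, referenced to a competing forecast. A minimal sketch, with illustrative function names and toy numbers of my own:

```python
import numpy as np

def brier_score(prob, outcome):
    """Mean squared error of event probabilities against binary outcomes."""
    prob = np.asarray(prob, float)
    outcome = np.asarray(outcome, float)
    return np.mean((prob - outcome) ** 2)

def brier_skill_score(prob, prob_ref, outcome):
    """BSS = 1 - BS / BS_ref: positive when the forecast beats the
    reference (here, a competing model such as CFSv2), 1 when perfect."""
    return 1.0 - brier_score(prob, outcome) / brier_score(prob_ref, outcome)

# A sharper forecast beats a hedged 50/50 reference on these outcomes.
bss = brier_skill_score([0.9, 0.2, 0.8], [0.5, 0.5, 0.5], [1, 0, 1])
```

Repeating this comparison over many forecast cases gives the "percent chance of a more skillful forecast" bars described in the talk.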
And yes, there is slightly better skill from the 46-level model, but again, when we do any statistical analysis it does not show up as significantly better. What we do find is that if you combine all the models and use the skill from the whole multi-model system, you get a much better forecast: if you include the information from all of the models, you have a 78% chance of being more skillful than using one single modeling system, in particular CFSv2. And you may say, well, what about the NAO and what about the MJO, because those are the two things most likely to be affected by the stratosphere. Again, in our study we did not find any significant difference: the 30-level and the 46-level models have the same NAO predictability and the same MJO predictability. This was a little disappointing, but that is part of research, and it does not mean that the stratosphere is not important. I will come back in a little bit to why I think we are not seeing this difference at the surface; again, it is not because the stratosphere is unimportant.

So we move on to CESM2 and CESM2(WACCM). We did a similar study with our latest model. CESM2 is again a low-top model; we now have 32 levels, but the lid is in the same place, at about 2 hPa, and it has no interactive chemistry, so it is a very fast model compared to CESM2(WACCM). WACCM stands for the Whole Atmosphere Community Climate Model, and this is the most complicated model that I think is being run for S2S prediction: it has a top at 140 kilometers, it includes parameterizations of non-orographic gravity waves, and in addition it has fully interactive tropospheric and stratospheric chemistry, which is where a lot of the cost of the model is. The computational cost of this model is eight times the cost of CESM2; not just 50% more expensive, this is eight times.
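The multi-model gain described above comes from simple pooling: averaging forecasts from several systems cancels their uncorrelated errors. Here is a toy sketch of an equal-weight multi-model mean; the model names, the RMSE helper, and the opposite-bias example are illustrative assumptions, not the actual SubX combination method.

```python
import numpy as np

def multimodel_mean(forecasts):
    """Equal-weight average of anomaly forecasts from several models.

    forecasts: dict of model name -> forecast array (same shape each).
    """
    return np.mean(np.stack(list(forecasts.values())), axis=0)

def rmse(fc, truth):
    return float(np.sqrt(np.mean((np.asarray(fc) - np.asarray(truth)) ** 2)))

# Toy example: two models with opposite biases around a known truth;
# their average removes the bias entirely.
truth = np.array([0.0, 1.0, 2.0, 3.0])
fcs = {"modelA": truth + 1.0, "modelB": truth - 1.0}
combined = multimodel_mean(fcs)
```

Real multi-model combination may weight systems by skill or pool full ensembles, but the error-cancellation principle is the same.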
That is part of the reason we can only run the hindcasts from October to March, and we can only run five ensemble members: the cost is simply prohibitive. We looked again at the skill of this model compared to our low-top model and compared to the previous versions of the model. In parentheses here you see the number of ensemble members that we compare, because with WACCM we are only able to run five members, but we know that with 10 members we should get higher skill; for example, the green bars are for CESM1, and you see that with 10 members you get a little higher skill than with five. If you look at all the bars that include five members, the pink, blue, and light green, for any week for surface temperature, you see that the skill is very similar. The same goes for precipitation on the bottom right: there are small differences, but none of them are statistically significant. So more or less all of these systems are giving you the same surface prediction skill. Here is a more detailed view of that for DJF 2-meter temperature, so you can see whether there are any regional differences, and in particular, in weeks three and four, between WACCM and CESM2 there is virtually no difference. This is partly interesting to us because, as I didn't mention, our WACCM system and the low-top system are not started the same way: one of them is started from CFSv2, the other from MERRA-2 analysis that we nudge toward, and they also have different ocean initializations. So besides the role of the stratosphere, this has also taught us that exactly how we initialize the atmosphere and the ocean is really not making much difference for week three-to-four prediction, at least for surface temperature. In weeks five and six you see a little more difference.
The areas shown here are the ones that are statistically significant, but at this point it is difficult to pin them down to any particular aspect of the model, because there are so many differences and they are rather small. That indicates that there is something inherent in the predictability from the initial conditions that is really driving the skill, and the details, whether we use CFSv2 or MERRA-2 nudging and so on, make a much smaller difference. And again, here are the NAO and the MJO. There is a little more difference in the NAO for the WACCM system: with five ensemble members it has higher skill than the other modeling systems. However, due to the small number of years (hindcasts over about 20 years), these numbers do not show up as significant. More or less we find that we have really good NAO skill out to week three for all of the systems, almost at a correlation of 0.5, but they have essentially the same predictability; and the same goes for the MJO, with very little difference between these model versions.

Now you are probably wondering why there is so little impact from the stratosphere, or at least from better resolving the stratosphere, which is maybe the better way to phrase the question. One reason is that global models do not capture teleconnections very well, and we are using a global model built for climate projections. Here is a multi-model study of the teleconnection of the QBO to the NAO, using a whole set of models that actually have good QBOs. The thick line here is the observed correlation, and the individual bars are the correlations of the QBO with the NAO from the models. You see that only one model even reaches the observed correlation, and some models even have the opposite sign. So the models are bad at capturing this relationship. They are even worse at capturing the connection of the QBO to the MJO.
Here is another study, by Kim et al. Similarly, the observed QBO East minus QBO West line is here; this is where observations fall. Every circle here is an ensemble member of one of these models, so for example for E3SM-1.0 there are only five members. What you see is that the correlation in these models does not even come close to what we see in observations. So most of the CMIP-type global models are simply not capturing the QBO-MJO teleconnection, and it is difficult to expect that they will capture it really well when run as a subseasonal-to-seasonal prediction model.

And here is the other reason I think we are not seeing these huge differences on subseasonal timescales: the stratospheric initial conditions basically hold on to their initial state. For example, look at the zonal-mean wind between 10°S and 10°N in observations, in the 46-level model, and in the 30-level model over the forecast period. Yes, the low-top model degrades a lot more than the 46-level model, but more or less the phase of the QBO is kept, and you are probably getting the bulk of its effect. Now if you took this out to a seasonal timescale, I don't think it would be the same situation: you can see, especially in the QBO West phase, that the 30-level model would have really poor predictability out at a few months, because it is rapidly declining. But if you initialize a model with a poor stratosphere with a particular phase of the QBO, it actually holds on to it fairly well through this short subseasonal timescale. And here is a figure from Lim et al. 2019 showing QBO correlation at one month. You see that some of the models with good QBOs, for example UKMO, have a correlation of 1.0 there, which is super high, and the model with the worst stratosphere, BoM, still has a pretty high correlation of about 0.85.
So I think the message is that on this subseasonal timescale, if you initialize a model even with a poor stratosphere, it is going to hold on to that initial condition, and that is why we are not seeing drastic differences. That does not mean the stratosphere is not important; it actually is. In a study led by Lantao Sun from CSU, we combined our CESM1 ensembles from the 30-level and the 46-level models to create a 20-member ensemble, and then we looked at NAO predictability from different stratospheric vortex states. Here is the initial zonal-mean wind at 10 hPa, 60°N; you see the vortex is highly variable, and we separated it into strong vortex states, weak vortex states, and states where the vortex was neutral. Here we start to see how the stratosphere matters: especially in weeks five and six, there is much better predictability from weak and strong vortex events than from neutral vortex events. However, there is no difference between the low-top and the high-top model, because they both seem to represent this similarly.

Alright, research always brings surprises, and if I have time, I'm going to give you one more example of this. Amy mentioned a lot about sudden warmings, and she showed you how they cause surface anomalies about one month later: a cooling pattern over Eurasia and a warming pattern over the northeastern United States. And we happened to have a recent sudden stratospheric warming, on January 5, 2021. We were super excited, because we were running our CESM2(WACCM) system in real time, so we could actually make a forecast and see how it verified. Here is the CPC verification for about four weeks after the event, and you see a pattern very similar to the one in Amy's plot: a lot of cooling over Eurasia and a little warming over the eastern United States. So we were excited to see how WACCM did for this event.
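The vortex-state compositing described above splits the initial 10 hPa, 60°N wind into weak, neutral, and strong categories. A minimal sketch, using terciles of the sample as an illustrative split; the function name and the threshold choice are my own assumptions (the actual study's thresholds may differ).

```python
import numpy as np

def vortex_state(u10_60n, lower=None, upper=None):
    """Classify the initial polar-vortex state from the 10 hPa, 60N
    zonal-mean wind (m/s). If no thresholds are given, terciles of the
    input sample are used as an illustrative weak/neutral/strong split."""
    u = np.asarray(u10_60n, float)
    if lower is None or upper is None:
        lower, upper = np.percentile(u, [33.3, 66.7])
    states = np.full(u.shape, "neutral", dtype=object)
    states[u <= lower] = "weak"
    states[u >= upper] = "strong"
    return states

# Example: a reversed (easterly) vortex, an average one, a very strong one.
labels = vortex_state([-10.0, 20.0, 50.0], lower=0.0, upper=40.0)
```

The hindcast NAO skill would then be computed separately within each category to compare weak/strong starts against neutral ones.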
It was actually one of the two models that did really well. This is the squared correlation coefficient over Eurasia against the CPC verification, and out of all the SubX models, ECCC GEM and the NCAR system seemed to perform really well: they both show this cooling over Eurasia, and here you have warming over the Labrador Sea, which is what we were seeing. So we said, oh my gosh, this is great, we are finally seeing an impact of a sudden stratospheric warming, and this must be from the sudden stratospheric warming, right? The pattern is almost exactly the same. However, we took our research system and decided to take out the sudden warming. How did we do that? Because to start our model we nudge the stratosphere toward MERRA-2 reanalysis, we just backed up two weeks and, instead of nudging the stratosphere to observations, let it relax toward climatology. Basically we scrambled the stratosphere while keeping the realistic troposphere, so in these simulations there was no sudden stratospheric warming. We also scrambled the troposphere and ran other configurations, but I'll focus on this case. Here is the regular forecast, and the bottom panel is the scrambled-stratosphere run: there is no sudden warming, and we kept only the initial state of the troposphere. We see that the correlation between these two forecasts is very high, and the correlation of the scrambled-stratosphere forecast with the observations is also very close. What we showed for weeks three and four (we did not look closely at weeks one and two or the weeks beyond) is that even if the warming had not happened, we would have predicted the same surface pattern. So perhaps the stratosphere contributed a little, but not nearly as much as we thought, to this pattern.
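The "scrambling" experiment above relies on nudging, i.e. Newtonian relaxation of the model state toward a target field. Here is a toy single-variable sketch of one explicit relaxation step; the function name, the time step, and the 5-day relaxation timescale are illustrative assumptions, not the actual CESM2(WACCM) configuration.

```python
import numpy as np  # not strictly needed here; kept for consistency with the other sketches

def nudge_step(state, target, dt, tau):
    """One explicit time step of Newtonian relaxation (nudging):
        du/dt = -(u - target) / tau
    Nudging toward reanalysis keeps the model stratosphere realistic;
    nudging toward climatology instead removes the sudden warming
    while leaving the troposphere untouched."""
    return state + dt * (target - state) / tau

# Relax an SSW-like wind anomaly toward a climatological target of 0 m/s.
u = 30.0  # m/s, illustrative anomaly magnitude
for _ in range(100):  # 100 hourly steps with a 5-day relaxation timescale
    u = nudge_step(u, 0.0, dt=3600.0, tau=5 * 86400.0)
```

In the full model the same relaxation term is applied gridpoint-by-gridpoint over the stratospheric levels only, which is what lets the experiment swap the stratosphere while keeping the troposphere's initial state.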
That brings me to the last slide: we have developed this S2S system with CESM2 to enable the wider community to do these experiments, and other similar experiments, to really get at the sources of predictability. Since April 2020 we have had the Earth System Prediction Working Group within CESM, and again the goal is to advance fundamental understanding of these sources of predictability. CESM is a community model, so you are welcome to download it; we still do not have the initial conditions available for general use, but if you ask, we can get them to you. We already have the basic hindcast sets. We also have SMYLE, the Seasonal-to-Multiyear Large Ensemble that Steve Yeager has been leading: those are two-year predictions with four starts per year, so we can look at the longer timescales. And there is also an extension of SMYLE in production, extending those forecasts to 10 years from the November start dates. And here I'll take any questions you might have.

Thank you, Yaga, for this really comprehensive overview of the influence of the QBO, or maybe the lack of influence of the stratosphere, on short timescales.