Welcome to the second-to-last day of the S2S workshop associated with the 2021 ASP summer colloquium. It is my pleasure to introduce Falko Judt as the first speaker. Falko's interests reach from predictability and dynamics of high-impact weather to tropical meteorology, tropical cyclones, NWP, and air-sea interactions. He's currently involved in a field campaign investigating hurricanes, and he's done some really interesting work on predictability using high-resolution models. I'm excited to see what he will be presenting today. Welcome, Falko. Thank you, Yubit. Yeah, and thanks to the team for inviting me to give a talk here. I hope you can see my screen now. Looks like you can. Yeah, so I'm actually not an S2S guy, I'm more of a weather person. So my talk about predictability today may be a little bit more geared towards weather, but we'll see later that in the tropics, weather and S2S cannot really be separated; there's a continuum. But first and foremost, I also want to showcase what we can do with the next generation of global models, which are high-resolution, convection-permitting global models: what we can really do, and how these models can improve weather forecasting and climate projections. To whet your appetite for these models, I want to show this animation. This is actually a 2.5-kilometer global model. When you look at it, you will probably say, well, that looks like a satellite image, right? Or a satellite loop. We have a hurricane spinning here in the Atlantic, and fairly reasonable-looking tropical convection and other features. So this new class of global models really is simulating the atmosphere more closely to the real thing than we've ever been able to do before. But one of the questions we haven't really answered is: how far into the future can we predict weather?
And by weather here I mean it more generally: not just your day-to-day weather, but also sub-seasonal to seasonal, or even the longer time scales of the whole climate system. So how far into the future can we really predict the atmosphere, or the whole Earth system? Well, that's a question that's been with us for a long time, and I want to give a brief historical perspective, because for that we go back a few decades to the advent of numerical weather prediction. The first computers came along in the 1950s and 60s, and it was a really optimistic time in terms of what technology could achieve. They thought we'd all be driving nuclear-powered cars by today. And they were wrong in some other ways too, about what they thought we could do with weather prediction. So very early on, when the first computers came along and they ran the first weather and climate models, they thought: well, our atmosphere is a deterministic system, and a deterministic system really means that the present determines the future. Now, we all know we don't know the present perfectly, but they thought: if we approximately know the present, then the approximate present approximately determines the future. That was the mindset of most scientists in those days. So they thought: our computers get better, our initial conditions get better, and then forecasting the weather is like forecasting the position of the planets. I show this with the graphic here in the lower left, where we have a hypothetical forecast in blue (let's say that's the temperature in Boulder) and the observation in black. That's what they thought we could do: we start out maybe slightly off, but in the grand scheme of things, the observation will not deviate too much from the forecast trajectory. Well, now we know that's not really true, because the atmosphere is a chaotic system.
So what does that mean? Well, we're still dealing with a deterministic system, but once chaos is involved, the approximate present does not approximately determine the future anymore. Something goes haywire, and even a deterministic system appears like a random system. On the right there, you see the Lorenz butterfly, which is the poster child of chaotic systems. And on the left are two time series again; this is what we see in a chaotic system, with a forecast in blue and an observation, taken after the fact, in black. Early on they're quite close, but at some point they're completely off; in the latter part of this time series there is no agreement between the two curves anymore. Lorenz, who discovered all of this, was probably one of the most influential atmospheric scientists who ever lived. Nowadays there is a scientific consensus that the atmosphere is a chaotic system with limited predictability. But we're still trying to figure out what the limits of predictability are. It's a little bit controversial, we haven't really found them yet, and it's still an active area of research. So for the remainder of this talk, I want to talk about predictability and error growth, and I'm not sure everyone is familiar with these concepts, so let's step through them. Let's assume we have a forecast here in blue, and we have observations. These could be real observations, but they could also be "fake" observations in the sense of a control weather or climate simulation. These differ, especially later on in the forecast or simulation, and the difference between the two is what we call the error. So the error is a measure of how good the forecast really is.
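The butterfly effect Falko describes can be demonstrated in a few lines with the Lorenz (1963) system itself. This is a minimal sketch, not anything from the talk: two "twin" integrations that differ by an immeasurably small perturbation, whose separation is tracked over time.

```python
import numpy as np

def lorenz_rhs(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) equations."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(s, dt=0.01):
    """One fourth-order Runge-Kutta step."""
    k1 = lorenz_rhs(s)
    k2 = lorenz_rhs(s + 0.5 * dt * k1)
    k3 = lorenz_rhs(s + 0.5 * dt * k2)
    k4 = lorenz_rhs(s + dt * k3)
    return s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

# Identical-twin experiment: same model, same initial condition,
# except for a tiny "butterfly" perturbation in one variable.
control = np.array([1.0, 1.0, 1.0])
perturbed = control + np.array([1e-8, 0.0, 0.0])

errors = []
for _ in range(5000):  # 50 non-dimensional time units
    control = rk4_step(control)
    perturbed = rk4_step(perturbed)
    errors.append(np.linalg.norm(control - perturbed))

# Early on the twins are indistinguishable; by the end the error has
# grown to the size of the attractor itself.
```

Plotting `errors` on a log axis reproduces the classic picture: near-exponential growth followed by saturation once the twins are effectively independent points on the attractor.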
And in the bottom panel here, what you see is that when we average over many instances of forecasts and observations, or many ensemble members, or many grid points, we get a relatively smooth curve, and that is a time series of the error. If you compare the lower and upper panels, it becomes pretty obvious what's shown here: the error is a measure of forecast quality. The error is very important in predictability studies, because the error determines when the predictability limit is reached, and theory tells us that's when the error saturates. So in this graphic, in the bottom (let's call the time axis days; it's non-dimensional time), around time 16, 16 days, the error flattens out. The error has saturated, and we would say: okay, at time 16, that's when we have reached the limit of predictability. Theory also gives us another way to determine the saturation limit, as we call it, because it just happens to be twice the climatological variance. That's really neat, because often our actual error curves aren't as nice as this example, but we can still compute the saturation limit from the climatological variance. So this is the whole concept: we have a saturation limit, we compute the error from forecast and observations, or forecast and control simulation, we track it in time, and we see at what time the error curve reaches the saturation limit. That's how we determine the limit of predictability. During my postdoc, I set out to do exactly this (it's a fairly simple concept) with a model here at NCAR called MPAS, because we really wanted to see the limit of predictability using high-resolution, convection-permitting global simulations. At that time this was still relatively new, but it's becoming more mainstream now.
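The "twice the climatological variance" result can be made intuitive: once predictability is gone, forecast and truth are effectively two independent draws from the same climatology, and the expected squared difference of two independent samples is twice their variance. A quick synthetic check (made-up numbers, nothing from the MPAS runs):

```python
import numpy as np

rng = np.random.default_rng(0)

# A fake "climatology": daily temperatures (K) with some variance.
climatology = rng.normal(loc=288.0, scale=5.0, size=200_000)

# A forecast with zero skill and the verifying truth are just two
# independent draws from that climatology.
forecast = rng.choice(climatology, size=200_000)
truth = rng.choice(climatology, size=200_000)

saturated_error = np.mean((forecast - truth) ** 2)
saturation_limit = 2.0 * climatology.var()

# The ratio is close to 1: the saturated mean-squared error equals
# twice the climatological variance.
ratio = saturated_error / saturation_limit
```

This is why, as the talk notes, the saturation limit can be estimated from climatology alone, even when the simulated error curve is noisy.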
And so I set out and did this identical-twin experiment. We take a control simulation, which we say is reality (it's a fake reality, but it's our control), and run it for 20 days, for the time period indicated there. Then we use the same model and the same initial condition, but sprinkle a little bit of noise into the initial condition. What I mean by that is: take the exact same initial condition, take the exact same model, but add some random stochastic noise to the temperature field of the initial condition, with a mean of zero and a standard deviation of 0.02 Kelvin. This is immeasurable. If you think about it: could we practically determine the difference between these two simulations by going out in reality with a thermometer? No, we could not, because our measurement errors are larger than 0.02 K. So for all intents and purposes, we cannot tell the difference between these two simulations. And as you will likely guess, in the end they will be very different. That's the proverbial butterfly effect: we're adding little butterflies to the initial conditions and then running the perturbed simulation forward. Now, we're not doing this for a whole ensemble, which we probably should to get robust results, but it's just very expensive to run these high-resolution global simulations. And how do we measure predictability? Well, as I explained before, we measure the error and look at its evolution in time. The error metric here is the difference kinetic energy; it's essentially the squared differences of the zonal and meridional wind components between the two simulations. Just a few words about the model: it's called MPAS, the Model for Prediction Across Scales. It's similar to WRF, but on an unstructured mesh. It's global, and you can really use any resolution you'd like; this run has a globally uniform resolution of four kilometers.
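The setup just described, the 0.02 K initial noise and the difference-kinetic-energy metric, can be sketched like this. The array shapes and the 0.5 factor in the DKE definition are my assumptions for illustration; the talk only specifies squared wind differences:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical initial-condition temperature field (lat x lon), in K.
temperature = 280.0 + 10.0 * rng.random((180, 360))

# The perturbed twin: identical except for immeasurably small noise,
# mean 0 and standard deviation 0.02 K, added to the temperature field.
temperature_twin = temperature + rng.normal(0.0, 0.02, temperature.shape)

def difference_kinetic_energy(u1, v1, u2, v2):
    """Domain-mean difference kinetic energy between two simulations:
    0.5 * ((u1 - u2)**2 + (v1 - v2)**2), averaged over the grid."""
    return 0.5 * np.mean((u1 - u2) ** 2 + (v1 - v2) ** 2)

# Identical wind fields give exactly zero error; as the twins diverge,
# the DKE grows toward its saturation value.
u = rng.standard_normal((180, 360))
v = rng.standard_normal((180, 360))
dke_zero = difference_kinetic_energy(u, v, u, v)
```

In the actual experiment this metric would be evaluated at every output time from the two MPAS runs to produce the error curve shown on the next slide.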
And here in the graphic, I just plot the topography field so you can see how detailed this really is. You can see all the details in Colorado; you can almost see your own house if you zoom in. It's really one of the most detailed global simulations that has been used for this kind of purpose. The runs were actually initialized with ERA-Interim data, so the initial conditions come from a low-resolution source, and that's why we have a spin-up of 24 hours before we start the experiment. Anyway, in the end, when we look at the time series of the error, we get this curve starting out close to zero, which is just the initial-condition noise. As we run the simulations forward and measure the difference between them, the error keeps slowly creeping up until about six, seven days, and then it starts increasing faster. The growth is roughly exponential until about two weeks, and then nonlinear effects kick in and the error curve approaches the saturation limit at about 16, 17 days. Now, this is just one realization, so we shouldn't read too much into it, but what it tells us is that the global atmosphere has an intrinsic limit of predictability of about, I would say, two to three weeks. In this one realization the predictability limit is more or less exactly 17 days, but a different initial condition, or more ensemble members, would probably give you a smoother curve here. Anyway, if we think about what this graph really tells us, there are some profound, really interesting things here. I always like to say that each of these simulations costs about $20,000 in compute time, if you convert compute time into money.
So you're running a forecast that costs $20,000, and after 17 days your error is as large as if you had simply looked at climatology, picked a random day, and used that as your forecast. It's amazing how chaos destroys the predictability of the atmosphere, if you think about it that way. But later on I will come to a few points where there's still hope, so don't be too pessimistic. What this figure tells us is that there's a limit to the predictability of the globally averaged atmosphere of about two to three weeks. Most people do these studies with globally averaged data, and I thought, well, that's not really fair, because our atmosphere is very distinct, right? In the tropics we have very different dynamics compared to the midlatitudes and the polar regions, highlighted here on this globe. So I was thinking we should probably look at atmospheric predictability in different climate zones, not just the global average. That's what I did with the same data: instead of computing the globally averaged error evolution, we did it for the climate zones. And this is what we get. To refresh what you've seen before, this is the global average, a little faint here in red. Let's look at the polar regions and the midlatitudes: they look fairly similar to the global average. They're a little bit noisier, probably because we're averaging over a smaller area, but in general they trace what we saw in the global average. I wasn't too surprised by that, because we know the predictability limit for midlatitude weather is about 10 to 14 days, maybe a little bit longer depending on who you ask. But the interesting thing was the tropics. Let me move this away. When we look at the error in the tropics, at least after 10 days, it's actually much smaller than in the extratropics.
I was a little bit surprised by this, because coming from the weather side, we usually have the impression that midlatitude weather is easier to forecast, with longer predictability, than tropical weather. And this graphic tells us: actually, in the tropics you have better predictability. Remember, the predictability limit is reached when the error curve hits the saturation limit, highlighted here as the black dashed curve. At 20 days, which is how long these experiments were, the tropics are not there yet. So that formally tells us that the predictability of the tropics is longer than 20 days, much longer than the midlatitudes, which is only two, three weeks. That was a new, interesting finding, and the question is: why? This is counterintuitive to what we in the weather community have been thinking for a long time. It turns out, I think, that it has to do with equatorial waves, and I'm going to explain this briefly. Equatorial waves are weather phenomena that live in the tropics, propagate along the equator, and bring alternating periods of rainy and dry weather. They come in different flavors: Kelvin waves here on the left, Rossby waves, mixed Rossby-gravity waves, and also inertia-gravity waves. So I set out to see whether this extended predictability in the tropics has to do with equatorial waves. First we define the equatorial waves in the model data, and this shows essentially how we do that. In the background here is a Hovmöller plot, a time-longitude diagram, and anything that's tilted in the filled colors is a propagating signal. You can filter these signals with a wave filter and get the individual components, shown here in the contours. These are Rossby waves... it's stuck at the moment. Well, here it is. This is for Kelvin waves.
And then we can do the same thing for mixed Rossby-gravity waves; I'm going to go through this quickly now. So we see there are propagating signals all over the tropical regions of this model. Now, what I looked at was the error of the waves themselves. What we're seeing here on the left, in the control simulation (our fake reality), is the wind magnitude of the waves, again plotted in a Hovmöller diagram. Actually, these are Rossby waves, I think. Yeah, they're going westward; these are Rossby waves. Then we do the same for the perturbed simulation. If predictability were lost at some point, you would expect these signals to become completely scrambled. But looking at these two figures, they actually look very similar throughout the whole 20 days. And that's indeed the case, because when you look at the error (oops, I went too far) on the right here, that's the difference between the two fields, and it's fairly faint throughout the first 10 days. Later on there is some error, but its magnitude is still less than the wave magnitude in the two simulations. That's a sign that there is predictability, because the error is not as large as the signal itself. We can do the same for Kelvin waves: the control and perturbed simulations look very similar and the error is small. If we were looking at midlatitude phenomena, you would see a much larger error here. And this is for mixed Rossby-gravity waves: again, control and perturbed simulations are similar throughout almost 20 days, even though chaos is supposed to be at work. For some reason, the tropics behave differently than we thought; maybe it's linear wave theory at play. And if we again look at the error evolution in the tropics and overlay the error curves from the respective waves (we just take squared differences of the wind field), they actually kind of match what we saw.
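The wave filtering used here is typically done in wavenumber-frequency space, in the style of Wheeler and Kiladis (1999): transform the field in longitude and time, keep only the part of the spectrum associated with a given wave type, and transform back. A crude sketch, assuming a simple rectangular Kelvin band; the exact band limits, the dispersion-curve masking of the real filter, and the FFT sign convention for "eastward" are my assumptions:

```python
import numpy as np

def filter_kelvin_band(field, dt_days=1.0):
    """Crude space-time filter: keep eastward-propagating signals with
    zonal wavenumbers 1-14 and periods of 2.5-20 days.
    `field` has shape (ntime, nlon); band limits are illustrative."""
    nt, nx = field.shape
    spec = np.fft.fft2(field)                  # transform over (time, lon)
    freq = np.fft.fftfreq(nt, d=dt_days)       # cycles per day
    wnum = np.fft.fftfreq(nx, d=1.0 / nx)      # integer zonal wavenumber

    f2d, k2d = np.meshgrid(freq, wnum, indexing="ij")
    # Eastward propagation: frequency and wavenumber of opposite sign
    # under numpy's FFT sign convention (an assumption of this sketch).
    eastward = (f2d * k2d) < 0
    band = ((np.abs(k2d) >= 1) & (np.abs(k2d) <= 14)
            & (np.abs(f2d) >= 1 / 20) & (np.abs(f2d) <= 1 / 2.5))

    spec[~(eastward & band)] = 0.0             # zero everything else
    return np.real(np.fft.ifft2(spec))

# A constant field has all its power at wavenumber 0 / frequency 0,
# so the filtered result is (numerically) zero.
filtered = filter_kelvin_band(np.ones((64, 128)))
```

Westward-moving Rossby and mixed Rossby-gravity waves would use the opposite propagation mask and their own wavenumber-period bands.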
So that's my hand-wavy argument that predictability in the tropics comes from equatorial waves. Then the question is: why are forecasts in the tropics so bad, right? Right now we do much worse in the tropics than in the midlatitudes, and this shows you what I mean. On the left is a Hovmöller diagram of observed precipitation; we see nicely propagating bands of precip. In the middle and on the right are the GFS and IFS models, and they just don't produce waves. The precipitation is anchored to certain longitudes and doesn't look anything like the observed precipitation on the left. I think what's really the matter here is cumulus parameterization. So for MPAS: we have observations here on the left; this is from a study that just came out in GRL, and I'm going to step through it quickly. We ran MPAS at different resolutions. At 480 kilometers, really coarse, it doesn't get the propagating waves. 240 kilometers still doesn't get them. 120 kilometers, which is about what today's climate models can do, barely gets the propagating signal. 60 kilometers, still not. Even at 30 kilometers there are only hints of propagating signals. 15 kilometers, maybe a little better, but you still see the precipitation is really stationary. Now, things change when we go to seven and a half kilometers; that's when the cumulus parameterization turns off. Suddenly you get these propagating wave packets, and they become even better at 3.75 kilometers. So I think this graphic nicely shows what's wrong with our current models: parameterized convection. For some reason, we're not getting the propagating waves if we run the model with parameterized convection. As soon as the resolution is high enough that the parameterization shuts off and convection is explicitly resolved, we get a much better representation of equatorial waves. And that means we could now harvest, or exploit, this predictability to make better forecasts.
And that's really all I had. I was looking at the predictability of weather, but this is really more general: at 20 days we're already getting to the sub-seasonal timescales. In the tropics, it looks like the predictability limit is longer, at least 20 days; actually, we don't know how much longer, because the simulations were only 20 days long. Who knows, it could be 30, 40, 50 days of predictability in the tropics. For the extratropics, it's really maybe two weeks, maybe up to three. The extended predictability in the tropics seems to be related to equatorial waves, and we cannot exploit it yet because our models are really poor at simulating equatorial waves. So we need global models with explicit convection that produce nice equatorial waves; then we can exploit the predictability and actually produce better forecasts in the tropics, in theory better than in the midlatitudes. Yeah, so that's all I had, and I'm happy to take any questions. Thank you very much. Thanks for the talk; it was really different from what we've heard, but super interesting. Yanak has a question, go ahead. Thank you for the great talk, I learned many things. I have a question about the model error curve: it seems like it's always increasing, monotonically. Could it be possible that it sometimes increases and other times decreases? I think so, yeah. This is just one realization, so it's not really robust. I think when you run more simulations, or form ensembles, you would see it sometimes increasing and sometimes decreasing. Yeah, so in that case, how do we determine the predictability, how do we make statements? And it might depend on the variable chosen. Yeah, that's true, it depends on the variables. I was specifically showing wind variables here. I am going to look at moisture variables, and you may get a different answer.
If you look at moisture versus wind... but going back to the robustness: if you have an ensemble, you would just average over it. I would think in the end you would get a relatively smooth curve, and I suspect it would be similar for other variables. But I am going to look at moisture variables, because in the tropics they may be a little more useful than the wind variables. Thank you. And Sam had a question. Oh, hi. Yeah, I really like your definition of predictability; it's a really nice, quantified way to define it. Is there any statistical or theoretical basis for defining the saturation limit as twice the climatological variance? Yeah, so there is a theory behind it, but if you're asking me now, I don't remember it. I knew it, I think, when I did my PhD, but I've forgotten. It comes out of ensemble theory. And it's definitely, yeah, it's curious that it works out that way, that the saturation limit is just twice the climatological variance, but it does come out of a nice theory. I think there's a review paper by Leutbecher and Palmer (2007); that's not the original reference, but there are references in there. Okay, no, thank you. You probably know more about that than me. I found it puzzling every single time. And then you work through it and it becomes clear, but yeah. Yeah. Hi, go ahead, ask your question. Okay, it's a nice talk, thanks. Could you go back to the diagram that shows the error increase for the tropics? Yeah. Hold on, I'm going to bring up the slide and then I'll share my screen. The saturation, the line for saturation: did you calculate it separately for the tropics and for the globe? Yes. So hold on a second, are you talking about this curve? Oh, yes, yes, the saturation limit. Is that the same for the tropics? No, it's not the same. That's why I normalized it.
So it's in percent, because if you used the absolute values, they would all be different. To compare them, I just normalized by the saturation limit. So what you see here is the error saturation, not the error magnitude. Okay. I also noticed that at the beginning, in the first five days, the error in the tropics increases much faster. Yes, yes, that's a good observation. I think that's where the convection comes into play. During the first couple of days, the error does grow faster in the tropics, probably because you have a lot more convection there, and probably the model's representation of convection is not the best. And do you think the slower error growth in the tropics later on is also caused by more persistence in the tropics? Yeah, and I should probably note at this point that this is all done with an uncoupled model. The SSTs are prescribed and they're the same in both simulations. So you could make an argument that the tropical variability here is artificially persistent, because I use the same prescribed SSTs and it's not a coupled model. Okay, thank you. Thank you. Anish had a question. Yeah, I think it's related to the discussion about the initial error growth from convection, Falko. So Tobias Selz and George Craig, I think they had this work where they ran regional but convection-resolving simulations over Europe and looked at error growth timescales, right? What they showed was that in the first six hours you have this convective error growth, which is rapid. Then you get the synoptic error growth, which is slower but still leads towards the saturation limit. And then you have error growth on even larger scales.
But the question is: as we go to convection-resolving global models, will this become a bigger problem for us, in that if we get the convective initiation wrong, the initial error growth can rapidly pull the models away from where we want to be, from the balanced attractor space? Yeah, I think you're right, but I think there's still added value over running coarser simulations with a cumulus parameterization, because, for example, you don't get any good equatorial waves. There are phenomena that you simply don't get when you have parameterized convection, and that's kind of separate from the error growth issue; I think they're two separate things. For error growth itself, you're right: Tobias has shown that it doesn't really matter whether you have cumulus parameterization or explicit convection; the error growth is very fast early on. But with explicit convection you do get other phenomena that are more realistic than in a model with parameterized convection. Sadly, we don't have long enough simulations yet to look at MJOs in a statistical sense, but my guess is that we would actually get better MJOs, because equatorial waves are kind of related to the MJO; they're all in there. I think our current models do produce MJOs, but they're not really realistic, because we don't have any equatorial waves in there. Yeah, we can have a longer conversation. My question was in terms of stochastic modeling: should we rethink stochastic modeling if the error growth has a different nature when we resolve... Yeah, so on the error growth: I think, again, if you're using a coarser-resolution model with stochastic noise, I doubt you'll get equatorial waves that are realistic. So from an error growth perspective, you can simulate error with stochastic methods, but I'm not sure it would translate into getting the coherent features that explicit convection produces. But maybe it does.
I'm tempted to say it might. Yeah. I'm going to jump in with a question before Jacqueline. I was wondering if you could share your thoughts on this: classical predictability work has all been done in terms of saturation spectra and spherical harmonics, and you differentiated from this by looking at different latitudinal bands. But when we talk about S2S predictability, it is all related to state dependence, like the MJO, the NAO, et cetera. You clearly touched on this by looking at the spectra, but I was wondering if you could reconcile the state-dependent view with the sort of homogeneous-turbulence view of predictability. And could you comment on state dependence in this context: would those spectra look the same if you picked another 20-30 day period where the MJO might be in another region? So I think they would look different. First of all, on reconciling these two views: I've been wrestling with that for years and I still haven't figured it out. I think they're just two sides of a coin; they want to measure similar things, but they don't. I think the spectra are not really useful for looking at MJO predictability and such. The state dependence, yeah, that's a good question, and I think it's very important. These error growth curves would probably look different if I used a different initial time. You'd probably get periods with slow error growth, and with a different initial condition you'd suddenly get faster error growth. But we haven't tested that, simply because of the computational cost. And I don't know how to know beforehand which initial state could give you longer predictability, where you could actually make weather forecasts for maybe a month. It's possible that when the MJO is in a very predictable state, you can make weather forecasts that are three, four weeks out.
But I think there needs to be research done to really look into that, and maybe to reconcile the spectral view. I really think the spectral view worked well in the past, but it doesn't really work well for our modern models or modern science questions anymore. It works for homogeneous turbulence. We should talk more; this is a very interesting topic to me. Jacqueline, you have the last question. Hello, thank you so much, that was a really good talk. So you didn't mention explicitly that your simulation was coupled to an ocean model, so I'm assuming it's just an atmospheric model. Yes. So I was wondering: given that the tropics is a coupled atmosphere-ocean system, I'm suspecting that the air-sea fluxes are part of your errors, you know? Yeah. So do you think, if you couple (I know it's expensive, but just thinking about it) that atmospheric model to an ocean model, will you get better predictability? Will it be worse? Yeah, well, thanks for that question. I get that question every time I talk about this topic, and the answer is: I don't know. You can make the argument that if you couple to an ocean, the errors can amplify, so your errors grow faster. But you can also make the argument that maybe the ocean drives the atmosphere more, that with an interactive ocean the atmosphere follows the ocean, and maybe that gives it longer predictability. I think it's very case-dependent. In the grand scheme of things, I would hope that a coupled model would give you lower error, or longer predictability. And finally, this year we're supposed to have a coupled MPAS model, through CESM essentially. So I think at the end of the year we can actually look at that; we're planning to run coupled simulations.
So after years, MPAS will finally be a coupled model later this year. Oh, that's a really good one, isn't it? At one degree, right? Thank you. Well, we're going to test high resolution too. Okay. Thank you so much. Thank you for this inspiring talk, Falko. Our next speaker will be Sergei Frolov, who also gave a talk last week. Sergei used to work at NRL on coupled data assimilation, joined NOAA a while ago, and is now focusing on the development of coupled analysis using the UFS. Sergei, we're looking forward to your talk. I think Sergei is here, but I saw him get a call and step away. Okay. Sure. So Hemi had a question for Falko in the chat, maybe. Oh, I'm so sorry. I apologize, Hemi, I did not see it. That's fine, thank you. Thanks for the nice talk. My question is about the cumulus parameterization: do you think there is a way to improve the equatorial waves with parameterization, instead of increasing resolution? I'm skeptical, because very smart people have worked on cumulus parameterizations for decades, and for some reason we haven't done well enough to produce equatorial waves. So my personal opinion is that the brute-force method, essentially being intellectually lazy and just running higher-resolution models, is better than thinking about how to improve the parameterizations. That's purely for forecasting. If you want to understand the system, I think thinking about it and maybe building better parameterizations is possible. But if we want to have better forecasts within the next five years, I think the way is to go to convection-permitting simulations. And as far as I know, ECMWF is going to go that way, and the UK Met Office is too. So we're already on the route to running global models without cumulus parameterization at many centers. Thank you.