Okay, let's get started. It's my pleasure to welcome our next plenary speaker, Lucas Harris. Lucas is the deputy division leader of the Weather and Climate Dynamics Division at NOAA GFDL. Lucas and I actually met for the first time, I think, back at one of the ASP summer schools about a decade ago, so it's nice how things come around. Especially for the students attending the ASP colloquium: you will know each other for a long time. It's a pleasure, Lucas. Thanks again for accepting our invitation. Oh, thank you so much, Anish, and thank you, Judith, also. Thank you for inviting me to this wonderful colloquium. I've been lucky enough to attend some of the talks around my schedule, and the talks have been fantastic; I hope to measure up to the pretty high standard that's already been set. Anyway, I'd like to talk about our S2S prediction efforts at GFDL. In particular, two things I want to emphasize are the idea of seamlessness, going all the way from climate scales down to relatively short-term weather prediction, and using models developed from both ends to help close the S2S predictability gap. What I want to discuss is the value of a seamless system at the two ends of the prediction realm. One is coupled climate prediction, coupled to a dynamic or mixed-layer ocean, something GFDL has been doing for over 50 years now; the other is going down to convective scales of a few kilometers, at which convection is explicitly represented. I want to start with the third generation of S2S modeling, or of modeling in general, at GFDL, and when I say the third generation, I don't mean that I've skipped the first or second. The first generation was the legendary models developed by people like Syukuro Manabe and Kikuro Miyakoda that gave rise to coupled climate modeling, medium-range weather forecasting, and so on.
The second generation was the Steve Klein era of CMIP-type modeling, led by the CM2.1 model and later CM2.5, CM3, HiRAM, and FLOR. The way that GFDL got into S2S prediction was that we took these excellent climate models developed at GFDL, CM2.1 and CM2.5, which were the best and, by a hair, the second-best models in CMIP3 and CMIP5, respectively, and started pushing them into the S2S range. The way we did that was to first emphasize the role of tropical convection and tropical cyclones in these models. To do that, two things were done. One was to increase the resolution of these models, both in the atmosphere and in the coupled system. The other was to introduce a new convection scheme, the UW-GFDL double-plume convection, which does an excellent job simulating tropical convection, tropical variability, and hurricanes. Two models were developed for this purpose in that generation. One is HiRAM, a 25-kilometer nonhydrostatic atmosphere climate model in which you specify the SSTs. The way we do that is to specify the SST climatology, which evolves with time, plus frozen SST anomalies that are held static in time. That actually had a lot of prediction potential: you can get a lot of the value of seasonal prediction, especially in the tropics and in the summertime during the peak hurricane season in the North Atlantic, without needing to couple to a dynamic ocean. We also explored some new capabilities with HiRAM; I mentioned the nonhydrostatic dynamics, and beyond that we were able to leverage the powerful variable-resolution capabilities, our nesting and stretching capabilities, to go to even higher resolutions. The other approach I want to mention is FLOR-DP, the double-plume variant of the FLOR model, which has a 50-kilometer atmosphere coupled to a roughly 100-kilometer MOM5 ocean.
This did a fantastic job of predicting hurricanes, especially intense hurricanes, with its increased 25-kilometer atmosphere resolution. I'm going to talk about some S2S results from these two models to start out with. One thing I do want to mention is that all the GFDL models I'm going to discuss use the same dynamical core, FV3; we are very unified around one atmosphere dynamical core. All the coupled models I'm going to discuss use the Modular Ocean Model, MOM5 or MOM6, and the FMS coupler, which has a lot of neat features that are unfortunately beyond the scope of my talk. This is essentially the first step toward having a seamless prediction system all the way from short-term weather prediction up to millennial-timescale climate simulation. So here are a couple of examples with the 8-kilometer HiRAM, that is, HiRAM with an 8-kilometer nested grid over the North Atlantic. We were able to get some pretty interesting results from our climate simulations and S2S monthly predictions, as you can see here on the left. This is a plot of observed radii of maximum winds from tropical cyclones, a measure of the size of a hurricane's strongest wind region. You can see that the distribution we get in the 8-kilometer nest matches the observations very well, perhaps a little bit too small compared to the observed hurricanes. What's more interesting is that we are able to take a look at the climatology of rapid intensification, which has been very hard for a lot of models to simulate. Indeed, we were able to capture the right climatology of rapid intensification events, in which the maximum wind speed increases by more than 15 meters per second over 24 hours. That's not possible in a coarser-resolution model, say a 25-kilometer model. So this shows some of the really neat advantages of going to increasingly high resolution.
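The rapid-intensification criterion used here, a maximum-wind increase of more than 15 m/s over 24 hours, is easy to apply to a track's wind time series. Below is a minimal sketch; the function name and the sample winds are illustrative, not GFDL diagnostics code:

```python
import numpy as np

def rapid_intensification_events(vmax, dt_hours=6.0, threshold=15.0, window=24.0):
    """Flag rapid-intensification (RI) events: an increase in maximum wind of
    more than `threshold` m/s over any `window`-hour period.

    vmax: 1-D sequence of storm maximum wind speed (m/s), sampled every dt_hours.
    Returns the start indices of windows whose wind rise exceeds the threshold.
    """
    steps = int(round(window / dt_hours))      # samples spanning one 24-h window
    v = np.asarray(vmax, dtype=float)
    rise = v[steps:] - v[:-steps]              # wind change over each window
    return np.nonzero(rise > threshold)[0]

# Hypothetical storm life cycle (m/s, every 6 h): slow growth, then an RI burst.
winds = [18, 20, 22, 25, 33, 42, 50, 55, 56, 54]
print(rapid_intensification_events(winds))
```

Counting such windows per storm and per season gives the rapid-intensification climatology the nest is being evaluated against.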
Now you can ask, well, what can you do with that higher resolution? This is HiRAM with the 8-kilometer nest, a two-way nest within the global domain. We found that putting in that 8-kilometer nest, here on the right, let us predict monthly accumulated cyclone energy, a measure of how strong your hurricanes are, and major-hurricane accumulated cyclone energy, with a correlation matching what is considered good for most models' prediction of just the seasonal hurricane count in the basin. So this gives us a clue about what we can do with increasingly high resolution, with variable resolution, and with some of the new modeling capabilities in the next generation of models. Now we can shift gears a bit and take a look at a coupled climate prediction system, FLOR-DP. One of the things we want to look at is temperature predictions at weeks three through five; these are S2S timescales, and we can see how this coupled model does. Indeed, we were able to get skillful predictions, shown here in the dots, of temperature anomalies during the wintertime at weeks three and four and, in some places, out to week five. Furthermore, we found that a lot of this prediction skill came from our skill at predicting three modes in particular: one associated with ENSO, one with the North Atlantic Oscillation, and a Eurasian meridional dipole mode. That's where a lot of the skill for these temperature anomalies is coming from: being able to predict these large-scale modes that we've heard a lot about during this colloquium.
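Accumulated cyclone energy, the skill target just mentioned, has a standard definition: 10^-4 times the sum of squared 6-hourly maximum sustained winds, in knots, over all records at tropical-storm strength or greater. A minimal sketch with an invented single-storm track:

```python
import numpy as np

def accumulated_cyclone_energy(vmax_kt):
    """Accumulated cyclone energy (ACE) from 6-hourly maximum sustained winds.

    Standard definition: 1e-4 * sum of squared winds (knots) over all 6-hourly
    records at tropical-storm strength or greater (>= 35 kt).
    """
    v = np.asarray(vmax_kt, dtype=float)
    v = v[v >= 35.0]                  # only tropical-storm-strength records count
    return 1e-4 * np.sum(v ** 2)

# Hypothetical 6-hourly track winds in knots; sub-35-kt records contribute nothing.
track = [30, 35, 45, 60, 80, 95, 70, 40, 25]
print(accumulated_cyclone_energy(track))
```

Summing ACE over all storms in a month gives the monthly ACE whose forecast-observation correlation is being quoted.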
One thing you don't see here yet is the MJO, which, at least in this model, turned out not to have a whole lot of impact on predicting these temperature anomalies, but it does have use in other domains, as I'll show a little later on. We can start thinking about how we can predict the MJO and its impacts as well. One of the big questions remains: what does it take to get a good MJO forecast? In CMIP5, a lot of models did very poorly with the MJO; these were models developed in the late 2000s. Indeed, the GFDL suite of models was among them: the CM2.5 and CM3 models didn't do very well for the MJO. But over time we continued to develop our models, and in AM4, where in this case I'm showing a prototype of what became the CMIP6-era, current-generation GFDL climate model, we found that if you just specify SSTs you do not get a good MJO. But when you couple to a MOM6 ocean, even here at 100 kilometers, you get an actual propagating MJO. It's not as strong as what we see in observations, but you do get a useful MJO simulation that can be used for climate studies, in particular for changes under climate change. So it's really ocean coupling that makes the difference, as long as you have an atmosphere model that is good enough, especially in things like the convective parameterization, to support it. Again, you can say, well, we can predict the MJO; so what's the value of that? One interesting thing about the MJO that we follow a lot in the summertime is that during the active phase of the MJO, a certain set of MJO phases, hurricanes and tropical cyclones are more common in the Atlantic than during the inactive phase of the MJO.
Observations show this pretty clearly when you average over a large number of seasons, but to the best of my knowledge no model had been able to pick this up until we used HiRAM, the 25-kilometer HiRAM. We ran a large number of years, and indeed we were able to reproduce this MJO-tropical cyclone link. This could be a very important step toward getting good subseasonal predictions of the MJO and of its impacts. That was then; this is now. We can proceed to our fourth, current generation of S2S prediction, and I'm going to discuss the two prediction models, SPEAR and SHiELD, that we've developed in recent years at GFDL. SPEAR is part of the GFDL seamless modeling suite and is itself a seamless system for seasonal-to-decadal prediction. It is the NOAA Research earth system model, as stated in the most recent implementation plan, and is being used as the UFS decadal-to-centennial application: it will extend the Unified Forecast System, which the National Weather Service is developing with OAR and community partners, into the climate realm. SPEAR comes in a couple of different flavors. One is SPEAR-medium, which has a 50-kilometer AM4 atmosphere with the new convection scheme and some other cool new things in it, coupled to a 100-kilometer MOM6 ocean, the current generation of the Modular Ocean Model. This is a very efficient model, yet we'll show that it has very strong prediction capabilities. This model runs in real time and is submitted to the North American Multi-Model Ensemble run at the Climate Prediction Center. There's also a 50-kilometer SPEAR large ensemble that's very useful for climate-variability studies; you can go to our website to learn about all of this. We also see that it has very good variability and a very good mean climate state, which gives us an idea that it should be good at predicting the MJO.
So we decided to take this system and apply it toward S2S predictions. The person leading this, Baoqiang Xiang, has done 45-day forecasts every five days over a 20-year period, focusing here on the winter seasons. We initialize the atmosphere and SSTs, and we're using only a 10-member ensemble, which is relatively small, but we already see very good results from that. Here's an example of those good results: the MJO prediction skill. Again, this is November-through-April forecasts every five days over 20 years; we calculate skill in the usual way with the RMM index and look at the anomaly correlation coefficient (ACC). What we find is that the skill is actually quite good. If you take the full ensemble, you get 18 days of good predictability, ACC above 0.7, and 30 days of useful predictability, ACC above 0.5. Indeed, we find this improves for fast-propagating events; there are some opportunistic forecasts you can make in which the MJO predictions from this ensemble get even better. The first thing I want to point out is that the ensemble does get you extra predictability over the individual ensemble members: if you just ran one member as a deterministic forecast, you'd only get 23 days of forecast skill. The second thing is that 30 days is better than all the models that have been previously investigated except the European Centre's; however, the European Centre is only a couple of days better. I don't mean to disparage the European Centre or any of the other models, which are all fantastic, frankly, and the ability of climate models to improve their MJO prediction skill without needing to do exotic things like superparameterization or 3-kilometer resolution everywhere.
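The RMM-based skill quoted here is conventionally a bivariate anomaly correlation over (RMM1, RMM2) pairs at a fixed lead time, with ACC above 0.7 called good and above 0.5 called useful. A sketch of that metric (the sample values below are made up for illustration):

```python
import numpy as np

def bivariate_acc(obs_rmm, fcst_rmm):
    """Bivariate anomaly correlation of (RMM1, RMM2) pairs, the standard MJO
    skill score at one fixed lead time.

    obs_rmm, fcst_rmm: arrays of shape (ncases, 2).
    """
    o = np.asarray(obs_rmm, dtype=float)
    f = np.asarray(fcst_rmm, dtype=float)
    num = np.sum(o[:, 0] * f[:, 0] + o[:, 1] * f[:, 1])
    den = np.sqrt(np.sum(o ** 2)) * np.sqrt(np.sum(f ** 2))
    return num / den

# A perfect forecast gives ACC = 1; a uniform 90-degree phase error gives ACC = 0.
obs = np.array([[1.0, 0.0], [0.6, 0.8], [0.0, 1.2]])
print(round(bivariate_acc(obs, obs), 3))                # 1.0
rotated = np.column_stack([-obs[:, 1], obs[:, 0]])      # each RMM vector rotated 90 deg
print(round(bivariate_acc(obs, rotated), 3))            # 0.0
```

Sweeping this over lead times, and finding where the curve crosses 0.7 and 0.5, yields the 18-day and 30-day numbers quoted in the talk.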
I think it's one of the great success stories of climate science over the last decade, and one of the real things the climate modeling community can be very proud of, both here in the United States and worldwide. What I do see is that the amplitude of the MJO decreases faster than observed. We have some reason to believe, from the later simulations I'll show you, that this is perhaps a resolution issue, since we're still at 50-kilometer resolution. The other thing you can see is that the spread is still not as high as the root-mean-square error, which indicates that we're still somewhat under-dispersive in this ensemble. The MJO itself is a wonderful thing to predict, but what you can do with it is what really counts. You can compare the teleconnections for all the different flavors of the MJO here. A few different flavors, or types, of MJO events have been identified, and they all have different predictabilities. In particular, we find that some of these modes are more predictable than others. For the fast MJO, the MJO itself is more predictable, and so are its teleconnections, which are particularly important: I live right here in New Jersey, where, as in much of the eastern United States, a lot of people live, so this is one of the most valuable ones to predict, and indeed we are able to do a good job predicting it and its teleconnections. I think the same is true for the jumping MJO, which has more prediction skill both on the East Coast and the West Coast of the United States, as well as in these ecologically sensitive areas in the Gulf of Alaska. The teleconnections of the standing and slow MJOs are a bit more difficult to predict. A big reason for that is simply that those MJOs, particularly the standing MJO, have been shown to be more difficult to predict in SPEAR.
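The under-dispersion diagnosis, ensemble spread smaller than the RMSE of the ensemble mean, can be sketched with synthetic data; the noise levels below are arbitrary, chosen only to manufacture an under-dispersive toy ensemble:

```python
import numpy as np

def spread_and_rmse(forecasts, truth):
    """Ensemble spread (mean ensemble standard deviation) versus the RMSE of
    the ensemble mean. In a well-dispersive ensemble the two are comparable;
    spread below RMSE indicates under-dispersion.

    forecasts: array (ncases, nmembers); truth: array (ncases,).
    """
    f = np.asarray(forecasts, dtype=float)
    t = np.asarray(truth, dtype=float)
    spread = np.sqrt(np.mean(np.var(f, axis=1, ddof=1)))
    rmse = np.sqrt(np.mean((f.mean(axis=1) - t) ** 2))
    return spread, rmse

rng = np.random.default_rng(0)
truth = rng.normal(size=2000)
# Members clustered tightly around one noisy estimate: an under-dispersive ensemble.
members = truth[:, None] + rng.normal(size=(2000, 1)) + 0.3 * rng.normal(size=(2000, 8))
spread, rmse = spread_and_rmse(members, truth)
print(spread < rmse)
```

The member-to-member scatter (about 0.3 here) is much smaller than the shared error of the ensemble mean, which is exactly the spread-below-RMSE signature described in the talk.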
This is very good progress: we can capture not just the MJO itself but what its significance is for those of us living here in North America, and elsewhere as well. The other modeling system I want to discuss is SHiELD, which stands for the System for High-resolution prediction on Earth-to-Local Domains. This is more of a weather model that we've developed; it grew out of the NGGPS project that we're working on with the Weather Service and, again, the broader community. You can find more information about SHiELD's configurations, including real-time forecasts and links to the papers we've written on it. SHiELD is designed as a unified system for weather-to-subseasonal prediction that heavily utilizes the nonhydrostatic dynamics and the variable-resolution capabilities within FV3. The way I like to say it is that this is a UFS implementation that is one code, one executable, one workflow: you can do all these different applications from the same code base. It's also available as a container, and we've submitted a paper to Geoscientific Model Development that describes this container. The configuration I'm going to talk about right now is called S-SHiELD, which uses a 25-kilometer atmosphere with weather-model physics. I particularly want to point out the mixed-layer ocean, which is a much simpler ocean than a fully dynamic ocean; it's very cheap, essentially a few lines of code, and this time it's not just climatology plus those frozen anomalies again. We'll actually find that just having a mixed-layer ocean can be a very powerful thing. There are two results from S-SHiELD in particular that I want to point out, starting with the diurnal cycle of surface precipitation.
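A mixed-layer ("slab") ocean really can be a few lines: the SST responds to the net surface heat flux with the heat capacity of a fixed-depth layer, dT/dt = Q_net / (rho * cp * h). The sketch below shows that core update; the function name, mixed-layer depth, and time step are my assumptions for illustration, not SHiELD's actual code:

```python
import numpy as np

def step_slab_ocean(sst, q_net, h=50.0, dt=3600.0, rho=1025.0, cp=3990.0):
    """One time step of a slab (mixed-layer) ocean.

    sst: SST field (K); q_net: net downward surface heat flux (W m^-2);
    h: mixed-layer depth (m); rho, cp: seawater density and heat capacity.
    The SST warms or cools with the flux; no dynamics, no advection.
    """
    return np.asarray(sst, dtype=float) + dt * np.asarray(q_net, dtype=float) / (rho * cp * h)

# 200 W/m^2 into a 50 m layer for one hour warms the SST by a few millikelvin.
sst = step_slab_ocean(np.array([300.0]), np.array([200.0]))
print(float(sst[0]))
```

Unlike frozen SST anomalies, this lets the ocean surface respond to the atmosphere, which is the feedback credited later in the talk with extending MJO skill.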
Focusing in particular on the warm-season diurnal cycle here, we can compare against our 13-kilometer SHiELD, which is our flagship SHiELD configuration used for medium-range forecasts, a model kind of like the GFS. What we find is that this 25-kilometer SHiELD gets the phase of precipitation just about right, both over tropical land and Northern Hemisphere land, and specifically over the United States. This is actually a superior diurnal cycle to the CMIP5 models. You can see that its amplitude is a little bit low, especially here in the United States, where a lot of that warm-season precipitation comes from mesoscale convective systems that are hard to resolve. That is rectified by going to 13-kilometer resolution, at which we fully capture the right phase and amplitude of the diurnal variation. I don't mean to belabor the point, but there are two things about this. One is that weather models are evaluated partially on their six-hour precipitation prediction skill; if you mess up your diurnal cycle, you're going to mess up your forecast. The other is that a good diurnal cycle, we found, can improve our MJO prediction, and our intern, Stella Heflin, has actually done some interesting work on that this summer. One of the big things we find is that simply by having a mixed-layer ocean instead of specified SSTs, we extend our MJO prediction skill by eight days. This is a pretty major result: a very simple change that yields much better prediction of the Madden-Julian oscillation. Okay, now I want to discuss a bit about convective-scale S2S, an exciting new possibility at GFDL that we're exploring with our SHiELD configurations.
One is to put a nested grid over the area of the Madden-Julian oscillation, over the Maritime Continent. This was actually suggested to me by Tim Palmer at a European Centre workshop a couple of years ago. He said that if I wanted to improve S2S prediction over the United States, instead of putting a nest over the United States I should put it over the Maritime Continent, improve the MJO, and thereby, presumably, improve its teleconnections as well. So when we did that, we took a 16-kilometer global domain and put in a 4-kilometer nest, and we found that during the DYNAMO period, for cases that began in phases three and four, just before the MJO enters the Maritime Continent, where these models had the biggest trouble simulating it, putting in that nested grid extends useful predictability out to 39 days, which is a pretty neat result. This is all done with a relatively efficient configuration: you can run 40 forecast days in eight hours on 4,000 cores of a relatively old supercomputer. We did find that there are some challenges in phases six and seven: once events get into the western Pacific, both the global domain and the nested domain have some trouble propagating the MJO correctly through there. We actually have a solution for this problem, so we're going to continue developing this model. Another cool thing we can do is put a nested grid over the continental United States and look at precipitation systems, particularly severe weather. This is a 5-kilometer nest over the continental United States, what we call C-SHiELD, for the continental United States. Again, this is a very efficient model, and we find that the diurnal cycle, in both phase and amplitude, for these warm-season, springtime forecasts is just about right.
The amplitude and phase of precipitation are correct in these models day by day. We find there's a dry bias that develops in the later weeks. Once again, we have a solution for this: we've improved our model in this past year to reduce some of our surface biases, and we've seen some pretty nice improvements from that in our short-range, five-day severe-storm forecasts at 3-kilometer resolution. We predict severe storms by looking at their proxies in terms of rotating updrafts. At five kilometers you can resolve rotating updrafts, in C-SHiELD and also in other FV3-based models, and by looking at anomalies in updraft helicity, the severe-storm signature, we find there is skill at predicting severe weather outbreaks on subseasonal timescales out to week four, especially in parts of the nation, say up here in the northern Plains, where severe weather isn't as common at this time of year and where predicting anomalies in severe-weather activity is most valuable. Predicting severe weather in the southeastern United States and the southern Plains is different: there, if you simply forecast that there's going to be severe weather, it's going to be a little harder to improve on that. Finally, I want to briefly discuss our global cloud-resolving model, called X-SHiELD, GFDL's FV3-based global cloud-resolving model, which we've contributed to both phases of DYAMOND. We finally get excellent tropical cyclones and penetrative convection in these models. It's a very fast global cloud-resolving model, the fastest nonhydrostatic one in DYAMOND phase one: 20 days per day with about 14,000 cores. One question I keep coming back to is what exactly a good purpose for these global cloud-resolving models is, beyond being a nice tech demo and a preview of what medium-range forecasting is going to look like in the future.
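Updraft helicity, the rotating-updraft proxy mentioned here, is conventionally the integral of vertical velocity times vertical vorticity over the 2-5 km layer. A single-column sketch with an invented mesocyclone-like profile (all values are illustrative, not model output):

```python
import numpy as np

def updraft_helicity(w, zeta, z, z_bot=2000.0, z_top=5000.0):
    """2-5 km updraft helicity, UH = integral of w * (vertical vorticity) dz,
    the severe-storm signature used in convection-permitting forecasts.

    w: vertical velocity (m/s), zeta: vertical vorticity (1/s), z: heights (m),
    all 1-D column profiles. Returns UH in m^2 s^-2 (trapezoidal integration).
    """
    w, zeta, z = (np.asarray(a, dtype=float) for a in (w, zeta, z))
    mask = (z >= z_bot) & (z <= z_top)
    integrand = w[mask] * zeta[mask]
    zz = z[mask]
    return float(np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(zz)))

# Hypothetical strong rotating updraft centered in the 2-5 km layer.
z = np.arange(0.0, 10001.0, 500.0)
w = 20.0 * np.exp(-((z - 4000.0) / 2500.0) ** 2)
zeta = 0.008 * np.exp(-((z - 3500.0) / 2000.0) ** 2)
print(updraft_helicity(w, zeta, z) > 75.0)  # exceeds a commonly used severe proxy
```

Mapping this quantity over a forecast domain, then looking at its anomalies relative to climatology, is the week-1-to-week-4 outbreak diagnostic the talk describes.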
We can use it to study how resolved convection interacts with the large scales. This is a somewhat unique problem, because in traditional global models we parameterize convection, and those schemes have tuning parameters, so you can always tune them to get a good result. That's harder to do when you're explicitly simulating convection and the updrafts form by themselves. So here's an example of an updraft within X-SHiELD; here is a cross section through an updraft only about two grid cells (2Δx) wide. You see an example here of transport of tropical planetary-boundary-layer air into the free troposphere in the tropics, which is then entrained into the higher latitudes through the Hadley cell. I'm running out of time, so unfortunately I have to skip this, and also, unfortunately, our community GridTools effort, GT4Py, to port FV3 and the UFS GFS physics into this new domain-specific language so they can be compiled for a large number of computing platforms, such as GPUs, and whatever comes next after GPUs, or even the new versions of GPUs that seem to be introduced every week. Finally, a few thoughts about S2S prediction. We've talked a lot about how there's a bit of a gap between weather forecasting and climate modeling that S2S fits into. Unifying models, something that's being done a lot here within NOAA and also by partners at NASA, can fill in the S2S gap, but it's a hard scientific problem, not just an engineering problem of sticking your models together like Legos. To really take advantage of this unification, I think a broader view of the earth system is really necessary, given how everything tends to get knitted together on these longer timescales. You're not just blending together all these different systems.
You're also talking about how the variability in the different components interacts, and about what happens, on longer timescales, to the things we typically predict at very short ranges. I think the kind of reductionist approach of studying each individual phenomenon in isolation, or developing a particular component in isolation and then stitching everything together, isn't really going to work. That brings me to the next point: you really need to take a holistic view, and you really need people who are going to think about how to blend everything together; the pieces in a modeling system must work together. To wrap up, since I know I'm running short on time: this is an excellent aspirational goal, to try to unify all of our modeling systems across all these different applications. We may never get to a truly unified seamless system; I think it is possible, but it's not assured at this point. Still, I really think we do need to try at least that sort of grand unification, and at the very least, trying will get us better models. That's the real value of this whole activity: can we make better weather forecasts, can we make better climate simulations? And with that, I'll stop there. Thank you so much, everybody. Thanks a lot, Lucas; that was a really comprehensive talk, and thanks again for introducing us to so many different systems. Any questions for Lucas? I don't see one in the chat yet. Lucas, I had one question; maybe it's more of a philosophical question, based on this slide you put up, and also on the cloud-resolving, or convection-permitting, results you showed from the Maritime Continent example of improving the MJO prediction skill. As we go toward seamless prediction and a unified system modeling framework:
Do you think our development will be more targeted toward user-defined needs? For instance, if we want to improve the MJO and the MJO's teleconnections, we need to improve the resolution in specific places and get that right, but that comes at the cost of resolving other things, like the submesoscale in the ocean or other processes in the earth system. So where should the cost-versus-benefit of resolving or eliminating processes in the full earth system be decided: should it be defined by user needs, or by specific processes in the earth system? That's a fantastic question, and I don't think there is a single good answer to it. But I think maybe the best solution is having a diversity of models and a diversity of modeling centers that are working on new ideas in modeling. I mentioned this in a talk with a lot of European guests a few weeks ago, and I got an interesting reaction. If you take a look at the climate modeling community here in the United States, there are people who bemoan that there are too many climate models; people say there are eight global climate models. I think that's actually a wonderful thing, and if you take a look, US climate modeling is the best in the world: in every CMIP cycle, it's NCAR, it's GFDL, and so on at the top. You see the models being developed by NASA, by the Navy, by DOE, and they're all producing wonderful science, wonderful predictions, wonderful analyses, and wonderful tools of societal impact. So I really think that to some extent it's going to have to be user-defined problems, but at the same time we need to develop those individual processes as well. And having this diversity of models here in the United States is really our great strength; it's really diversity that drives the field forward.
In fact, you can compare that against what's happened to regional weather models in the United States, where everybody was forced into one solution, and that has felt kind of stagnant over the last two decades. Thanks, Lucas. Chidong, you have a question, and then Judith after Chidong. Chidong, would you like to unmute and ask? Yes, can you hear me? Yeah. Thank you, Lucas, for the presentation. I just wonder: there's a lot of push toward global cloud-resolving models, and I know at some point there were also attempts at an adaptive cloud-resolving global grid model. Can you comment on the pros and cons of the two approaches? So by adaptive, do you mean a grid that dynamically refines and coarsens itself during a simulation? Yeah, in time and space. Okay, so that is again another interesting question. One thing about adaptive mesh refinement, which has been very popular in the computational fluid dynamics community: it's been difficult to get a lot of these approaches to work in the atmosphere, and my understanding is that the big problem is what exactly your refinement criterion should be. I think that worldwide there won't be just one refinement criterion; a criterion that works for tropical cyclones, say, probably won't work for heavy rainfall in the midlatitudes. There are some approaches that do work quite well, especially the moving nests that were in fact invented here at GFDL and are used in HWRF and some of the hurricane models. Overall, these variable-resolution approaches, I think, are very nice ways to get started toward very high resolution at relatively low computational cost. But they do add complexity to your modeling system.
On top of that, we do run into some problems with the issue of how you communicate from your high-resolution region out to your global domain. That's not necessarily guaranteed to work well, especially if you have parameterizations that are tuned around certain resolutions; scale awareness helps a bit, but it's not a panacea for all these problems. However, I have to say that I really do like the idea of variable resolution, and as for the people working on adaptive grid refinement, I really like the fact that people are still working on that, because it could be a very powerful tool in the coming years. Thank you for that excellent question. Thanks, Lucas. Judith: thanks for your talk. You showed that the MJO ensemble was under-dispersive, and I was wondering if you could comment a little on the model-error schemes you are planning to use in these different model configurations. Okay, thank you for that question. In SPEAR we're using the SPPT scheme that was implemented in the GFS, and we did find that it helped improve the skill of the MJO by a decent amount. It did take some work to get it to give the best results, but we did see that it helped improve the ensemble spread a bit, and the prediction skill. What we actually found is that it helped improve our hurricane intensity as well, which is a little bit counterintuitive; I don't recall the details exactly, but we did find that it helped improve our simulation of tropical cyclones, which is, again, kind of a counterintuitive thing. I want to look into other approaches, like the PSL cellular-automaton approach of perturbing the parameters within the convection scheme, particularly some of the things we don't have a good understanding of, rather than perturbing the tendencies.
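The core of SPPT is multiplying the net physics tendency by one plus a smooth, correlated random pattern. The sketch below shows only that core step, with an AR(1) process providing temporal correlation; the spatially correlated spectral pattern of the real scheme is omitted, and all names and values are simplifications, not the GFS implementation:

```python
import numpy as np

def sppt_perturb(tendencies, pattern, clip=0.9):
    """Schematic SPPT step: scale the net physics tendency by (1 + r), where r
    is a correlated random pattern clipped so the factor stays positive."""
    r = np.clip(pattern, -clip, clip)
    return np.asarray(tendencies, dtype=float) * (1.0 + r)

def ar1_pattern(shape, n_steps, phi=0.95, sigma=0.1, seed=1):
    """Evolve a red-noise (AR1) pattern; the innovation scaling keeps the
    stationary standard deviation at sigma, giving temporal correlation."""
    rng = np.random.default_rng(seed)
    r = np.zeros(shape)
    for _ in range(n_steps):
        r = phi * r + np.sqrt(1.0 - phi ** 2) * rng.normal(0.0, sigma, shape)
    return r

# Perturb a uniform tendency field with the correlated pattern.
r = ar1_pattern((4, 4), 100)
perturbed = sppt_perturb(np.full((4, 4), 2.0), r)
print(perturbed.shape, bool(np.all(perturbed > 0)))
```

Because every physics term is scaled by the same factor, the perturbation is multiplicative and sign-preserving, which is part of why it is "ad hoc but safe" compared with perturbing individual, poorly known parameters.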
Perturbing the tendencies is a little bit of an ad hoc thing, because it breaks energy conservation to some extent, and you actually do have a good understanding of how the atmosphere is heated even if the processes leading to that heating are poorly understood. At the same time, things like the entrainment rate in a deep convective plume, or the mass flux, are poorly understood; those are really the things that you do want to perturb. The cellular automaton approach being pioneered there at PSL is, I think, a fantastic idea, and I want to be able to introduce that within SHiELD and SPEAR. Okay, so thank you so much; that's a fantastic question. Thanks for that. Thanks, Lucas. So, we do have some more time. Rich Neale, due to technical difficulties, is unable to join us, so we can take one last question for you, Lucas, if that's okay, and then we move to the student presentations. Andy, would you like to unmute and share? Yeah, really interesting, exciting work; thanks for a great presentation. I liked hearing you discuss this seamless-system concept. It's been bandied about for a long time, but having some more pragmatic comments on what might actually be possible in that regard is really nice to see. But that wasn't actually my question; I just wanted to comment on that. I'm wondering: you mentioned that the ensembles were kind of small for resolving signal versus noise, and that you could see the benefit of the ensemble mean over individual members. How far could you push with larger ensembles? What would be feasible, and what do you think the potential benefits of increasing ensemble sizes would be? That's a good question. There are two points there. On the seamless system, I have to admit that I used seamless in three different senses in my talk.
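The SPPT idea mentioned above can be sketched in a few lines: multiply the net physics tendency by one plus a bounded random pattern, so each ensemble member feels a slightly different forcing. This is a minimal illustration, not the actual GFS implementation; a real scheme uses a smooth, spatially and temporally correlated pattern rather than the white noise used here, and the amplitude and clipping values below are made up for the sketch.

```python
import numpy as np

def sppt_perturb(tendency, rng, amplitude=0.5, clip=0.9):
    """SPPT-style multiplicative perturbation of a physics tendency.
    r is clipped so the perturbed tendency keeps its original sign;
    white noise stands in for the correlated pattern a real scheme uses."""
    r = np.clip(amplitude * rng.standard_normal(tendency.shape), -clip, clip)
    return tendency * (1.0 + r)

rng = np.random.default_rng(0)
tend = np.full(8, 2.0)                 # uniform heating tendency (K/day)
members = [sppt_perturb(tend, rng) for _ in range(10)]
spread = np.std(members, axis=0)       # identical members would give zero
print(bool(spread.min() > 0.0))        # the perturbations create spread
```

Because the perturbation is multiplicative on the total tendency rather than on uncertain parameters like entrainment rate, energy conservation is only approximate, which is the "ad hoc" aspect discussed above.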
There's seamlessness all the way from weather to climate in a single modeling system, seamlessness between slightly different modeling systems that share the same framework, and so on. So I have to apologize for what is kind of a dodge in that sense; seamless is kind of an undefined concept, in my opinion. But you raise an excellent point about using larger ensembles. We found that we can get a good result with 10 ensemble members, and there's nothing stopping us from using more. For the SPEAR large ensemble, there is a 30-member ensemble that has been released to the public; those are multi-decadal simulations in that case. As for the exact ensemble size, I know there's a lot of work that could be done to say what the ideal size is, whether for data assimilation or for actually producing the prediction ensemble. I think that having a bigger ensemble could be a useful thing when you have extra computing time. But at the same time, if you had a million-member ensemble of a 50-kilometer model, you could still not simulate severe thunderstorms. So there will always be value in going to increasingly high resolution to resolve new phenomena, or in trying new approaches in your model: variable resolution is one, and trying different physics out, which is what we've been doing here at GFDL with the weather and climate model physics, is another possibility. Like I said, I think a diversity of approaches is important. UKMO and the European Centre have done fantastic jobs developing ensembles, and there's been really great work going on here in the United States as well. So once again, I don't think there's a well-defined answer to that; I think both approaches need to be followed, and I think we have a big enough community to do that. Oh, thanks. Great. Thank you. Thanks, Andy. Thanks again, Lucas.
Great talk and a great discussion as well. Wonderful. Thank you, everybody. Thanks for inviting me. Thanks.