Hi Brad, I think we can get started whenever you're ready. Okay, excellent. Thank you. So hello, everyone. I want to welcome all of you to today's BASC session, entitled Advancing Subseasonal to Seasonal Forecasting. I'm Brad Coleman, an atmospheric scientist and a member of BASC. I'm also currently serving as president of the American Meteorological Society. I had the opportunity to serve on the team that designed this session; we had a lot of fun putting it together, and we hope we'll have a fascinating discussion today on recent and future gains and challenges in the S2S arena. We owe a debt of gratitude to the speakers and panelists who have agreed to join us today, and I want to take time to thank all of them, and also the excellent National Academies staff. If we move to the next slide, please. BASC is the Board on Atmospheric Sciences and Climate. It is part of the National Academies of Sciences, Engineering, and Medicine, one of 11 boards within the Division on Earth and Life Studies, and at BASC we have a very diverse portfolio across weather, water, climate, and Earth system science in general. Within that diverse portfolio, several particularly relevant interests include the three here: advancing relevant methodologies and technologies; enhancing structure and operations to optimize all components of the observing and forecasting system; and maintaining a focus on the societal impacts of the scientific aspects of our mission. The topic of S2S is not new to BASC. Most recently, in 2016, a consensus study was completed that provided a rather comprehensive research agenda in this area. On top of that overarching research agenda, a very notable goal, or challenge, from the 2016 report is shown in bold: getting S2S forecasts to be as widely used within 10 years as weather forecasts. Next slide, please.
It's been almost 10 years since the report, and we certainly feel it is time to reflect on it and think about whether we should do another National Academies activity or whether BASC should support S2S progress in other ways. The three bullets of interest here are what we saw as primary, and what we wanted to focus on in today's S2S session. First, considering how S2S forecasts can be made more useful. This is important across BASC in general, and it was particularly important in the 2016 study you see here, which paid attention to both the scientific side and the user side: how can we make this information more relevant to the broader community and society in general? Since the 2016 report, a lot more attention, energy, and excitement has been directed at data-driven methodologies, so we want to really focus on that aspect as well. And then finally, in the roughly seven years since the study, given the typical trend in weather forecast improvement, we've probably added about one day of skill to the weather forecasts. So how are we doing on the S2S side? And when we look at these two areas, subseasonal-to-seasonal forecasting and weather forecasting, they're beginning to blur and blend in the middle; how can we address that? We have a panel on that as well. If we can now move one more slide here, I believe. This is what the agenda looks like, very quickly, because we'll be introducing each panel separately as we move through the three-plus-hour agenda. After my welcome, we're very fortunate to have Andy Brown, the director of research at the European Centre for Medium-Range Weather Forecasts, and, very importantly to point out, a committee member for the 2016 report.
So he'll be setting the stage for us, and I'll talk a little bit about his topics, but it's an excellent way to start the session this afternoon. Then we'll move into the session on uncertainty; Mary Glackin will be the moderator there. Scientific uncertainty is so critical as we set our scientific agendas, but it is also so important when we start looking at value and societal impacts: how do we manage uncertainty, how do we reduce it, and how do we better understand it, so that we can bring the right value, the right products, and the right terms to decision makers and stakeholders? We'll have a short break at 2:50 for 10 minutes, and then, moving to the next slide, we'll move on to a session moderated by Amy McGovern on the emerging role of data-driven methods, the AI and ML/DL aspects: what is the promise, and what are the challenges we should see there? Then we'll take one more break and close out with a third session, which not only brings the afternoon to a close but really asks how we bring together climate and weather for S2S forecasting. We have an excellent panel there as well, and I'll be moderating that. There will be good opportunity for discussion throughout; we'll be having short lightning talks, with a lot of opportunity for Q&A with the audience. I think now we go to the next slide, please, for a few logistics. This is totally virtual, so all of you are out there two-dimensionally, and we ask, first off, that you please keep your microphones muted. We know there are several hundred people registered for this, and we want to keep any interruptions to a minimum. For each panel this afternoon, the speakers will go first; they'll share their thoughts, in some cases with slides, and they'll be sharing their own slides, so there will be some transitions in there. And then we'll open
it up for discussion and Q&A moderated by the BASC members I pointed out. There are a few ways you can interact with us during this process. First, if you have a question during the presentations, go ahead and type it into the Zoom chat window; all of you should have access to that, and we have staff who will be monitoring it, sorting and combining questions, and looking at how we can best handle them. Then, once we get into the Q&A part, where we're actually listening to the panelists answer and respond to questions, you can use the raise-your-hand feature, and again we have staff looking at this. I do want to point out that it's a large audience, several hundred of you; we will be prioritizing BASC members over others, but we hope to get to everyone, and if you see lots of raised hands, make sure you get your question into the chat and we can try to get to it there as well. The final logistical issue: if you have any concerns, or if you can't raise a hand or enter a question, send an email to our staff member Gaskins at nas.edu. Okay, and now I think we're ready for the last slide here, and then right here: the National Academies is committed to principles of diversity, integrity, civility, and respect in all of our activities, and it's important that all participants participate fully in this with us. We want to make sure this atmosphere is one that's free of harassment and discrimination on any identifying factors; you can see several references there as well. Next slide, please. So, without further ado, let's move into it. Again, I mentioned we're very fortunate to have Andy joining us from the UK, from the European Centre, a past committee member. We wanted to have him take this first slot to do a little level-setting, maybe some terminology: what do we mean when we talk about S2S forecasts? What did they mean
then versus what we're talking about now, as this landscape has changed? Andy will also be giving us an overview of the 2016 report through the eyes of someone who was there and has been actively involved in this process for the last seven or so years, and then reflecting on several questions we asked him to look at as we move ahead, really setting the stage for the following sessions. So, Andy, if you're ready, you can go ahead and start sharing your slides; excited to get it going. Thank you very much. Okay, thank you very much; I think my slides are good to be shared for me, I think that's the plan. Okay, thanks very much, and thank you very much for the opportunity to speak today. As Brad said, I'm director of research at ECMWF, although six years ago, when we were doing this report, I was the director of science at the UK Met Office, so I've changed hats in the meantime. Can I have the next slide, please? So, as Brad said, I was asked just to give some reflections on what we said in the report, so obviously the first thing I had to do was a bit of revision to try and remember what we said in the report. If you have the next slide, please: that's just a list of the culpable, the 15 or so of us, mainly from the US, but then Hai Lin from Environment Canada and me from the other side of the Atlantic. We had many meetings and wide engagement with very many people beyond the committee. Next slide, please. So, just to step back: as Brad said, I'm sure this is familiar to most people, but there are different definitions of where subseasonal starts and where seasonal ends. For the purposes of this report, we were taking subseasonal to be two-week to 12-week forecasts, and seasonal to be the three-months-out-to-a-year timescale. You can argue about the exact lengths, but I think the scientific issues and challenges we were covering are not sensitive to that finer detail. Next slide, please. Yeah, so I think Brad said
this: I noticed we came up with the rather ambitious, maybe aspirational, vision that S2S forecasts will be as widely used a decade from now as weather forecasts are today. I don't think I'm going to try and claim we got there, and maybe it was never really realistic, but the sentiment behind it is that there's already huge value, there's the potential for a lot more value, and as a community of scientists and users we wanted to push the research agenda that's going to get us as far as we can in that direction. Okay, the next slide, please. Yes, so this was one of the slides shown when the report was presented when we completed it, and there were four broad headings. Maybe if you just click one time, please — the blue area there. Three of the headings — increase S2S forecast skill, improve prediction of disruptive events, and include more Earth system components — all really come under the category of: we want more accurate forecasts, particularly of disruptive and major events, and including more Earth system components, if you like, is a tactic either to make forecasts better or to be able to give subseasonal forecasts of extra components, hydrology or sea ice. So, all different flavors of better forecasts. And if you click again: the other theme, though — this wasn't just a science push that's needed; these forecasts need to be useful to somebody. So there were various recommendations which I've lumped under used and useful forecasts — a science end, but very much an application end of the research agenda as well. The next slide, please. So there were 16 detailed recommendations; you can obviously see them yourselves if you get the report. I did think of listing them all, but it was going to get a bit dry, so I've just tried to give a flavor of the sorts of areas these recommendations were covering,
and the first two, as I said, cover user engagement and the social science of what makes a forecast useful, used, and interpreted in the right way — various recommendations on that first theme, the application of these forecasts. On the science side, probably as you'd expect, there were various recommendations on models; the importance of reducing errors; what observations we need for subseasonal and seasonal prediction; the importance of data and data assimilation, both for the atmosphere and for other Earth system components; and the idea of bringing in the effect of the ocean, the memory of the land surface, the memory of snow cover — extra Earth system components to give us more predictability. Then, in terms of the forecast systems themselves, there were various recommendations relating to multi-model systems, verification, and understanding forecasts of opportunity — some conditions we can forecast confidently further out than others — plus research-to-operations, and various issues around infrastructure, technology, and the workforce, with this field sitting on the bridge between weather and climate. The next slide, please. I thought I'd briefly mention the broader international context that I think this report sat in. This is a slide on the WWRP/WCRP Subseasonal-to-Seasonal Prediction Project. Andrew Robertson, who I believe is on this call, was the co-chair of this whole global effort, alongside Frédéric Vitart from ECMWF. It ran from 2013 and just finished this year. I thought it was quite interesting looking at the bullet points on this slide summarizing the project: it talks about improving forecast skill with an emphasis on high-impact weather events, and there are bullets on uptake by operations and exploitation by the applications community. I think those two bullets are extremely consistent with the research agenda of making these S2S forecasts better and getting them used. There was also a conscious effort in this S2S project on bringing
together the weather and climate communities, which ties in with the weather-and-climate panel in this meeting today. Another achievement of this project was the big S2S database — a database of forecasts from the major global centers in this space — which is a major facility for research, and I'm sure in this meeting we'll hear in particular about the US multi-model efforts as well. So I thought it was reassuring that the broad WMO-driven research agenda is very consistent with what the 2016 report came up with. Next slide, please. I've just got a couple of slides on whether we are getting somewhere. This first one shows ECMWF examples, just because they were the easiest for me to grab hold of. The plot on the bottom left shows the length of time ahead that we can forecast the Madden-Julian Oscillation, one of the major modes of variability in the tropics and one of the major sources of predictability on subseasonal timescales. We've got year along the bottom, running over about a 20-year period, and if the lines are going up, then our forecasts are getting better over time. The reassuring news is that, much as if I showed a plot from medium-range NWP, these subseasonal predictions of the MJO are getting significantly better: three and four weeks ahead, we can do a decent forecast of the MJO. If you look at the top right, that's another example: week-three forecasts of two-meter temperature in the northern extratropics. Again, this is a probabilistic skill score, and if the line is going up, we're getting better, and I think you'd agree that over a 10- or 15-year period there have been significant improvements. So I think it's reassuring that the research directions we at ECMWF are pursuing are entirely consistent with the broader international community and the agenda
and the report, and I think we are getting better. Go to the next slide, please. This is just one other example that came out of the S2S project, and I chose it as an example of looking at forecasts that are more directly user-relevant — not just using some traditional meteorological measure. This is looking at how accurately we can predict the edge of Arctic sea ice, which might be very important for ship routing. In this plot, because it's an error measure, down is good, and what it's showing, as a function of lead time, is how good different models are at predicting the sea ice. You can see that the best forecast there has a skill horizon of up to a month for the edge of the Arctic sea ice. So we've got skill in these systems, and this is just one example of hundreds, I'm sure, of trying to apply it to user-relevant problems. The next slide, please. Oh, just a quick plug, having mentioned the WMO Subseasonal-to-Seasonal Project: there's a follow-on project now, called SAGE, looking at subseasonal applications for agriculture and the environment. I won't go into the details, but again you can see that for two specific sectors, agriculture and the environment, it has adopted very much the same agenda: when and why do forecasts have skill, how do we communicate it, and how do we get it into decision making? So, up to now, I think I'm concluding that the report was setting the right directions; I still think they make sense now, and I think as a global community we are making progress along these various axes — not enough, and we're certainly not as used as weather forecasts, but we're getting better. So, next slide, please. I was also asked to talk about whether anything is missing from the report. This is the elephant in the room, and a very obvious elephant in the room, and again consistent with the structure of this meeting: the rise of data-driven forecasting and machine learning.
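The skill measures mentioned here — correlation-based MJO skill and a "skill horizon" where skill falls below a usefulness threshold — can be sketched in a few lines. This is an illustrative reconstruction, not ECMWF's verification code: the function names are my own, and the use of the two real-time multivariate MJO (RMM) components and a 0.5 correlation threshold are common conventions, not details taken from the talk.

```python
import math

def bivariate_correlation(fcst_rmm, obs_rmm):
    """Bivariate correlation between forecast and observed MJO (RMM) indices.

    Each argument is a list of (RMM1, RMM2) pairs, one per verification
    date, all at the same forecast lead time.
    """
    num = sum(f1 * o1 + f2 * o2
              for (f1, f2), (o1, o2) in zip(fcst_rmm, obs_rmm))
    den = (math.sqrt(sum(o1 * o1 + o2 * o2 for o1, o2 in obs_rmm))
           * math.sqrt(sum(f1 * f1 + f2 * f2 for f1, f2 in fcst_rmm)))
    return num / den

def skill_horizon(corr_by_lead, threshold=0.5):
    """Lead time (1-based, e.g. days) at which correlation first drops
    below the threshold; None if it stays above throughout."""
    for lead, corr in enumerate(corr_by_lead, start=1):
        if corr < threshold:
            return lead
    return None
```

A perfect forecast gives a correlation of 1.0; treating 0.5 as the limit of useful MJO skill, `skill_horizon` returns the first lead at which that limit is crossed, which is the quantity the MJO skill plots track as it lengthens over the years.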
I don't think, when we were thinking about this seven years ago, that anybody was anticipating the speed of developments here. I won't show a lot on this, but I've just got a handful of slides which hopefully will motivate and lead into the more detailed session coming later this evening. So, next slide, please. This is just looking at medium-range weather forecasting. Two years ago, ECMWF had a strategy of: we're not doing data-driven forecasting; we think the traditional methods will definitely win through. This slide shows just some of the major breakthroughs by various groups around the world in the space of the last year or two; the rate of progress in this field has been astronomical. It's a mistake having slides like this, because you have to update them about every three weeks with the latest breakthrough. So, the next slide, please. I'm still on medium-range forecasting here. From left to right are day-one to day-10 forecasts, and up and down is how accurate we are — this is geopotential height, so how accurate we are at forecasting synoptic systems in the northern hemisphere; up is good. The red line is the IFS, the ECMWF deterministic forecasting system, which I think I can honestly say is the most accurate in the world. The two lines higher up than the IFS are DeepMind's GraphCast model and the ECMWF data-driven model that we've developed in the last year. So it's quite dramatic. On medium-range forecasting, you can still ask questions about what variables are covered and whether the resolution is high enough, but it seems inconceivable for medium-range forecasting that data-driven modeling isn't part of the solution to how we as a community forecast the weather. Have the next slide, or just click on. So the obvious question is: what's the application to subseasonal-to-seasonal prediction? You can think of maybe different levels
of radicalness. One is using machine learning to do cleverer post-processing or calibration of traditional models; you can do hybrid models, where you mix and match and replace parts of the models; or, as I've just been showing for the medium range, you could completely replace the model. I've just got two examples. So, next slide, please. A couple of years ago, the WMO set up an S2S machine-learning challenge, where various groups around the world, in an open-source way, were challenged: can you beat the ECMWF forecasts? Lots of groups took the challenge, and five groups did beat the ECMWF operational forecasts. I think it's fair to say that in this exercise that was mainly done by cleverer machine-learning post-processing on top of the traditional models — they were taking the traditional systems and doing better calibration and better post-processing. So, exciting, but maybe not radically different. Next slide, please — and if we could try to wrap up in the next minute or so — yeah, I've got this one more slide. This is just looking at: let's take the model I showed for medium-range prediction, run it out for longer, and see what happens. For the MJO, for the first couple of weeks it's matching the ECMWF system; at weeks three and four the data-driven models aren't quite matching the traditional physics-based model, but this is the first shot out of the box, so I think there are really exciting open questions as to what this plot would look like even in a year's time. So, the final slide, please. Here's the committee's vision again from last time: will be as widely used a decade from now as weather forecasts are today. I don't think that will be achieved, but we should keep at it; I think the vision it set out was by and large right and remains right. And with one more click, please, I've just
added to what the committee actually said: we should do everything the committee said we should do, and we should take advantage of the opportunities presented by machine learning. That's me. Excellent — thank you, Andy. A great start, a good overview of the meeting and also of what's happened in these last seven years, including your job change. You really covered a lot of the highlights and, I think, what we were hoping to hear as we move into the three following panels. Do we have any questions? I don't see raised hands — if I'm supposed to, I don't see them — and nothing in chat; again, those are the two ways to get to us. We just have a few minutes, maybe five minutes or so, here before we move on. Okay, Libby, excellent. Yeah, Andy, thank you for that great overview. Given your role and how you've seen everything unfold, how do you see the difference between the medium range with the AI models versus the S2S range? You showed that elbow in your final slide, and you said these are really exciting times, but do you see those as different problems, or do we just tweak a few parameters, retrain, and we're good to go? I should give the caveat before saying anything that this is so fast-moving and so new. I showed that really as an example; I certainly think they will do better, and I can think of various things that could be done, changing the training regime. Maybe one caveat I should put in: in general, the data-driven models, by and large, up to now have been deterministic, and clearly, even for the medium range, but certainly for the more extended and seasonal predictions, they need to get to ensembles. So there's definitely work that needs to be done. The further out you go, potentially the more challenging it is, but I don't see any fundamental barrier. My hypothesis would be that in five years' time there won't be a complete takeover by data-driven
methods, but I would expect operational systems to be some mix and match of the two. Okay, thanks, Libby. Sanjeev, you have your question. Yeah, this was great, and it's really good to see you emphasize the data-driven part of forecasting. My question is: where does the fundamental science come into play? If you are using ECMWF data as input to your model, then, yes, machine learning, with its billions of parameters, can give you a better prediction, but what do we understand from it? By emphasizing machine learning so much, are we saying that we do not need to understand the system? Can you comment on that? No, I agree — I think there will always be a need for understanding, and this becomes even more true as you get out through seasonal and into climate-type questions, where hypothesis testing and understanding how the real world works really matter. Even if the operational predictions go to these machine-learning models, they are utterly reliant, at the moment at least, on reanalysis, and that reanalysis itself is absolutely dependent on physics and data assimilation. So even if we took the extreme position — which I don't — of fully data-driven operations, you need to get the data from somewhere, and that data has relied on the traditional understanding. Here we can maybe take one more; it's 11 on the west coast, so it's the top of the hour. A couple of questions, just on the data you mentioned, Andy, about the importance of the data: the first question points out specifically the data gaps in the southern hemisphere, tropical areas, and over the oceans, and asks about their impact on progress here, and then also your thoughts on solutions — if you can keep it short, though. Yeah, I will move on. Certainly, in terms of the atmosphere, we've got tremendously better at understanding
how to properly use satellite data. If you look at weather forecasts 20 years ago, we were massively better at forecasting the northern hemisphere than the southern hemisphere, because we had much more conventional data in the northern hemisphere; if you look now, there's not much difference between them. So purely as an atmospheric problem, I'm not so worried: we still need the conventional data, but the satellite data alongside it fills in the gaps. For subseasonal-to-seasonal forecasting, where you do get into extra challenges is in wanting detailed data on snow or in the ocean — the non-atmospheric quantities which are sources of predictability for subseasonal and seasonal predictions and which you can't measure with a satellite. There's been interesting work looking at what ocean observations are important in the tropical Pacific, the tropical moorings and the like. So yes, there are important issues on the data side. Excellent. I expect that throughout the afternoon the chat will continue, and if the panelists especially would monitor it, if you have the time, and possibly continue or answer some of those questions — but it's important that we stay on our agenda, so at this point I'm going to hand it off, thank Andy one more time — thank you — and then move on to the first panel, which Mary Glackin will be moderating. So thank you, and, Andy, great job; I think that's just what we were envisioning to set the stage here. So, I'm Mary Glackin; I'm the chair of BASC, and, as was mentioned earlier, I'll be moderating this session. The way we're going to run it is we have five speakers, and we're going to hear from them in turn, for five minutes each, in the order presented here, and then we'll reserve 25 minutes at the end for Q&A. All three sessions play together, but this session is really underscoring the usefulness part of
this. So it's digging a little into this issue of improving our understanding and quantification of uncertainty — you can see the kinds of questions we've offered the speakers shown here — and then the actual usefulness of these forecasts. I like the way Andy put it: it's really an aspirational goal to have these forecasts be as used as the weather forecasts are. So without further ado, I'm going to turn the mic over to Steve Yeager, who is going to kick us off. Steve? Well, thanks for the opportunity to share my thoughts on uncertainty in seasonal prediction. I really had to grapple with the vastness of this topic in preparing these slides, so I thought I'd start off with some examples to motivate my thinking. If we start up here on the upper left, here's an example of a skill map from our seasonal prediction system — we call it SMYLE; it uses the CESM2 fully coupled model — and this shows the precipitation skill for March-April-May when we initialize in November. You can see that our skill for this quantity over the continental United States is quite low, below 0.5, so we have inherent uncertainty in producing a forecast of seasonal precipitation from our system, associated with this low skill score. But this is an aggregate skill over all of our verification years, from 1970 to present, and there's certainly this idea of forecasts of opportunity: in some years we might be able to predict this precipitation anomaly map more skillfully than in others. The bottom panel shows the fraction of the skill shown in the top panel that's coming from large ENSO years — years where the DJF Niño 3.4 index is outside of the 1.5-sigma value, roughly 25 percent of our verification years — and you can see that more than 90 percent of our skill for precipitation over California comes from that 25 percent of years. So we have evidence that in certain years we have
an opportunity to make a forecast that might actually have higher confidence. So this is the sort of target field, but underlying it is predictability of the ocean, which we think provides the memory, or the forcing, for the atmosphere. I'm showing you here a map of our SST skill when we initialize in November, as a function of forecast month. You can see that the skill is high in the tropical Pacific but degrades with lead time, and you can also see that our extratropical skill degrades quite a bit; it's unclear to what extent that matters, or adds to the uncertainty in things like our precipitation forecast. And this anomaly skill map is sweeping under the rug the SST bias that develops in a coupled model, so there's uncertainty about whether the atmospheric response is being perturbed by that SST bias development. If we think that ENSO is the really key forcing for the atmosphere, then we can look just at ENSO forecasts, and we have quite high skill for the Niño 3.4 index on seasonal timescales — this is when we initialize in November — with correlations above 0.9, and we're getting a forecast for the coming DJF of plus two degrees Celsius. As we go further back in time, our uncertainty grows because our skill degrades, and you can see that when we initialized this system in May of this year, we were predicting a much larger event, although the uncertainty range did span our eventual two-degree forecast. But we have some evidence that, even at seven-month lead, for large events we are able to predict these events seven months in advance — again, this idea of forecasts of opportunity. And then if we look even further back, at a 19-month lead forecast of Niño 3.4, there's non-stationarity: we can see that in the late 70s and early 80s we actually had quite good skill at predicting tropical Pacific sea surface temperatures more than a year and a half in advance, and that skill has
degraded in recent decades. So there's some non-stationarity in our ability to predict the SST forcing. All of this is just from a single model system; if we look at the IRI plume that was put out in November of this year, you can see that there's quite a bit of intermodel spread in the forecast for DJF Niño 3.4. The multi-model mean is hitting two, so it's in agreement with our system, but clearly there's a lot of uncertainty coming from the different systems. Yeah, Steve, if you could just wrap up in a minute, that would be great. Okay, sure. So, various sources of seasonal forecast uncertainty: some of it intrinsic, associated with atmospheric variability, as Clara Deser has written about in her papers; but also lead-time and system dependence of our ability to predict the forcings; lead-time and system dependence of skill for other forcings, soil moisture and sea ice; uncertainty in the atmospheric response to those forcings; and non-stationarity of all of the above. So I think a way that might be useful to think about this problem is to quantify uncertainty in ways that have been done for multi-decadal projections. We can think about the intrinsic uncertainty in our forecast, the system uncertainty, and then the SST-forcing uncertainty as an analogy to the scenario uncertainty in multi-decadal projections. The analogy isn't perfect, because the system and forcing uncertainties here are not independent, but it gives us a way to frame the problem, and then to think about how we can quantify these different sources of uncertainty in our seasonal forecasts and how we can reduce them. I'm running out of time, so I don't have time to go through all of these bullet points, but hopefully the slides will be available and we can discuss further and take questions. Thanks. Yeah, that's great; thank you, Steve. Yes, put questions in the chat — maybe we can start to look at those now — and we will have the slides available later.
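The forecast-of-opportunity diagnostic described in this segment — comparing aggregate anomaly-correlation skill against skill computed only over large-ENSO years — can be sketched as follows. This is an illustrative reconstruction, not SMYLE's actual verification code; the function names are my own, and the 1.5-sigma default simply echoes the threshold mentioned in the talk.

```python
import math

def anomaly_correlation(fcst, obs):
    """Pearson correlation between forecast and observed anomalies."""
    n = len(fcst)
    mf, mo = sum(fcst) / n, sum(obs) / n
    cov = sum((f - mf) * (o - mo) for f, o in zip(fcst, obs))
    sf = math.sqrt(sum((f - mf) ** 2 for f in fcst))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    return cov / (sf * so)

def opportunity_skill(fcst, obs, nino34, sigma_threshold=1.5):
    """Aggregate skill vs skill over 'forecast of opportunity' years.

    Opportunity years are those where the Nino 3.4 anomaly exceeds
    sigma_threshold standard deviations in magnitude. Returns
    (skill_all_years, skill_opportunity_years, fraction_of_years).
    """
    n = len(nino34)
    mean = sum(nino34) / n
    sigma = math.sqrt(sum((x - mean) ** 2 for x in nino34) / n)
    big = [i for i, x in enumerate(nino34)
           if abs(x - mean) > sigma_threshold * sigma]
    skill_all = anomaly_correlation(fcst, obs)
    skill_opp = anomaly_correlation([fcst[i] for i in big],
                                    [obs[i] for i in big])
    return skill_all, skill_opp, len(big) / n
```

If the large-ENSO subset yields markedly higher correlation than the full record, as in the California precipitation example, that is evidence that much of the aggregate skill is concentrated in a minority of opportunity years.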
So next up is Ben Kirtman. Ben? Yeah, thank you, Mary. Steve covered a couple of the points I wanted to make. When I started to think about this, I started to think about how I categorize uncertainty and how that might be relevant for the conversation here. The first category is uncertainty that I think is largely resolved. There's still some more work to be done, but it's largely initial-condition uncertainty, and this picture on the left-hand side is an example of that: we have an ENSO-forced 500-millibar or 200-millibar pattern, and then we have the noisiness associated with that pattern, the PNA sort of stuff, that we think is due to intrinsic variability. That's much harder to predict, with much less predictability, and it's the destructive and constructive interference of these two modes that leads to some predictability over North America. We think we have a handle on how to talk about that kind of uncertainty and the interaction there; there's still clearly more work to be done, but we largely have a good idea of what to do about it. The second source is what I would call partially resolved uncertainty, and that's the structural uncertainty in our models. Our models are different; they have parameterized physics that we all know is not great, and they do it differently, so there are these large structural uncertainties. In fact, many of the multi-model approaches are trying to quantify, or at least isolate, that uncertainty and how it affects forecasts, and I'm going to provide a bit of a contextual example of the North American Multi-Model Ensemble and how that's evolved. It's partially resolved in the sense that it's ad hoc: it's just the models that are pulled off the shelf, very pragmatic. There are other
techniques to try to resolve this, stochastic physics and other approaches, but it's partially resolved; we have some good ideas of how to proceed within the limitations of the models that exist today. And then the last category of uncertainty, which I think is most interesting and is where the research is going, is unresolved uncertainty: fundamental missing physics. There are things that are completely missing from our models, and it's quite clear they have predictability and can lead to better forecasts and better quantification of forecast uncertainty. Some examples that I tend to work on are westerly wind bursts and ocean mesoscale processes, and I'll show a couple of examples. Some of them are really obvious limitations of our models and missing physics, and some perhaps are less obvious. Just to underscore how the multi-model approach is at least getting at some of that partially resolved uncertainty: this is a very complicated picture, so I apologize for that, but if you just focus on the two circles in the top two panels, it's really an apples-to-apples comparison of using a single-model ensemble versus a multi-model ensemble of identical size, and what you're doing is reducing overconfidence in the forecast. This is really the basis for the multi-model approach, and it's been quite pragmatic and quite effective. And here's just another example. The question always arises: how many models do we need? Here's an example of using either seven or three models, again taken from the NMME project, and so the number of models we think is somewhere between three and seven. The problem is we just don't know which models are the best in any particular forecast. But again, within the context of our existing models, probing that structural uncertainty in an ad hoc way, there's still a lot of work
to be done, but we have some ways of grasping it that we do reasonably well. And just a historical point about how, when models improve, the multi-model ensemble improves: this is showing the history of NMME. This is an anomaly correlation coefficient, and it's showing how well we're predicting global temperatures, SSTs, and the ever-elusive precipitation. We do better with surface temperature and SST, but precipitation remains a big challenge, and I would argue that's because we're missing many processes. Can you wrap up in a minute? Thank you. Yeah, I can wrap up in a minute. Just as an example that I like to show: looking at how ocean mesoscale eddies impact rainfall in the West, a process that is completely missed by the NMME system. But if you take a system that has resolved ocean eddies in it, then you capture that upstream influence of ocean mesoscale processes. And the last example I'm going to put up, and this one's obvious: this is the forecast climatology from one of the NMME models at NMME resolution and at higher resolution. If you want to get the snow right, even close, you have to resolve the topographic features, and if you're not doing that, you're not going to get rain-on-snow right, you're not going to get water resource management right, you're going to make some huge mistakes. So there are missing things that we really need to do a better job on. So let me just end there: there's unresolved uncertainty where we're really not even scratching the surface; I think we're barely doing enough, and I think that has an important feedback onto systematic errors. Systematic errors, and I'll say it one more time, systematic errors are profoundly important, and we're definitely not doing enough to fix those. Initialization and data assimilation, of course,
are important, but I think we're forgetting that there's an initialization problem that's not exactly data assimilation; it's something other, and we need to think about that. This is particularly true for capturing low-frequency variability. And then my last point, which I think is most important: we're not providing a framework where we can rapidly transition research into operations. We've tried over and over again, we've done many things, but we just haven't gotten there yet. We need a framework where ideas can be tested in the models that are used for operational prediction, so that the research community can influence operational prediction immediately. And I'll stop there, and if someone has a question about this lovely picture on the left, I'm more than happy to answer that offline. Okay, I myself could talk about this for a while or ask questions, but we're going to move along here. I will remind the audience to feel free to put questions into the chat. Next we'll hear from Kathy Pegion. Hi, thank you very much. I took this perspective about uncertainty to be about skill and understanding where we stand, so I'm going to talk about lessons learned from the Subseasonal Experiment. I was trying to think about what we have accomplished in terms of subseasonal prediction, in terms of its skill and our ability to make useful forecasts, and I realized that when the report came out, we could not robustly quantify what we could and couldn't do, what our current skill was, where we stood, because we didn't have a good framework for that. Now we can robustly provide skill estimates in terms of deterministic and probabilistic skill. One of the things that really impressed on me what we've learned since this report, and you saw an example of MJO skill: the MJO was the focus of predictability at the time, and now we understand, as you've seen
from the previous two talks as well, that we have to consider many sources of predictability to find skill on subseasonal time scales. I'll highlight that we now have CONUS operational products, and we have tools for forecast guidance. One of the things about this report was that it was very user focused, and I feel like at the time it came out we were not really ready to work with users, because we didn't understand what we didn't know very well on subseasonal time scales; now we see evidence of co-production and working with users. So how does SubX come into play? SubX was the U.S. effort that provided the experimental framework and infrastructure that supported these kinds of accomplishments, the ability to assess our skill and to make probabilistic multi-model ensemble forecasts. We had seven global ensemble prediction systems; it provided 17 years of re-forecasts and seven years of real-time forecasts supporting research to operations, and I think the key factor here is that it provided a public database of re-forecasts and real-time forecasts, which is used for applications, for research, and for training machine learning and AI models. As for the research impact of having this experimental framework, you can see that there are more than 200 citations of this paper. So I really want to highlight what we learned from this, and I feel like there's so much work that's been done that it's really hard to pick individual examples of specific projects, because so much has come together, so I just hit some highlights here as bullet points. We learned that subseasonal prediction is indeed possible; we were not doing it very substantially when this project came out, and now we see it going on robustly. And then I want to highlight the importance of the multi-model ensemble: no single model stands out in the efforts
that were done within SubX. We can't identify one model that's better than all the others, and I think that's really the key: that representation of the uncertainty, the model uncertainty, the parameterization uncertainty, is really being handled by our multi-model ensemble. The other thing I'll highlight again is that skill still varies due to a wide variety of sources of predictability; many different projects are studying different sources of predictability, and multiple sources have to come together to give us a signal in the small-signal, large-noise situation that we have on subseasonal time scales. In a more technical sense, subseasonal is more demanding than seasonal: the compute and data infrastructure demands are not trivial. We found in SubX that there's a demand for subseasonal forecasts beyond just the operational products; there are users who want to take this data and use it for specific purposes. And so now that we understand our skill, we understand, as you've seen referenced, that temperature skill is better than precipitation skill; we know in which regions we have more skill and less skill; we know enough to engage in co-production and collaborate with users. So I'll just highlight a few next steps, things I think we've learned from SubX and need to go forward with in terms of improving our subseasonal skill and usefulness. We need to better leverage forecasts of opportunity. I feel like this is our best hope for better skill on these time scales, and we've seen many studies identifying many different types of forecasts of opportunity, but we are really not using that in a good, systematic way to its fullest. And then I'll underscore what Ben mentioned about an experimental prediction framework: we need a stable, user-focused infrastructure where we have
this prediction framework of re-forecasts, the ability to evaluate skill, and real-time forecasts, and our current subseasonal infrastructure is fragile and not sustainable. And then there's the issue of support for users: these are large, multi-dimensional data sets, and they're challenging for non-specialists. So I think, to really support the uptake of subseasonal predictions and fully meet the goal of the report, we need to provide better support to users in working with these data sets, so they can get to the information they need. I'll conclude there, and thank you very much. Oh, thank you; I was just about to give you a one-minute warning. What a treat. Mike Anderson? Hi. Now for something totally different. My name is Michael Anderson, state climatologist for California. I get to take all this information and try to figure out how to inform resource management and the state's response to extremes. So let's go to the next slide, and we'll look at this first from the biggest scale. The x-axis is the statewide average temperature for a year; the y-axis is the statewide accumulated precipitation. The big triangle is the period-of-record average; circles are the 20th century, squares the 21st century; a few years of note are in there, including 2022 as the gold star and 2023 as the purple star. The easiest question for seasonal forecasting this time of year for California: is it going to be wet or dry? So really we're only looking at where on the vertical axis we're going to be. Good question. Trying to add a little more to it: are we warm or cold? And you can see with the squares that answer was pretty easy for most years of the 21st century; we've really only had three years that fall into what we would call a cold year, including last year, a bit of a surprise based on the information that we have at this time. So let's go to the next slide and try to dial it in, make it a little more useful. All righty, so this is the past decade, and this is taking snowpack, precipitation, and
temperature, three key elements of water resource management, and looking at how each falls in the historical distribution. Does it fall into the expected category, the middle of the distribution? Not too many years fell in there; in fact only 2016, the Godzilla El Niño year, fell into that area for precipitation and snowpack. The orange squares, temperature, you see lying almost entirely in the extreme or anomalously warm category until last year, one little curveball thrown in. But we look at this in terms of resource management and planning. We know what to do if a year falls in the expected zone; our entire systems are built on that, and it's a bit of a bummer that only once in a decade we actually end up there and can have everything work out nicely. Now, when things are anomalous, we have to make some changes and adapt within the year, and we can probably do that. But when we get to extremes, we really need to look at some of those bigger response components, sometimes taking legislative action or budgetary action that takes a lot of time to negotiate, so the longer the lead time the better. So where are we going to fall, and how do we provide context? Let's go to the next slide and break it down even further. So we're going to talk about building a water year, and this is what I used this year when I talked about what happened this past water year, which was really fantastic. We started the water year potentially entering a fourth year of drought, with La Niña in place; all expectations were that that's where we would end up, and up until Christmas we were spot on, setting new records for dry conditions and low water availability. Lo and behold, after Christmas everything changed on a dime. The January atmospheric rivers provided winter: in 18 days, 86% of the seasonal snowpack accumulated, and 46% of the water-year precipitation fell in that window. Great for annual
statistics, right? But not much of it is usable when it happens that quickly. Another thing: it goes dry again for a month, and then we throw in a new twist. Let's do cold storms that are really wet, with snow where people don't necessarily like snow or aren't able to handle it, so we had people in Napa snowbound; some new challenges there. But we're not done yet, because then we follow on with more atmospheric rivers, really hitting the Tulare Lake bed region and starting the re-emergence of Tulare Lake for the first time since 1983. These storms provided so much water that the small creeks below the regulatory dams produced flows that exceeded the downstream channel capacity of the major rivers, so we're seeing things on a scale completely unlike anything we have as a historical guide; a bit of a challenge. Mike, if you could wrap up in the next minute. Yep, last slide, and we're on it. Next. So here's how I break down forecasting a water year, season by season. Looking at when we get our fall precipitation onset, which is really important and has a lot of impact on our spring runoff; warm or cold, when we get those monster heat waves or surprising cold snaps, which are important for our at-risk populations; and the soil moisture state when the snowpack sets in, because that helps guide spring runoff. Winter wet or winter dry; spring late-season bailout or early shutoff, knowing whether we got a long wet season or a short wet season is really important. And maybe forgotten is the summer: how quickly do things dry out? We take advantage of our Mediterranean climate and understand the dry season, but we need to understand how those play out, how the seasonal progression might be disrupted, and what those disruptors are: is it something anomalous or extreme? Once we get to April, we'd like to know about next year, so multi-year prediction is still important, along with understanding climate change: how much
different are things going to be relative to our history, because it's not playing out the same. That's all I have, thanks. Thanks, Mike; you really make your job sound appealing, or I guess our challenge is how we make your job easier. The last speaker in the session is Linda Hirons; I'm not sure I'm saying your last name correctly, please correct me if I'm wrong. Hi, yeah, Linda Hirons, that's fine. I'm based at the National Centre for Atmospheric Science at the University of Reading, and I'm going to talk a bit about some of what's been touched on here, but using a co-production approach to support more effective application of subseasonal forecasts, with case studies across Africa, so quite a different step here. This is all part of the Global Challenges Research Fund (GCRF) African SWIFT project, and African SWIFT had access to real-time subseasonal forecast data as part of the S2S Prediction Project real-time pilot, along with other projects as well, so I'm going to talk a bit about that. With the real-time access to subseasonal data, we were running a forecast test bed with prototype forecast products that were co-produced and operationally trialled in real time. A lot of people have touched on this, but really we're trying to understand where skill and use overlap. The co-production approach brings together different knowledge sources, experiences, and working practices to jointly address issues of shared concern, and in the weather context this really transforms the forecast user from being a recipient of information to being a participant in how that information is generated, developed, and communicated. Some of this has been touched on, but on the skill side we really want to understand why, where, and when our subseasonal forecasts have skill: do we understand the regime dependence of skill, do we understand the sources of predictability, can we say
with more certainty when particular times have skill? This is an example from a study within the project looking at the East African long rains during March-April-May, and I don't have time to go into the details, but you can see here that if you have the observed response to the MJO, then you have enhanced skill in weeks three and four. So this is really about understanding those windows of forecast opportunity that people have been talking about: understanding why, where, and when we have skill. On the other side, we need to understand what decisions users are actually trying to make: how can these forecasts be translated into things that actually support real-life decisions, and how can they build resilience in at-risk communities? Within SWIFT we had six different operational groups across these sectors, and within each group we had forecast users, forecast producers, and researchers coming together in an iterative dialogue around where the skill and the windows of opportunity are and how that can be interrogated to inform the decisions being made in these contexts. There are lots of different papers covering those studies, and you can read more about them, but here are two quick examples. Within Kenya we were working with KenGen, an energy company, to develop a bespoke forecast to support hydropower planning. This was producing precipitation forecasts to help energy planners maximize the dam levels within Kenya so that they could maximize the use of hydropower, and during this three-year forecasting test bed they were able to have unprecedented uninterrupted energy supply, so these forecasts really have key applications. Another example: working with ACMAD, a pan-African organization, and with users from the WHO, the World Health Organization, to help co-develop early warnings
for meningitis outbreaks. This was producing bespoke, multi-variable subseasonal forecasts for meningitis early warning, combining variables that we know affect meningitis with user-defined health thresholds, to help us understand which regions are more likely to have meningitis, and we were able to extend the preparedness action window by up to two weeks in some cases: really useful information being provided directly and co-produced with users. So I'm really advocating here that you need to understand both parts of the problem. You need to understand where the skill is, why, where, and when you have that skill, the regime-dependent skill and the drivers; but also how these services are actually being used to make decisions. That's where you're going to find reliable and actionable climate services: by co-producing these services with the users. Some people have touched on this already, but it has challenges, and this is the last part I want to touch on, some of the lessons we've been drawing from this. Having an iterative, ongoing dialogue with users involves building and maintaining relationships, and that is really resource intensive; it takes a lot of time and a lot of people working in ways different from what they may be used to. Following on from that, it requires capacity building of all the groups involved. It involves helping users understand what windows of opportunity mean, that there might be more certainty in a forecast at different times and on different spatial and temporal scales; but it's also about us as researchers and forecast producers understanding the context into which these forecasts are being delivered and how they're being used. And leading on from that, evaluating those forecasts: we can't just use meteorological metrics of verification to evaluate our forecasts; we need to be thinking
more broadly than that and understanding the value of these forecasts in terms of whether they're actually helping to support a decision. This links with what Andy was talking about regarding the next stage of the S2S Prediction Project moving on to Sage, where there's going to be a whole value-chain look at this, all the way from the underpinning science supporting skillful prediction through to the application by users. And so I'm going to stop there. Thank you. Yeah, thank you, Linda. Okay, we now have at least 15 minutes for questions, and I see Neil has his hand up. Neil, can we re-pin the, yeah, somebody's doing it, pin the panelists. Go ahead, Neil. Thanks, Mary. I have one general question and one very quick specific one. Just to educate me, I'm a poor atmospheric chemist, but my understanding was that the limitation on short-term forecasts, and I did learn meteorology from Ed Lorenz, was effectively a couple-of-weeks chaos time scale, where predictability kind of goes out the window on a synoptic spatial scale. First of all, I might be completely wrong, tell me if I am, but then what is the equivalent at the S2S scale? What are the spatial scales, and what are we up against in terms of some sort of chaotic envelope of the variability? Oh, and the specific question was for Mike: isn't your 21st-century shift just climate change, like the last thing you had on your slide? An answer: yes. Okay, now the first question: anybody want to take what's the chaos scale? I guess my bias is that I think all scales are chaotic: there's chaos on the largest scales as well as on the smallest scales. In fact, I think you can show from a dynamical-systems perspective that the ENSO signal exhibits chaotic behavior. Yeah, but those temporal and spatial scales are correlated, so that was what I was trying to get at, and how it applies in this S2S window. Yeah, I have to
think about it; I'm not sure. Okay, I'll take a bit of it here. We think about seasonal progression, right? There's so much forecast information you can rely on from climatology: we know how things progress from the solstice to the equinox and the equinox to the solstice in each of the four seasons, and we have geophysical elements we can start with. Then we try to look at where in the Earth system you have disruptors to that, and I think it's understanding the scale of those disruptors, where they originate, and their influence that interrupts that progression to an extent that would motivate some type of response. Okay, let's move on to another question. Yeah, thanks, Mary. This question is primarily for Linda, but others could certainly join in. I really appreciate your focusing on the co-production aspect of this, and as you know, co-production is a very big topic in terms of long-term climate change. So I'm wondering if you could comment on what can be learned about co-production across those time scales; in other words, for example, what could long-term climate change folks working with co-production learn from co-production on seasonal time scales? Yeah, thanks for your question, great question. I think there's a lot to learn, and it touches on one of the questions in the chat as well, around how you scale co-production, because it's very bespoke, very unique to a context, and it involves a relationship that you build on. But I think that by using these frameworks for co-production, and understanding, I rushed through it, but the building blocks of co-production and how you run through that iterative process of co-exploring the needs, co-developing solutions, and co-delivering those solutions, having a framework like that which you can pick up and scale to other areas and contexts is really helpful. How we learn across time scales, I think it's very different in longer-term climate, because here we were
running an operational test bed with new forecasts every week, so it was very much a moving target, but I think we can definitely learn from the different time scales and apply some of those lessons. Can I just ask a really quick follow-up question? Do you think the fact that on seasonal forecasting time scales you can actually verify the forecasts in a timely fashion, whereas for longer-term climate change you sort of can't, do you think that difference can be explored in a way that helps the longer-term climate change people? Yeah, it's an important point that you're working on time scales within which you can verify: what you said last month you can verify next month, which is really helpful, and it helps you build that database of windows of opportunity and how they apply. How you apply that to long-term climate projections is challenging. One thing we need to remember when it comes to co-producing with users is that they don't think in weather and climate time scales; they just think in planning time scales. Those sorts of questions are being asked by the energy planners in Kenya about the longer term, so it's all linked in the user's mind, I think, in some respects. Right, thank you. Yeah, so Ben mentioned the importance of systematic errors, systematic errors, systematic errors, and we know that if AI is good at anything, it is learning systematic things, patterns that systematically appear. So can I assume that there is hope to make fast progress in this intermediate area, basically post-processing on top of traditional models? Yeah, I think certainly correcting the systematic errors in the anomalies is possible, but the way I think about teleconnections in general is that they're superimposed on the mean state of the climate of the model. If that mean state is dorked up, if you will, then
the teleconnection is likely to be dorked up, and so we really do need to figure out how to make sure our tools, as the forecast evolves, have much smaller systematic errors. Okay, thank you. Joe? Yeah, thank you. I don't know why my camera is not coming on; I apologize for that. A question for Linda and perhaps others as well: you talked about the challenges, or the mission, of getting this information into the hands of decision makers so that it actually can be acted upon. This is a big question, but briefly, what kinds of strategies, techniques, and methods are being used to actually make that connection explicit, versus just putting the information out there and hoping for the best? Yeah, it's a good question. I heard somebody at the climate conference in Kigali talking about throwing a forecast to a user and hoping they catch it, and I think we can do a lot better than that. In this context it was about relationship: about having a three-year-long relationship where we were interacting with users weekly to understand the challenges as we were battling with that uncertainty together. A real challenge here is the sustainability of that, because it's a project-initiated service, right? It's linked to a research project, and that was linked to a WMO real-time pilot which was finishing, so access to that real-time data was finishing. So I think there's a real challenge there, but relationships are key to making sure that those forecasts can answer a question in context and also be co-evaluated. Hey, let me interject a question of my own here. I want to go back; I think Ben made this point, but we heard it alluded to further, I think, with Kathy: the point about test beds, where we have setups that can really help us toward evaluating the usefulness. Ben, can you comment on that a little more?
Well, I think fundamentally it's very hard for individual researchers to envision how their research is ultimately going to influence operational forecasts, and the uptake of operational forecasts, when they're not able to use the operational systems, if you will, to test ideas. So it's hard to get that community to rally behind really digging in and trying to figure out better ways of improving operational forecasts without their being able to use those systems in research, and that requires a fair amount of infrastructure and support: providing models, data, data assimilation systems, and computational infrastructure so that researchers can actually engage with the models that are used to make forecasts. Okay, and you mentioned that we've tried to do this; are efforts failing because of the resources and commitment behind them? I think the simplest answer to that is yes, we're not providing enough resources. There are examples of success, and I would say modest success: the NCAR family of models really encourages the research community to engage with their models, and the UFS development is trying, but to really do this right, to have a comprehensive strategy, we need a lot more resources. Thank you. Anybody else want to comment on that? Yeah, I just want to highlight the data infrastructure component: the ability to have these forecasts available to everyone for research use, and the availability of the real-time forecasts for applications development. All of this infrastructure does indeed require significant resources. As Linda mentioned, you can't just throw it over the wall and hope that the user or the researcher catches it, and so I just want to underscore the importance of that: a test bed is an important piece of infrastructure. Yeah, and I would comment, I think when BASC had a similar
session on kind of decadal forecasting in the spring that's one of the things that we heard there that the resources kind of required to do this actually leave us quite vulnerable that only those with the you know with the bank role can really do it so if we are interested in in seeing some equity um and addressing a full set of social issues here we're going to have to it seems to me we'd have to be making some changes so um other questions from the board and i'll be looking um in um i'm looking in the chat to see if there's anything we should be calling out there mary can i make a comment about the test that uh based on the past experience in history this is gossum asra yeah i hear you gossum please go ahead yeah thank you for your comments and bends and everybody that the the concept of test that is really not um somewhat alien to you to us then another to know that within the w m o w c r p we have tried we did try to do it and we would be succeeded by establishing this to us project which was a home and facilities and capabilities and it's it was connected to the w m o regional centers that was the idea behind connecting the research all the way to the users in the us in the early days of the earth system science um framework um we used our national capabilities to the point that ben was making about n car having the facilities and a network of universities that are the hubs of innovation and we thought that they could be the bridge between the research community and the operational centers and operational agencies such as noa navy and others it was the money that was required to glue the pieces together and back then nasa made the commitment uh to connect the dots so to speak and fortunately you know i mean it's it was up to the agencies to keep the coalition together to provide the infrastructure and and then finding the resources to to maintain them so we have done it and fortunately we abandoned it i think this you know what we have been talking about 
provides the motivation to rejuvenate it reenergize it once more so it's doable it's just a matter of bringing the the coalition of the willing together to make it happen thank you thank you okay um well we're going to draw this session to a close i would again point people towards the chat especially our speakers if you have few minutes to address any of the issues in there that would be wonderful we have now a 10 minute break uh and we'll be back at the top of the hour i think the suggestion is just mute yourself and leave your camera on and off um and um we'll be back at that point and i think amy is going to moderate this next session amy macabre i'm going to welcome everybody back glad to start if we want to switch to the slide to start our panel okay um so i'm introducing the second panel i'll be moderating all of the questions i'm looking forward to hearing from all of you i think that that first panel and the opening session brought up a lot of questions about the data driven methods and AI and machine learning methods and so i'm excited that we're going to have a panel about that so we have uh one two three four five panelists um that are each going to give um a five minute lightning talk just as we did with the last one and then do questions at the end the questions that we asked them to think about for the five minute lightning talk were how a new deep learning approach is being applied for s2s forecasting and how data driven and process based approaches to s2s forecasting are performing differently um which i think some of this has been discussed in the chat already so i'm looking forward to to all of us um and i think i forgot to say who i am i'm amy macabre i'm at the university of oklahoma um and i am going to without further ado introduce our first speaker and i forgot to ask are we going in the order that they are on that okay good i guess we're going in the order they are on that slide because i see in the superman supermanian thank you thanks amy 
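The panel's framing question, how data-driven and process-based approaches perform differently, can be illustrated with a minimal toy sketch: fit a purely data-driven lagged regression to a synthetic weekly anomaly series and score it against a climatology baseline. Everything here (the AR(1) series, the three-week lag window, the skill definition) is an illustrative assumption by the editor, not anything presented at the workshop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly anomaly series with AR(1) persistence, a crude
# stand-in for an S2S predictand such as weekly T2m anomalies.
n = 600
x = np.zeros(n)
for t in range(1, n):
    x[t] = 0.8 * x[t - 1] + rng.normal(scale=1.0)

# Data-driven forecast: least-squares regression on the three most
# recent weeks (columns are lags t-1, t-2, t-3).
lags = 3
X = np.column_stack([x[lags - k - 1 : n - k - 1] for k in range(lags)])
y = x[lags:]

split = 400  # train on the first 400 targets, verify on the rest
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)
pred = X[split:] @ coef

# Skill relative to climatology (a zero-anomaly forecast):
# 1 - MSE(model) / MSE(climatology), positive means the model wins.
mse_model = np.mean((y[split:] - pred) ** 2)
mse_clim = np.mean(y[split:] ** 2)
skill = 1.0 - mse_model / mse_clim
```

With a persistent series like this, the lagged regression beats climatology at short lead; as the effective persistence drops with lead time, that advantage shrinks toward zero, which is the weather-to-S2S skill decay the panelists keep returning to.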
Can you see my slides okay? I'm going to full screen. Yes? Great. Okay, thanks, Amy. A great set of talks so far in this workshop; I'm really enjoying it, and thanks for the opportunity to present. I'll present about two applications of machine learning, or data-driven methods, for S2S prediction, which is largely bridging the gap between weather prediction and climate forecasting. A lot has been done on data-driven or machine learning approaches for the weather timescale and the climate timescale, not as much on the S2S timescales, but there are efforts, as previous speakers have mentioned. I've categorized the current data-driven or machine learning approaches into four categories. The first one is where we completely replace the dynamical model with a machine learning based approach, so the entire prediction or forecasting is being done with a machine learning model. This is showing promise on the weather timescales, as presented by Andy and others; on the S2S timescale there is effort ongoing, both at ECMWF and other groups, with not as much promise as on the weather timescales, but these are early times for such approaches. The second one, which I'll talk more about in the upcoming slides, is causal discovery, inference, and explainable AI, where we try to discover teleconnections or interactions in the Earth system that lead to S2S predictability. The third one is post-processing of dynamical forecasts; there is ongoing work on this, and Andy also mentioned it. The fourth one is hybrid modeling, where we combine dynamical models and machine learning models.

I'll present two studies which I've been involved with and know most about. One of them is a causal discovery analysis led by Dany Do, a student here at CU Boulder. The approach uses what's called the PCMCI causal discovery framework to identify sources of predictability, or teleconnections, into the Indian summer monsoon region. I'll not go into the details of the method given the time constraint. We use geopotential height anomalies in the tropics, using six rotated PCs of weekly geopotential height, and try to identify causal links between these and the summer monsoon anomalies. Using this method we can find a causal graph that identifies links between these modes of variability and the monsoon variability. This is a graph showing the causal links from 1980 to 2001, and the bottom graph shows how these links have changed in the past two decades. We see that there is a one-week-lead monsoon-to-monsoon causal impact, as well as a western Pacific geopotential height impact on the Indian summer monsoon, which has not been shown much in the literature previously. Looking at the trend, the link between the western Pacific and the Indian summer monsoon is increasing over time, whereas the causal link between the Indian summer monsoon and itself is decreasing.

Another example is model replacement. This is work that came out of an S2S summer school that Judith Berner and I organized at ASD, where students looked at how machine learning methods can impact subseasonal prediction of temperature over North America.

You've got about one minute to wrap up, please.

Yeah, this is my last slide. They looked at how a random forest versus a neural network does in terms of week-3 temperature forecast skill, compared to a previous method led by Matt Johnson. What we see is that the neural network was able to predict week-3 temperature better than the other two methods. The other benefit is that we can use what's called layer-wise relevance propagation to identify which regions in the predictor are actually having an impact on the predictand, and we see that the tropical Pacific has a big impact in terms of predicting temperatures in the southern part of North America. So I'll end there. I'm happy to take any questions. Thanks.

I will encourage questions in the chat while we switch to our next speaker, who is Kirsten Mayer. There we go.

I just realized I need to unmute myself; that would be helpful. Okay, let's see. All right, great. Okay, so my name is Kirsten Mayer, I'm a project scientist at NCAR, and I'll get right into it. Like Aneesh did, I also divided the topic into subsections, slightly differently; I divided it into three. Discovery and knowledge: this is where science comes into data-driven methods. Performance improvement: improving our skill or our speed, which is where I see data-driven forecasts or bias correction. And trust: this idea of explainable and interpretable AI, physics-informed machine learning, and uncertainty quantification. This is not an exhaustive list, nor are they independent; I can definitely see trust falling under both discovery and knowledge and performance improvement as well, but this is a nice division so that I can tell you where I'm focused in this talk, which is discovery and knowledge. The research I'm going to present today is not trying to get the best skill or the best performance for S2S forecasting; it's trying to apply machine learning tools specifically to better understand S2S predictability, its sources, and when we may or may not have good skill, these forecasts of opportunity. So I'm going to go through, very quickly, rapid fire, some different machine learning tools and, by example, how these tools can be applied to S2S predictability to learn something about this timescale. One of them is the discard test, a method where you can remove predictions with low confidence or high uncertainty, with the idea that as the neural network becomes more
confident, it will be more skillful. We've shown in previous work that as network confidence increases, so does skill, and we can identify these higher-confidence predictions as forecasts of opportunity. We can take it further and try to identify what the neural network was actually looking at with explainable AI techniques, or XAI, where we can evaluate what the neural network thought was important in the original input to make these confident and correct forecast-of-opportunity predictions. This is just an example from one of my papers with Libby Barnes, showing that the neural network identified an MJO-like dipole structure for predictability of Z500 over the North Atlantic on subseasonal timescales. But you can also look at how predictability may change using these tools under anthropogenic warming, stratospheric aerosol injection or different climate intervention strategies, or even how subseasonal predictability may vary on decadal timescales.

Another interesting direction is using interpretable AI instead of explainable AI. This is the idea that you can actually look into your machine learning model to understand what it is doing. In particular, we are using an architecture developed in Gordon et al., where two networks are literally combined linearly to make a final prediction, so we can try to dissect the relative importance of each of the models. In this problem we're trying to address the question of which mode of variability, ENSO or the MJO, is most important for making Z500 predictions on subseasonal timescales over the North Pacific.

Lastly, I want to talk about transfer learning, combined with explainable AI techniques. Transfer learning is the idea that we can train a neural network on something that has a large amount of data, like our climate models, and then use those weights to initialize training for something with a smaller dataset, like reanalysis. This project is trying to identify whether there are differences in sources of predictability between a large ensemble and reanalysis, specifically the CESM2 Large Ensemble, by using explainable AI to explore where the neural network looked for a specific subseasonal predictability problem when trained on the CESM2 Large Ensemble compared to when retrained on observations.

This is a slide just to say that the research projects I briefly presented are not the only ones using explainable AI, interpretable AI, the discard test, and so on to explore subseasonal predictability. With that, I'll end on a positive note: machine learning provides many opportunities to learn from large amounts of data to advance our understanding, or even representation, of our system. Moving forward, I think we should continue to utilize methods from the machine learning literature in creative ways to identify and understand sources of subseasonal predictability, this forecast-of-opportunity idea that has been talked about a lot today, both for today's climate and under future climates. Where we need to improve is that we need to explore and document the limits of these data-driven methods for the subseasonal timescale. Machine learning can only learn from what it sees, and I think that's something very important to remember as we continue talking about data-driven methods. Thank you.

Thank you, and thank you for ending on time; I didn't have to break in. Our next speaker is Maria Molina from the University of Maryland.

Thank you, I'll go ahead and share my screen. All good? Yep. Hello everybody, my name is Maria Molina, from the University of Maryland in College Park, and I'll be talking about machine learning for Earth system prediction and predictability, where we consider our
inherent limit or potential for skillful prediction. It was interesting to see how Aneesh and Kirsten split up their thinking on machine learning for Earth system prediction, especially on the subseasonal to seasonal timescales. I think about things a little bit differently: I like to split things into three different stages when we're thinking about traditional numerical weather prediction, climate modeling, or S2S. We have an input stage, a running-the-model stage (this being a numerical model), and an output stage, and I'll give examples of where machine learning can fit neatly into each one.

For the input stage, I'll share some work currently led by a PhD student in my group at Maryland, where they are taking observations related to ENSO and looking at precursors to the onset of such events. Here we're looking at times before the springtime season and seeing that we have westerly wind bursts that take effect and then continue to intensify over the spring and summer seasons. Essentially what we're doing is coming up with hypothesized physical drivers that lead to El Niño onset, framing an experiment using machine learning to identify potential ways they can lead to correct or incorrect predictions, and then comparing this to numerical models and seeing how they relate.

Another way we're using machine learning is in the running-the-model stage, where we are working on improving representations of physical parameterizations. One that is quite difficult is lightning activity, so we are taking the NASA GEOS model, taking a replay simulation (this is strongly constrained to observations), training a neural network to predict lightning activity as observed by the Geostationary Lightning Mapper, and then aiming to replace that parameterization with the machine learning based one. Another item of note is that while neural networks can be quite difficult to stick back into a numerical model, we're also considering symbolic regression, where we can do equation discovery using, for example, genetic algorithms; that would be a bit simpler a way to do the replacement.

Finally, we can also apply machine learning to the output stage. It was mentioned earlier that one potential use case of machine learning for S2S is bias-correcting the output from a traditional numerical simulation. We are actively doing that, and we are finding that we can indeed do it and gain some skill, but I want to point out that it's not as simple as just using any metric for skill: what counts as skill can vary depending on the community you are speaking to and the stakeholders' respective interests, as was also mentioned earlier. So what we are exploring is generating uncertainty around what a good bias-corrected forecast is, using a Pareto frontier and the intersection of different, potentially competing, objectives or metrics.

And finally, we can of course use machine learning to skip the numerical running-the-model stage altogether, where we just take an input and train a neural network to generate some output, and we can also frame experiments to learn more about the Earth system, not just purely aiming to gain more skill. One example of an experiment we're currently conducting is led by a PhD student in my group named Gyron, who is taking variables from the Community Earth System Model (an S2S initialized hindcast) and training machine learning models to explore which of the different Earth system components yields more skill for weather regime classifications. He is finding results that emerge in line with our traditional understanding, where we know that the atmosphere generally provides the most skill at earlier lead times, and then we start to get more skill from the land and ocean components later on. However, there have been some surprises, specifically when you split this up by season, where we start to see changes in where most of the predictability comes from. I won't steal his thunder, so I'll let him talk about that coming up at AMS.

Lastly, I just wanted to share that we can also generate indices that are potentially more useful to end users or stakeholders with specific applications. In this case we have Hannah, who is generating an index using an autoencoder: feeding in various variables through a very strong bottleneck, where a single neuron has to learn the most relevant information from the input variables and then reconstruct those images. This single node can serve as an index for the area of interest and the physical variables of interest, and it turns out that can actually be quite helpful when we're trying to link together different physical processes. With that I will end. Again, I hope I convinced you that machine learning can be quite useful for the different stages of the numerical modeling workflow, and also for simply skipping the running-the-model stage altogether, going from an input straight to an output. Thank you.

Thank you, and we're doing great on time; y'all are awesome. Dale Durran is going to be our next panelist, and I think Dale, and actually our panelist after Dale, are both going to talk about some of this from the perspective of academia as well as private industry. Are you there, Dale?

Yes, I am.

Okay, I was waiting for your picture to appear here.

Yes, there you are. Well, let's get this up. Okay, so I'm Dale Durran at the University of Washington. I'm going to talk to you about model replacement in a slightly different context than what's been in the news with Science and Nature, namely parsimonious deep
learning weather prediction. The question I'd like us to start with is: how many predictable degrees of freedom does the atmosphere actually have? Surely one thing we know is that this number must decrease with increasing forecast lead time. So when it comes to trying to predict these degrees of freedom in a particular part of the atmosphere with machine learning, we need to be aware that we can choose our prognostic variables and spatial resolution for completely different reasons than in NWP, and I think this is really important. I'm going to put up a little table here about the amount of data that goes into forecasts, where a spherical shell of data is one variable on one atmospheric level: ECMWF has 820 of them in their IFS, GraphCast has 227, NVIDIA's SFNO 73, 74 for ECMWF's AIFS, 69 for Pangu-Weather, and 7 in what I'm going to show you here. So I'm going to argue there's a lot of gap between 7 and 70 and 227, and we need to explore this and think about things a little bit more.

What we have here is a quick rundown of a couple of aspects of our model. It uses convolutional neural nets, and we are using seven prognostic variables, including 500-hPa height, two-meter temperature, and total column water vapor. We have three prescribed fields, including top-of-atmosphere incoming solar radiation, which varies continuously as a function of time of day, time of year, and position. And we are using the HEALPix mesh, a wonderful mesh from astronomy that we don't use enough in atmospheric science; we should use it almost everywhere. I'm going to show you results at 110-kilometer uniform grid spacing, so about a one-degree resolution, but uniform over the globe.

Okay, the first thing I want to point out is that the model we now have does a pretty nice job in short-term deterministic forecasts. It does not beat the IFS; it's about a day behind at one-week forecast lead time in both RMSE and ACC. But this is a forecast of surface pressure, 500-hPa height actually, and 850-hPa temperature, for a low pressure system over the central US: not bad. In contrast to many of these other models that have gotten a lot of press, this simpler system has the ability to roll out long-term forecasts without a problem, without getting smooth, without losing significant amplitude. So here's a 1442-step simulation, which I picked instead of an even one year because it has a nice low pressure system over the North Pacific (there's Alaska up here, and so on). The actual verification of course does not match at 365 days, but you can see, characteristically, we have 500-hPa heights (the color-contoured field) that are reasonable, as well as Z1000 pressure contours in black that are again reasonable after a whole year of simulation. So this is a model you can roll out further in time than many of the current ones.

The other thing that's really important, which I don't have a lot of time to talk about here, is that we can diagnose, for example, precipitation, and do many of the kinds of things that are handed over to parameterizations in NWP, surprisingly effectively. At this 110-kilometer scale we're taking those same seven variables listed on the earlier slide and diagnosing precipitation from the ERA5 dataset. Of course ERA5's own precipitation is somewhat suspect (it's not a convection-resolving model, and so on), but notice that here, in tropical convection over the Indian Ocean, we've got a pretty good diagnosis using data that is very coarsely resolved. So it's not entirely clear we even need small scales to get the convection right in the machine learning world.

Finally, for my last slide, I'd like to point out that, in contrast to some of the stuff we've heard most recently, we were actually roughly on par with the IFS in terms of skill levels back in 2021 with our older model, not as good as the one we're talking about now; I'm trying to redo these simulations, but we didn't have time to update this figure. This is a graph of CRPS, a probabilistic skill score for ensembles; lower is better, and CRPS penalizes both bad means and wide spreads. This is the global annual average result comparing ECMWF's 51-member S2S ensemble from the IFS with a grand ensemble we had of 320 members, in green; there's also persistence in pink and climatology in gray. We're beating persistence and climatology, as is the IFS, but the interesting thing here is that at week-4 and weeks 5-6 lead times we're equal to them in CRPS. This again is an old result; we think our new model will already do better, and we're working hard to expand this to a full Earth system model with sea surface temperature and things like that in it, and then we'll redo this. But we think we're already at a point where, arguably, we have, or are close to, model replacement for forecasting on the S2S timescales anyway. So that's all I have. Thank you very much.

Thank you. We will go to our last speaker, and I want to encourage people to keep putting questions in the chat, because after our last speaker we will start taking questions from the chat and hands up in the air. And with that, here's Jason Furtado.

Hey, thanks a lot, Amy, and thank you everybody for having me today. So as Amy alluded to, I'm going to wear two hats today. I'm an associate professor of meteorology at the University of Oklahoma who does a lot of this S2S stuff, but today I'm going to speak a little bit more from the private sector side, and specifically about a very specific company, Salient Predictions, on whose board of advisors I sit. So, there we go. Okay, just a little bit about Salient. Salient Predictions essentially produces S2S forecasts, really focusing a lot on ocean variables as the driver of
their data-driven models, and specifically salinity: the idea that evaporation over the ocean drives changes in the hydrologic cycle, which can change atmospheric patterns, which affect precipitation but also temperature as well. A lot of this grew out of research that started at Woods Hole, so there are numerous peer-reviewed publications; it really started from that first, and then a proof of concept was done later during a Subseasonal Climate Forecast Rodeo that happened here in the U.S. a few years ago. That was really a case where they were taking that knowledge and applying it through a machine learning application to produce accurate S2S forecasts for temperature and precipitation. Now that the company has formed, they're producing a lot of different forecasts, but one of the key things is that the production of these data-driven model forecasts is really industry specific, which brings another challenge. It's not so much just getting out a temperature or precipitation value or probability; it's actually creating client-customized forecasts. People in agriculture, for example, have very different output needs than, say, someone in supply chain or the financial sectors.

Just a little bit about their models, at a high-level overview. It's a probabilistic forecast model, it's quantile based, and it's been de-biased and calibrated for reliability. Forecasts go out on the subseasonal scale, starting at about week 2 all the way to about week 8, and then there are also extended forecasts beyond that, going out monthly and semi-annually, all the way to 52 weeks. Several accuracy metrics are made available to all of the clients, both probabilistic and deterministic forms, and there are also classification or categorical scores provided, again depending on the user. I also did want to point out that the data-driven model is being updated constantly; right now they're on version 7.1. This is one of the nice things about data-driven models: there's a rapid cycle of experimentation and a rapid way of getting out newer models. That could also be seen as a challenge as new data comes in, but it's a nice facet of these data-driven S2S forecast models. The other thing is that all of the comparisons between the Salient model and other S2S models out there are transparent. They're all given as scores, for the other models versus the Salient model, to see which ones are performing well at different times. That's an important aspect as well, for the end user to see exactly how the Salient model scores compared to these other ones.

Okay, so getting back to those core questions: the advantages and challenges of using this data-driven model. Among the advantages, there's little cost of inference to get these predictions, and there are flexible model outputs; again, this gets at the idea that we have to gear these forecasts to specific sectors and industries, and the model is designed to have outputs that are very flexible for those different industries. It's conducive to probabilistic approaches, it's very nice to use in a cloud-native compute framework, which is of course the way things are going nowadays, and with open source packages as well. And, as I mentioned, there's a fast update cycle, so lots of experimentation can be done rather quickly. The challenges are some of the things we've talked about in this panel and the previous one. Explainability: getting away from this idea of machine learning as a black box and really getting into why the prediction is coming out the way it is. There's also a really complex data pipeline: all sources of data (ocean, atmosphere, land), all different formats, all different timescales and spatial scales. There is a high cost to the actual training that is done. And while probabilistic forecasts are great, there's a learning curve and an education that has to be done as well, even with certain customers, so bridging that gap is a really important part of getting these subseasonal forecasts out to use. The last thing is that one of the challenges we always have with a data-driven model is that extreme events are very difficult to predict, because by definition they don't occur a lot in the training data. So getting our subseasonal models to predict those extreme events is a challenge currently being undertaken by people in industry and also in academia.

The last slide I'm going to present is how I see all of these things coming together. We've talked a lot about data-driven approaches in these talks, and we've talked about physics-driven models a lot, and what I see is that there really has to be a union of these two things. But what I really like about this picture (by the way, this is from a paper by Polario et al. in 2019) is that we have that third circle: knowledge driven. This gets at the idea of co-production, leaning on and turning to our users and stakeholders to actually get their input. There's knowledge to be gained from that; there are experts in other areas, maybe not in the atmospheric science or forecasting community, whose knowledge we could impart into improving both our physics- and data-driven models. So
really, the key here is that we want to get to this sweet spot right in between, where all three of the circles intersect, and that's how I see the future of all of our subseasonal forecasting models going. So with that I will turn it back over to our moderator and take any questions. Thank you.

That was a great ending; I like that diagram, so it was great to put you at the end. Thank you. If we can get all the panelists back up: turn your cameras back on, everybody who's a panelist, and I think they will pin you to the screen here. While we're waiting on that, I'll be watching for hands. If you want to put your hand up, we're doing this the same way we did the last round of questions, where the BASC board members get priority on the hands up, but we're still answering questions in general. All right, I think we got everybody, and I don't see any hands up there yet, so maybe we'll take one of the questions out of the chat to get us going. Hold on, that's what I'm supposed to be doing; I'm reading the chat and looking and listening to all of you, and that meant that I did not pick a question right out of there. And we have a hand up now, so we'll go for that; it wasn't there when I started looking at the chat. Sanjeev, do you want to ask a question?

Yeah. I saw that all the panelists made very good presentations, so I'm very happy with that. My question is: are you over-promising with machine learning? You talked about how it can discover new knowledge. Are you trying to say that ChatGPT is going to discover the knowledge, or are you trying to say that ChatGPT is bringing accessibility, making that knowledge more accessible? As a panel, do you want to consider moderating or redesigning the question that you're trying to ask of the AI, so it can better serve the community, instead of over-promising?

Who wants to take that? I'm going to call on one of you if no one takes it.

Come on. Can ChatGPT give you new knowledge, or create new knowledge for you? If not, then how do you expect machine learning to create new climate science for you?

Maria, I'm going to call on you, because you and I are doing a panel together at AMS on deep learning, so that's a way to put you on the spot.

Thanks, Amy. Can ChatGPT discover? Yeah, that's a great question. I will say Amy and I have had some discussions on ChatGPT and using it for asking questions. I am not relying on ChatGPT to help me formulate my experiments right now, but the way we're using machine learning in my group is really just to help answer questions that we have. We frame an experiment where we ask, for example, which Earth system component can provide the most skill for a certain large-scale pattern over North America, and then we continue exploring and asking new questions in that way. I think, as Kirsten mentioned earlier, she was sharing work that wasn't really focused so much on improving skill at the subseasonal lead times but really on using machine learning in a way that lets us answer new questions. So we hope there can be knowledge discovery; of course, that's very much with a lot of human intervention, since we are the ones formulating the experiments, but we'll see how things evolve as we go.

Thank you. I see Dale had his hand up; go ahead, Dale.

I have one more answer. I think Maria did a great job answering that; I'd just like to add one thing, which is that if we're really talking about model replacement, a lot of the kind of science we might do with model replacement is not so different than if you're using the CESM, because the CESM has a lot of opaque parameterizations and interactions in it, and you can't really figure out what's going on, so you need to use a hierarchy of models that are progressively simpler to try to understand what's going on. Potentially we can do that with a model that's really a machine learning model all the way: we go back through a hierarchy of more understandable models and we get somewhere. The only difference I'd argue between these two approaches is that with the equation-based model you can do budget studies, which usually don't close anyway unless it's a first-order quantity, but you do lose the budget-study ability without the equations in a machine learning replacement model. The thing you gain in a machine learning replacement model is backpropagation: the ability to get sensitivities of the initial state with respect to all kinds of differentiable cost functions. So there's a trade-off here, and actually, in terms of model replacement at least, I think there's no reason to believe that we're going to be in a position to do worse science, or that it will be more difficult, or that we're not going to be able to understand things like we can today with the CESM.

Before I ask a question of mine, I'm going to throw one more answer in there, just because it is National Academies related. There was a workshop a month-ish ago, run not by BASC but by the computer science board, and they had a lot of discussions about how we can use AI to help do scientific discovery. The recordings of all the talks are online, and I think that might also help answer some of the questions. I gave a talk there about how we use it for meteorology, but I was the only meteorologist, so you'll have to find the talks. Okay, I want to ask one of the questions that came up in the chat. Bruce Crawford asked a question that I think is an important one, about all these different teams that are working on these AI models. His question specifically says that the US and China are competing head to head in the S2S AI/ML space; can you please compare and contrast
any differences in approaches or are both teams following the same general strategy I'm not going to throw that to anybody in specific I'm going to let and see if you volunteer if not I'm going to make you come up with something anybody want to volunteer to answer that and I can start go okay I think it's not just us in China but um definitely in terms of the weather prediction pangu weather has come out of a chinese group um whereas the graphcast and uh forecast net are from US groups ECMWF um which is a european organization largely european organization has their own machine learning model for weather prediction and these groups are using different approaches but still under the umbrella of machine learning um AI largely and they're all showing similar scale and some things one of the models are doing better than the others right um but I think competition is good um in general um from teams around the world um and going forward we see um hybridization of these approaches some of these models are being made open source like the one from nvidia um has been made open source um so that will generate more research on those models right so making them open source is a big um plus I would say for academically research and progressing on this front next anybody else want to say anything okay we will move to hands um I don't know if diponja diponjana I don't know I probably pronounced that wrong and I'm sorry but your hand is next yeah I'm abduble yes we hear you now thank you ami I was present in your uh that particular uh talk it was great on AI and climate and weather modeling and to be very honest that our expectation and ambitions are expanding with applications of AI and ML and so my question to all the panel of course I also like to pay my gratitude to all of you for this beautiful session that now we are also getting the models like future three which is predicting the flood management and human up to the level of human mobility so how this poor climate science 
and art science modeling and the this application oriented models like where the floods are being protected uh projected along with how the humans will mobility that is captured the behavioral sense it is kind of an end-to-end uh approach can be is being thought under the scope are being taken by any of you cumulus has appeared and I saw that but cumulus is not going to answer the question huh Jason you haven't answered the question yet I'm gonna throw it to you first hi hi I'm yeah I want to try to I was thinking of any I'm trying to think of something as as the question was being asked um so if I how do can can you just can you just rephrase the question really quickly because I think I might have lost I know it's a climate to the human um like climate and weather to the human impact part is that sort of yeah it is actually I was asking that if uh this whole integration that's starting from absolutely uh core climate science modeling to up to the level of uh decision making an impact to the human mobility how the I mean people are migrating due to climate change this end-to-end integration is possible under the gamut of uh application of AI and ML along with the climate sense uh conventional modeling okay yeah so um so my my own personal philosophy on that is that I don't think first I don't think there's a one size fits all to this kind of thing so there's not going to be one application of the climate weather models to one end result for different human applications or for different end users so again I think that there has to be this notion that we have to have we have to think very openly and very flexibly about how we apply our different models um not necessarily you know the you know not necessarily core variables from the atmosphere ocean or land but actually what what are we getting out of the model what do we want from the model how are we post-processing that data I think all of that has to be very unique um in terms of what what we're after and so I 
think during the panel there was you know several examples of different applications or different specific topics that people were going after and I think that that's probably the way that we want to move forward so it's not a one size fits all is it possible I think for different applications yes but again I don't think it's a one size fits all kind of approach to this Maria you want to take a short answer and then we're going to jump to another question in the chat thank you yes I just wanted to add that I think there's a real opportunity for us here where we're seeing all the creative ways that we can use machine learning and and like was just mentioned where it's very a flexible tool and so that gives us an opportunity to connect more with the humans that we're trying to serve at the end of the day and you know rewriting the way that we're using our metrics and our loss functions and training our networks so that we're ultimately actually getting the thing that we are hoping to get for to deliver to a stakeholder so instead of just focusing on the skill of temperature and precipitation for example maybe there is some other subsequent thing that that human needs and and we have this opportunity now to use these tools so I think that's something I'm really excited about as we're moving forward and I'm going to use that human connection to jump us to the question that's in the chat that I really like that comes from Eleanor about the co-production so connecting some of this with the co-production that we talked about in the previous session how much co-production is being done with data-driven S2S forecasting methods and how do you think users respond to this and versus the traditional methods and that may be well that's a question for anybody who's working with end users I think if the answer is no one is then that might answer the question in a way we don't want well I can kind of I can maybe go after that so I think that this gets so the idea about sort of the 
response really has to do with this with this idea that I talked about and others have about trust and sort of the black box nature so again it's this idea that we have this you know machine learning model and we put a we shove a bunch of data in it and it comes out with these answers and they look like that they're correct but it's much it's not just being correct so I think that's another part we have to think about it's not so much that the forecast is correct it's that it's usable and it's trustworthy so how reliable is it and then do you know the the workings on the inside of the box and so I think that that's what a lot of things a lot of different areas are starting to do Kirsten talked some of it about the explainable AI stuff but there's there's a lot of other ways to make things transparent and I think that's a huge factor especially from you know any kind of application at the private sector or even the government level is that you have to be transparent in the metrics transparent in the methods etc and the more transparent you are you know the more trust you can build in that going going forward I would say also I think in the weather community they've been looking at perceptions about machine learning and AI just in general and trying to understand just based on that how people may respond to your specific application types of models as well but I don't work in this space but that's something I've heard before well now quickly add that something we're starting to work on here in Maryland is the idea of taking social science data and integrating that with our physical science data and this isn't so much on the S2S timescale more on the short term weather timescales but but just another example how much more flexible or flexibility we have as a result of machine learning to take these different types of data sets and fuse them together and and see the impact on different communities. Anish. 
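The transparency point made here, that users need some view into "the workings on the inside of the box," can be illustrated with a minimal sketch. This is not any panelist's actual method: it is a generic, model-agnostic sensitivity probe (finite differences on a toy stand-in model with random weights) of the kind that underlies many explainability tools, requiring only forward evaluations of the black box.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained data-driven forecast model: a fixed
# nonlinear map from a flattened input state (20 values) to a scalar
# forecast. The weights are random; nothing here is a real model.
W1 = rng.normal(size=(8, 20))
W2 = rng.normal(size=(1, 8))

def forecast(x):
    return (W2 @ np.tanh(W1 @ x)).item()

def sensitivity(model, x, eps=1e-4):
    """Finite-difference sensitivity of the forecast to each input.

    Model-agnostic: it needs only forward evaluations, so it works
    even when the model's internals or gradients are unavailable.
    """
    base = model(x)
    sens = np.empty_like(x)
    for i in range(x.size):
        xp = x.copy()
        xp[i] += eps
        sens[i] = (model(xp) - base) / eps
    return sens

x0 = rng.normal(size=20)           # one input state
s = sensitivity(forecast, x0)
top3 = np.argsort(-np.abs(s))[:3]  # inputs the forecast leans on most
print(top3, s[top3])
```

For a differentiable network one would use backpropagation instead, as Dale notes above; the finite-difference version is shown only because it treats the model as a pure black box.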
Yeah, just quickly: another framework being used currently in Europe is Destination Earth, also called the digital Earth framework, within the WMO framework. It is integrating our forecasting models and forecasting ability with the users who use the data to make decisions: how do we make an end-to-end framework, or machinery, that produces forecasts that feed into decision making, which can then be used in societally relevant ways? It'll be exciting to see what comes out of Destination Earth, especially on the S2S front, and there are also similar efforts in the US. Thanks.

I'm going to ask a final question, well, I don't know if it's the final question, we'll get to Bruce last, but I'm going to ask a question for everybody, and it's a combined question: where do you think we need investment to make data-driven S2S forecasting continue to proceed? And related to that, if you want to address public-private partnership, because I think they are related, I'd love to hear that as well. I'll go around in the order in which the squares are on my screen, so Jason's first; sorry, you're the upper left on my screen.

That's fine, no worries. So where do I see that we need to invest a little bit more? I want to come at this from a different angle. I think we need to really think about the workforce development stage and how we are training our atmospheric scientists to enter this new realm going forward, because this is not going away; it's only going to expand, as was discussed, and it's exploding. How do we train our new workforce? I think there has to be a lot of investment in that, and that takes money, it takes people, it takes a lot of different things. And then on the public-private side, I'll just quickly say that I think there need to be more of those sorts of collaborations happening, a lot more than are happening now. It's good to see some of it on this panel, and others starting to do that, but I think it needs to be done at bigger scales.

Dale, you are the next square on my screen.

Okay. I certainly agree we have to look into training the next generation; I totally think that's a good point to bring up. This is not going away, and indeed we need people conversant with this tool; it's a really important thing. I also feel an important aspect of public-private partnerships is the chance to get not just the computing resources but even the software development resources that are very hard to afford at a university. Some of the top programmers at the places that have developed GraphCast and FourCastNet and the Huawei Pangu: it's hard to pay those kinds of people at a university. So I think an important aspect of public-private and public-university cooperation is to get the companies involved and to have a real good give and take, still open access hopefully, but a good give and take.

Aneesh, you're my next square.

Thanks, Amy. I definitely think the biggest investment should be in our future, in education on this front, training the next set of graduate students and so on. In terms of public-private partnership, the private sector has its own motivations, but if at least some of what it develops can be made open source, that's definitely a way the academic world can engage with it and push the research forward; that would benefit both. The third quick point I'll make is that we should not stop supporting our dynamical models, dynamical understanding, and scientific knowledge expansion. That's fundamental to generating some of these machine learning models: without a good reanalysis, without good observations of the earth system, without a good model, we will not be able to make a good machine learning model. So we should continue that. Thanks.

Maria, or Cumulus; Cumulus is very engaged at this hour, apparently.

I totally agree with the previous comments, so I'll add something new and say that if I could pick something to invest in, it would be a way to synthesize all of these rapid advances: some sort of ChatGPT-like "scientist GPT" that can synthesize all of this information that is coming at us very fast. We're having to keep up now with not only our earth system science advances but also computer science advances, and also thinking about societal implications, and of course our changing climate. So yes, having some help from artificial intelligence to synthesize all of these advances, and to help us figure out perhaps the most impactful work we can do moving forward. That's all, thanks.

Okay, Kirsten, you are my last square.

Great. I of course agree with everything everyone said already about training the workforce. I want to second Aneesh on continuing to support our dynamical models and making sure that they actually simulate our earth system well: if we're going to train our models on data, we need to have good data, so I wanted to second that. Secondly, I think we need to focus on forecasts of opportunity; I think they're currently under-represented, and I think they can be very useful on subseasonal time scales. And then, more space to discuss lessons learned from machine learning: I often see successful applications of machine learning presented, but I think there needs to be a space for the cases where the machine learning doesn't work out, what you learned from that, and how we can move forward based on that. So that is the addition I will add.

Okay, and I didn't mean to ignore Brian's hand; it's been up for a very long time. That was a really good ending, but I'm going to let Brian ask the very last question. Brian, it had better be really fast, because we're over time. The last question:

What I was going to ask is that a lot of what I hear people talking about in this space, including all of you, involves the concept of forecasts of opportunity and also explainable AI, or explainable data-driven methods, and I'm just wondering how much overlap you see between those two, or how distinct you see those two concepts from each other.

I can take that one.

Go, Kirsten.

I definitely see them as very related, in that we can use explainable machine learning to actually explore forecasts of opportunity. I think machine learning provides an interesting avenue because you don't necessarily need to define MJO or ENSO indices to put into your neural network; you can give it just general fields and have it identify forecasts of opportunity for you, and then you can use these explainable AI techniques to explore what the neural network may think is important for predictability. So I definitely think there's tons of overlap between these two things.

Okay, Maria, you're going to be our final comment, because then we've got to hit our break.

Super quick: we have not discussed the importance of causality, and so ensuring that the signals we are uncovering are truly causal, not just correlated. That's all, thanks.

Correlation does not equal causation; that is an excellent note to end on. Thank you all, you were awesome, and if you want to keep asking things, put them in the chat, but I want to give everybody a chance to stretch their legs, because Zoom fatigue is real, and we'll be back in, I think, five minutes.

Okay, I think we will go ahead and get started. Back on is Brad Coleman with BASC.
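Before the next session: the forecasts-of-opportunity idea discussed at the close of the panel, flagging the subset of cases where a model is confident and verifying that skill is higher there, can be made concrete with a toy verification exercise. All data below are synthetic (forecasts are reliable by construction), and this is a generic sketch, not any panelist's actual workflow.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic verification set: probabilistic forecasts of a binary event
# (say, an upper-tercile temperature anomaly) and observed outcomes.
# Outcomes are drawn from the forecast probabilities, so the forecasts
# are reliable by construction.
p = rng.uniform(0.05, 0.95, size=5000)   # forecast probability of event
y = rng.uniform(size=p.size) < p         # observed outcome (True/False)

confidence = np.abs(p - 0.5)             # distance from a 50/50 call

def brier(prob, obs):
    """Brier score: mean squared error of the probability forecast."""
    return float(np.mean((prob - obs) ** 2))

# Score all forecasts, then only the "forecasts of opportunity":
# the most confident 20% of cases.
threshold = np.quantile(confidence, 0.8)
opp = confidence >= threshold
print("all cases:  ", round(brier(p, y), 3))
print("opportunity:", round(brier(p[opp], y[opp]), 3))
```

The confident subset scores a lower (better) Brier score, which is the operational content of a forecast of opportunity: knowing in advance which subset of forecasts to trust more. In practice the confidence measure would come from the network itself, and the explainable-AI step would then ask what fields made those cases predictable.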
In putting today's sessions together we wanted to hear from practitioners to pioneers, and from those working on weather time scales to those working on seasonal time scales, and after sitting through these sessions I think we've accomplished that in spades: really impressive discussions and fascinating participation; we appreciate it all. In this last sub-session we'll hear from four panelists who nicely cross-cut that mix. We have 45 minutes, four panelists, and the same format, so we'll go through four lightning talks; a couple of the panelists have some slides. The theme, looking across the timescale, is how we bring in what we know from the long and rich history of weather forecasting: improving weather forecasts, making them more and more valuable, expanding the stakeholder investment, and asking where the lessons learned are. From the discussion today, I think there are lessons that could be learned both ways. So the questions we presented to our panelists were: what opportunities are there to leverage the strengths of, and better coordinate, the climate and weather communities for S2S forecasting? What are the major barriers, and how could these be overcome? And how can interdisciplinary coordination be fostered and incentivized to promote shared goals? Our first presenter is Dave Novak from the Weather Prediction Center, NOAA National Weather Service. It's all yours, Dave.
Okay excellent uh terrific just getting the slides up here uh just confirming you're able to see my slides got it okay wonderful uh just a terrific and uh stimulating discussion the last couple of hours here so really appreciate that I'm your weather guy I I'm I'm weather through and through and I also care about precipitation uh leading the Weather Prediction Center we focus on precipitation forecasts and so this figure really spoke to me it's a figure from Bartodale from 2016 and it highlights again it's not weather it's not subseasonal it's not climate if you will but there are these gray areas it's it's a continuum if you will and I really like this figure in in sense that it highlights where we have current skill and perhaps uh where that's just unpredictable beyond the science if you look across the bottom here and that question perhaps a chaos but most importantly here is this user needs area that's in the orange and this is really the sweet spot where you probably could move the current skill just a little bit to the right and address numerous uh different aspects and some of this even if you look on the small scale some of this even uh relays back into the weather timescale right you know if you're trying to get uh and a mayor is trying to answer how hard it's going to rain in their particular city say five six days out um so uh I maybe it's my bias here but I I like to focus on that day seven to the 14 timescale uh because that is where there is some scale with the weather and perhaps if we extend that think about weather on out um we can really start to really foster the collaboration amongst the weather and climate communities I I know NCAR was testing a convection allowing model going out to eight days just to see what would happen if you ran very fine resolution models uh further out Noah's starting to do that as well um in the weather space we think about clustering and uh that these applications are ways of visualizing uh the weather may be 
applicable to to longer scales as well um so slide two here you know the other other aspect coming to you as a weather whether uh biased person level put it that way um you know I think about s2s is this time period where you have you're talking about the frequency and intensity of weather events and uh and so this is a nice paper recent paper looking at the atmospheric river problem and kind of relating this back to you know these wet and dry periods that Mike Anderson was talking about for California well you can relate that certainly to certain uh you know specific individual synoptic systems and particularly the large scale flow that sets up an environment to have more and more or set up the stage where you have these frequent synoptic scale systems coming into the western united states and they had a nice example here clustering that in the in the upper right here um so again there may be ways to kind of couch the sub seasonal problem in terms of this uh kind of frequency and intensity of these of these weather events uh is another aspect here as both it's both weather and climate I guess is part of the part of the point on the integration aspect I Noah has really taken this to heart uh we have an earth systems integration board that's been trying to really foster collaboration across the line offices one of the key projects is this precipitation prediction grand challenge uh we talked about the biases that in these different models that's one of our rallying cries is to uh address these systematic biases through time to improve precipitation skill from hours to weeks to months to decades through development application of a fully uh fully coupled earth system prediction model so I do think there are uh integration activities that are ongoing and we can speak more about these but this is a nice example I think that cuts at um both across scales uh one of the hardest predictors we've we've talked about precipitation you'll note some of the ai fields are not yet 
there with precipitation I'd be uh be interested in talking about that and thinking about that problem in the s2s time frame um and I think that again that collaboration can help uh spark some of these interdisciplinary discussions and then um Dave DeWitt I think we'll speak next but we've really been growing an operational partnership here between the weather prediction center and the climate prediction center uh one of these kind of neat examples of seamless service across these timescales has been heat key messages so you think about the summer we just had these incredible heat waves on the upper left here we worked uh this was the climate prediction center June 7th highlighting you know getting the word out if you bill on this upcoming probabilities for really a historic event in the middle here there's kind of this handoff between climate and weather um and on the right here is a weather graphic that includes both weather and climate information here in the lower right uh so example here we have both the weather aspects here in the climate aspects and there's this internal collaboration that's ongoing through different various tools so I think this is a budding area uh this is particularly in that that day seven eight nine ten to week two uh time frame here but I think that is an area of collaboration that perhaps we could really leverage and work together as both weather and climate so I'll stop there super thanks very much Dave now going out Noah's timescale a bit we'll be moving on to Dave Duit uh from the climate prediction center great and I apologize I'm having trouble with the camera so if you could go ahead and advance the slide I'd appreciate it thanks and if you could just remind me how much time I have I'll let you know when you have about a minute to go yep but total Brad five minutes six minutes yeah about five yep okay great yeah so thanks for the opportunity to speak I've enjoyed the talk so far so an important point to make which sometimes 
people might not appreciate is that when they pass the weather bill which I guess was around 2017 they defined weather with respect to NOAA products and services out to two years so when we talk sub-seasonal to seasonal inside NOAA that is considered weather we all know that it has climate aspects but just for definitional purposes I'll say for those who might be disappointed that I'm not going to speak about systematic errors I'm glad that Ben foot stomped it and the importance of addressing those the only point I would make on that is that I think that we need to address systematic errors from the weather time scale out as these errors onset quickly and in order to maximize our chances to get to root cause we need to isolate them as early as possible in their evolution so I'm going to take a slightly different slant than you might expect in the beginning I'm going to caveat that by indicating up front when I talk about programmatic aspects and this goes to the barriers question I'm going to speak about my personal opinion it's not an agency view and I do want to recognize all the great work many colleagues for many decades here on the meeting and many more recent colleagues a lot of great progress has been made in improving sub-seasonal seasonal prediction I think our progress could be greater and faster if we changed our funding priorities and we will be an amorphous larger governing body whoever that may be so in particular I think that while transition is great and when things are ready to transition we should focus it on on transition but I also think we need a balanced funding portfolio that provides explicit funding for some higher risk higher reward research that's operationally focused but may not result in a transition in a year or two years or maybe even three or five years right but sending the seeds for important understanding to ultimately make greater progress down the line so you know two areas where I think that's important are deep dive 
diagnostics and predictability studies and I'll give a couple of quick examples of those so with respect to tying weather to climate it's important to remember that improvements made in the fidelity of simulating S2S variability and the dynamical models are going to translate into improved fidelity and monitoring modeling excuse me key processes in the climate change models I think that when we're developing prediction tools we need to focus on events that were not predicted well and unfortunately the last 10 years or so give a fairly large number of those on the S2S timescale I look at those as science challenges and in particular you could look at the winners of 2014 to 2017 2022 2023 and then the 2017 flash drought I think we need to recognize that over the last few years let's call it 10 to 15 and the ENSO events that you know should have been the dominant forcing or in some some views are the dominant forcing have been dominated by sub-seasonal variability such as the MJO and sudden stratospheric warming and I think that there's also exciting work being done on other modes of stratospheric variability that impact the S2S timescale such as the QBO and I'll note for the record that we're about to go into an eastern phase of the QBO for the coming really large ENSO winter it will be very interesting to see how the El Nino impacts play out finally I think that we need to focus efforts on regime transitions and how decadal variability can impact the S2S signal next slide yeah so this is a great study that was done by Andy Hole for NIDIS and the basic theme here is that for the northern plains flash drought which depending on you count is from 2017 which onset in the late spring into early summer 5 billion to a 10 billion dollar disaster for crops and pasture and yeah sorry losing my thought here so anyway the bottom line is that if you look at the forecast forecast that will lead beyond seven days we're not able to forecast the deficit and precipitation so you 
start the forecast every day this happens to be for the guests doesn't matter what system you look at day forecast from day one when you look at the forecast lead sorry from day one day two day three what you tend to find is that they forecast a precipitation deficit that's the black curve that's what we got again this is considered a flash drought of what we now call rapid onset drought if you go and look at the seven day forecast even as you are deep into this drought these forecast all had no deficit right they were essentially either above normal or near normal not recognizing the fact that the surface was already dried out so on the weather time okay great so go ahead one slide yeah and so this again will just speak to regime transition this is coming out of the 2016-2017 would have been a very extended drought in California and the southwest U.S. despite the fact that it was La Nina forcing which favored an enhanced probability of below normal precipitation we had a record number of atmospheric rivers that ameliorated that drought in about two months if you look at the lead time at which leading models doesn't matter which model you choose we're able to forecast that regime transition about a two-week lead and again a lot of stakeholders cannot use a two-week lead forecast information for such a rapid shift in conditions for instance monitoring water resources or agricultural decisions so that's it thanks Brad awesome thanks very much Dave okay moving on now we'll have Andrew Robertson international research institute institute climate group world weather research program Andrew okay thank you very much for the opportunity I think no slides were requested but I did send along one if you want to put that one put that one up just on the S2S project so I was the one of the co-chairs of that that international project and as Andy was saying in the in the first session really the goals of that were a joint project of the world weather and the world climate 
Research Programmes. There was interest from the WWRP side in pushing the forecast horizons out beyond two weeks, and there was increased interest, from the climate change perspective, in daily weather, so there was an interest in coming together, really to bring these communities together. It was a 10-year project, just coming to a close at the end of this year, so I think there are many opportunities to push that work forward. What has been done? A lot of the work was on the basic science and the modeling to begin with, but it always had an operational bent, a research-to-operations and services side, based on bringing in the modeling centers from around the world. There are 11 operational centers, and one thing that has come out of this, one of the successes, is that now, after the project, there will be a WMO designation for Global Producing Centres of sub-seasonal forecasts, and there will be WMO coordination of those in terms of a lead centre, as they have for seasonal forecasting, and that will actually be hosted by ECMWF. The very good news is that a lot of the work and foundation of the S2S project was built around a database of reforecasts, or hindcasts, from these 11 models being run operationally around the world, along with the forecasts made from them. This database will continue; ECMWF will continue to host it. In terms of infrastructure, I think that's one thing I've heard mentioned several times, the importance of that. It was stressed in the 2016 report, and we're fortunate that in WMO and ECMWF there's also a lot of push to maintain that infrastructure, but I think from the US point of view it should also be something high on the agenda to consider, because one thing that we did learn from doing this, looking at the
skill of forecasts in those hindcasts, in building multimodel ensembles, and in calibrating forecasts, is that it's difficult; it's more difficult for sub-seasonal forecasting than on the seasonal scale, just because of the complexities of the data. These forecasts usually go out to 45 or 60 days in advance, but often the reforecasts start on different days of the week, and it's difficult to build multimodel ensembles. So it's important to have a good, usable infrastructure, and then, getting down to thinking about the use of such forecasts, to have the tools that are really essential for translating such forecasts into products. A lot of the focus in the first part of the project was on assessing skill and so forth, and on the modeling challenges and sources of predictability. But one thing there was more focus on in the second half, which Linda Hirons really beautifully highlighted, was a real-time pilot project that we ran, involving groups that were already thinking about how you translate forecasts into user-actionable information for stakeholders, in the sense of climate services. We had 15 or 16 such projects, and one of the real flagship ones was the African SWIFT project that Linda talked about. But in the sense of going forward, and she mentioned this strongly too, much more work is needed on that. She mentioned co-production as being critical, and a lot more work is needed, working together with users, to develop usable products. I'll also mention something she highlighted: the need for such forecasts in the Global South. If you look at where S2S forecasts have skill, it's similar to the places where seasonal forecasts have skill; the tropics is really a region of opportunity, if you will, with the MJO being another key driver. Okay, yeah, thanks.
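The alignment difficulty Andrew describes, reforecasts from different centers starting on different days of the week, can be sketched roughly as follows; the two models, their start days, and all values here are invented for illustration, not the layout of the actual S2S database.

```python
from datetime import date, timedelta

# Two hypothetical models with weekly reforecast starts on different
# weekdays (model A on Mondays, model B on Wednesdays), each mapping a
# start date to 46 daily values. Invented numbers, not the S2S archive.
def starts(first, n):
    return [first + timedelta(days=7 * i) for i in range(n)]

model_a = {d: [0.1 * i for i in range(46)] for d in starts(date(2023, 1, 2), 8)}
model_b = {d: [0.2 * i for i in range(46)] for d in starts(date(2023, 1, 4), 8)}

def value_on(model, valid):
    """Value from the most recent start whose 46-day range covers `valid`."""
    covering = [s for s in model if s <= valid <= s + timedelta(days=45)]
    if not covering:
        return None
    s = max(covering)
    return model[s][(valid - s).days]

# Pooling members for one common valid date, even though each model sits
# at a different lead time there -- the crux of the alignment problem.
valid = date(2023, 1, 20)
members = [v for m in (model_a, model_b) if (v := value_on(m, valid)) is not None]
mm_mean = sum(members) / len(members)
print(valid, members, mm_mean)
```

Pooling by valid date works, but each member then sits at a different lead (and therefore a different skill level), which is part of why sub-seasonal multimodel calibration is harder than seasonal.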
And, you know, the most direct impacts and teleconnections are in the tropics, and if you think about what's needed now in terms of climate change adaptation, it's adaptation in real time; it's early warning, early action, where you can use forecasts on the S2S timescales to help build resiliency. So there's a big overlap there with international development. There are many uses, and for sure one should push this domestically within the US, but I think, in terms of equity, it's very important to highlight the importance of these forecasts in the developing world, within the context of climate services. And just building on this weather-to-climate theme, the notion that was also built out strongly in that 2016 report was the notion of seamless forecasts. That was also mentioned: users don't make the distinction between the different time scales; it's about decisions. There was a beautiful graphic in that 2016 report, for the different sectors, on what kinds of decisions you have on different time scales, going from the daily to the weekly, monthly, and seasonal, and beyond, into the planning time scales where climate change projections can be used. I think building out that seamlessness, and thinking about how S2S can connect with people working on climate change, is also very important. So thank you, I'll stop there. Thank you, Andrew, very much appreciated. Okay, the last panelist for this session, and for the entire afternoon session, is going to be Andrea Lopez Lang from the University of Wisconsin-Madison. Andrea. Great, thanks Brad. Let's see, I will share my screen here real quick. There we go. Okay, so I also only have one slide that I'd like to share today. Just to give you a little bit of background about myself: I'm
Andrea Lopez Lang. I'm currently a visiting professor at the University of Wisconsin. I was a co-lead, and several people on this call today were members, of the S2S task force put together by NOAA's MAPP program. What I'm showing you on the screen right now is a summary figure for one of the special collections that was put together in JGR. There were over 50 papers in that special collection on S2S topics, covering not only the different phenomena that contribute to enhancing predictive skill at these longer lead times, but also a lot of papers focused on ensemble generation, resolution, and systematic biases. So there was a nice summary, and since we're thinking about what has been done since the 2016 report, a lot of the topics that were mentioned are really summarized in this figure, where we're thinking about forecasts on this roughly two-week to a season time scale. If you look at this map you see a bunch of nodes that could represent the nodes in an AI model, but really it shows the interconnectedness of not only physical processes in the atmosphere but processes in the ocean, and also model configuration. What's missing from it is really some of the AI topics that were mentioned today. So, in thinking about how we can bring communities together, bridging the weather community and the climate community, a lot of really good points have already been brought up. The weather community was built really thinking about end products and user bases, so the weather community has a really good foundation and information about how that was done; that's something we can draw on in building a community of practice. From the climate side, we're thinking about probabilistic forecasts and communicating uncertainty to the public. So there are a lot of resources out there in the weather and
climate communities on the communication aspects of this problem. The other thing that I wanted to talk about, which was brought up earlier in the last session by several speakers, is workforce development, and I say that as a professor, as somebody who's training the next generation, thinking about the courses that are currently taught at both the undergraduate and graduate level. If you look at this figure, there are a lot of things here that aren't necessarily covered in an undergraduate curriculum, and that's largely because most undergraduate curricula in the US that focus on atmospheric science or meteorology, and I'm not talking about the climate curricula, I'm talking about atmospheric science and meteorology, are really guided by the GS-1340 guidelines that were developed in 1995. So there's not a lot that has anything to do with AI, and there's not really much content on many of the phenomena that have been the focus of research attention over the last two to three decades. So I think rethinking some of the guidelines on how our undergraduate curriculum is put together, to really get students interested in this next big topic in atmospheric science, means thinking about sub-seasonal prediction, how we do the applications, and how we really do this co-production. Because we don't really have this in our curriculum yet, there's a lot of opportunity to do it right the first time: to think about co-production, and to think about the jobs students will have when they're working on sub-seasonal timescales. The other thing I wanted to mention is that, looking at this figure, you see a lot about specific types of variability, and at the bottom here, in the gray text, about model development. There's a lot on here, and it's been mentioned before that thinking about all of this is a big data problem. It's also a big expertise problem, making sure that you have
understanding of all of these relationships. So not only do we need capacity to be able to think about this problem, we need the funding to support that capacity building, and that goes from basic research funding to research-to-operations funding. It's already been mentioned by several people that for research to operations you need that high-risk, high-reward ability to explore topics. There's a lot of potential here, but many of the sources of funding require some sort of product or outcome to exist; they're not really flexible enough for that high-risk, high-reward type of funding. The other thing that's been mentioned, and I want to reiterate, is academic partnerships with the private sector and with other organizations that are working on sub-seasonal prediction and its applications: really thinking about how far ahead the private sector is on some of these topics versus various parts of the academic or public sector, and building partnerships that cross traditional sectors. And the last thing I want to mention, and I'm glad Andy's on this call, is international collaboration. The US has done a lot; we had the S2S prediction project task force that really focused on these topics, it was a good source of funding, and we were seen as leading the funding in S2S. But now that there's a transition to SAGES, this S2S for agriculture and the environment, we need to think about how the US can contribute to that, and about sources of funding. So I'll end there. Awesome, thank you, Andrea. Okay, so we're moving into the final discussion of the overall session on S2S. Start queuing up your questions, either in chat or by raising your hand. Since this is the last session, and the meeting will be adjourned at the end of it, we have about 17 minutes or so of discussion, and we can broaden it out. And recall that the
motivation for BASC in actually holding this session was very much to see how the landscape has changed, the opportunities and challenges, from our last study in 2016 to now. So, lots of opportunity. Let's focus some on the initial topic, weather and climate merging, the two of them, and then we can continue through the rest of the time. Thanks, everyone, for the great discussion in chat. So, Neil, you're up. I'm ready to go, thank you. Andrew, you brought this up a little bit, but are there specific equity issues tied to the S2S scale, also thinking of co-production and the public-private partnerships, and are there things that should be thought about and done ahead of time, from the get-go, to address those? Thanks very much for the question. I was thinking about climate injustice and how many of the countries that are feeling the biggest impacts, and are the most vulnerable to climate change, are the ones who haven't contributed much to the problem, and they are often facing some of the biggest risks due to ENSO, the MJO, and climate extremes. So that's an equity issue, and it was part of the thinking in founding the IRI 25 years ago, which was funded for 15 years by NOAA: that the new breakthroughs in ENSO prediction should be brought to bear to help society. So I was thinking of it in that vein, and within S2S there are even more opportunities, because you have that other big S2S climate driver, the MJO, also in the tropics. So I think there are lots of opportunities for developing really useful climate services using S2S, climate services that can help adaptation to climate change in the S2S realm. Thanks. Thanks, Andrew. And just for the other speakers and panelists: you're also welcome to join the group here. I think I'll move to Mike Farrar.
Thank you very much for that. Can you hear me? Yes. Okay, so this is a kind of open-ended question, and maybe it's not just for this panel but for previous ones as well, and that is the ability of, let's say, those of us in the government to move quickly, and maybe the lack of our ability to do so, in terms of bringing in new capabilities. We've seen the private sector recently move very rapidly in this space, which some of the speakers spoke to earlier. So, an open-ended question to the panel: unlike perhaps some of the other things we have done in the past, do we really think that those of us in the government sector, the public sector, are going to be able to keep up or compete with the private sector when it comes to AI predictions in this S2S space? Or is this something where we should be a lot more active in collaborating with the private sector, and if so, what are your ideas for doing so? Who would like to respond? Sure, I'll take a first swing at that. Yeah, no, I think you're right, Mike. I think the European Centre has led on this, and, as Alan Thorpe, a previous director of ECMWF, used to say, if we don't go in the right direction the first time, we'll be fast followers. I think they have formed these private partnerships, and I think the Weather Service, starting with EMC, is going to go in that direction; CPC has a toe in some of these areas. But certainly the ability of these companies to move and explore and innovate, as was emphasized in the previous session, is much greater than ours, and at CPC we're technology agnostic: if there are better tools, we want to know what they are. You have to kick the tires, though. In my experience, I've seen a lot of forecast tools that did well in sample, or for a small training period, and you got them out in real time and the boat sank pretty quickly.
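A toy illustration of that in-sample trap: an overfit regression looks skillful on its training years and loses most of that apparent skill out of sample. All data here are random draws, not any real forecast tool.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical reforecast-based predictor set: 40 years, 20 candidate
# predictors, only one weak real signal. Invented data, no real system.
n_years, n_pred = 40, 20
X = rng.standard_normal((n_years, n_pred))
y = 0.3 * X[:, 0] + rng.standard_normal(n_years)

train, test = slice(0, 30), slice(30, 40)

# Ordinary least squares fit on the training years only.
coef, *_ = np.linalg.lstsq(X[train], y[train], rcond=None)

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

skill_in = corr(X[train] @ coef, y[train])   # inflated: 20 predictors, 30 years
skill_out = corr(X[test] @ coef, y[test])    # the honest check on held-out years
print(f"in-sample r = {skill_in:.2f}, out-of-sample r = {skill_out:.2f}")
```

With 20 predictors fit to 30 years, the in-sample correlation is high almost regardless of any real signal, which is exactly why extended reforecast sets and strictly held-out verification matter.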
So I think, you know, I'm an optimist on AI/ML, but we need to make sure we have rigorous testing procedures and extended sets of reforecasts, and that we look at cases where we did not do well in the past, cases that perhaps we classified as unpredictable but where maybe it was just limitations in our methods. Anyway, that's my response. Yeah, and if I can just add to that: as we saw in the second session, a lot of people in academia are working on this, and I've seen huge interest now from people working in machine learning and AI in doing internships, or just learning about these things, in application to S2S. So I think there's a huge opportunity in academia for building this out, building it into curricula, and, as was also mentioned, thinking about going end to end: how can this help as you get into tailoring these forecasts for particular sectors? Often in those sectors there's been a lot of recent work on AI methods, such as in hydrology, where a lot of people are using AI. So there's low-hanging fruit in connecting these things, and in getting down more toward the user end. Over. Yeah, and Mike, I just think about some of these thorny, difficult problems that we just haven't solved, and I think it came up in the previous session: where can we leverage AI to gain better understanding? So it goes to your point, and in terms of the academic sector as well, there's probably collaboration needed: private sector, academic sector, and government together. From the academic sector, I'll just throw in that there are some really interesting funding opportunities available that encourage these sorts of relationships to grow, and the one I'm thinking of currently is this: there was recently the joint NSF-
NOAA IUCRC, which is the Industry-University Cooperative Research Center, or something along those lines. Basically, there's a topic, and you're encouraged to develop a relationship with multiple industries to work on a current topic of interest to both NSF and NOAA, and the current call was focused on climate risk modeling. It turns out, as we've heard many times, that industry and end users don't separate weather and climate; they think of it as a continuum, so a lot of what's fallen into this space is seasonal forecasting and longer. I think there are avenues we could explore as a community to encourage these partnerships. Super. Real quick, for the earlier panelists: if you do want to join and get pinned virtually to the screen, raise your hand and the staff can do that for you. I want to call attention to DeVerna's comment, which I think is a really important one: we often talk about the public sector, the private sector, and academia, but sometimes NGOs don't get mentioned quite as often, and, especially as we go out to these longer timescales, it would be very helpful to be intentional about reaching out to and including NGOs when and where we can. So, next in the queue: Dipanjan. Okay, again, gratitude to all the panelists today. Usually what we see is that in the weather and climate models, longer-timescale forcing functions are taken into account during the ensemble, but now that we are thinking about S2S, which can have a direct impact on the end user, is it time to also incorporate forcings like the rain being created by geoengineering, or the volcanic eruptions that we nowadays observe impacting city climate to a great extent? I'd be interested in your thoughts. Well, I can maybe start on that
one. I mean, in terms of greenhouse gas forcing, our seasonal and sub-seasonal models do use current concentrations, which don't change much. But I think there are opportunities in terms of cross-timescale work: the kinds of phenomena that cause extreme events in climate change projection simulations are the same phenomena that we have in S2S forecasts. So, for example, over Pakistan, the big floods last year are associated with monsoon depressions and the intraseasonal oscillation related to the MJO, and we see similar things in the projections as well, toward wetter conditions. So I think there are lots of opportunities for using S2S science to help inform the climate change science community, and vice versa. I feel there has been, in the past, somewhat of a siloing of work between people who work on variability and short-term prediction and people who work on climate change projection and long-term change, and part of that is due to siloing in infrastructure. You've got CMIP for the climate change projections, and you've got the S2S database, or SubX, or NMME, for short-term prediction, but these things don't talk to each other, and what we need is more seamless data infrastructure too. Cyber-infrastructure is something that was called out in the NAS report from 2016; we need, I think, a much more unified approach. And that's where the connections with cloud storage, cloud computing, the data revolution, Pangeo, tools like that, come in; there are big opportunities there, and it's a very fast-moving area. So I think that's something where there are these big opportunities; the challenges come because
different infrastructures are being used, but it's something where it's important to have some focus. Over. Excellent, thanks, Andrew. David. So, and I apologize if Linda was next on that topic, but based on some of the dialogue I was actually going to make a plug for a future briefing to BASC; can I do that quickly, Brad? Sure. Yeah, and that is the part that Andy got into, which is international climate services. I've only spoken about our domestic portfolio, and of course, you know, I was at the IRI, and they do a lot of great work in the international realm. CPC also does a lot of international work: for the US Agency for International Development, FEWS NET, the disaster risk reduction side, and most recently through the State Department, via their initiative that started in this administration under PREPARE, which is disaster risk reduction in the developing world, with three foci: the Caribbean, the Pacific Islands, and now Africa. I could probably speak for more than five minutes just about the work that we're doing, and could of course bring along the head of that particular program, so, just a suggestion that it might be useful to have another briefing about S2S for the developing world. Thanks, Dave, appreciate that. Linda. Yes, thanks, Brad. So I just want to follow up on one of Andrew Robertson's comments about having a sort of seamless prediction system across all timescales. I think we still have to recognize that there's an important distinction about whether or not forecasts are verifiable within the timescale on which the decisions are being made, and that's one of the big distinctions going from, certainly, weather forecasting out to seasonal prediction, which is quite different from longer-term climate change. The other Linda, Linda Hirons, and I had a brief interaction about this, and I think it would be really useful to look
at that issue in more detail from a stakeholder point of view: what difference does it make whether or not the prediction is verifiable within the timescale of the decisions that have to be made? That's a very important distinction, so in one sense there is no such thing as a seamless prediction across these timescales from the point of view of the users and decision makers. Andrew? Yes, thank you very much, Linda, for making that point; I absolutely agree with it, and I think that's the essential point: because we do have retrospective forecasts of S2S, we can see how well they perform, we can verify them. You can't do that for climate change projections, but they're the same types of models that are used, so there's an opportunity there, and it was proposed in a paper from some time ago that one could even use seasonal forecasts, they were talking about seasonal forecasting there, as a means to calibrate the projections, because you have a verifiable setup. But I think your point about the stakeholders is a really important one too: you can make that distinction, that for these S2S timescales we can verify, and that's what you cannot do for climate change projections. And I think that's the point you're making: you can really help explain these things, and the kinds of uncertainties you have in the different types of products, and that can help inform how they can be used. Yeah, thank you. And I would supercharge that: the 8-to-14-day range is ripe, a simpler problem in that respect of being verifiable, and worth thinking about in that light. So maybe there is some seamless aspect from weather to sub-seasonal, but, to this point, maybe not from seasonal to climate change, for example. Well, I think we really don't know yet, and that is why it
needs to be more carefully explored. Excellent. Well, we're actually right on time, and I don't see any more hands, so I think what I'll do is first thank everyone. What a great day; I can't imagine anyone having sat through this not being excited and challenged and thinking through a lot of different things: not only the value and impact of the S2S arena and the forecasts themselves, but, personally, you know, I had a little bit of time there with Bayer doing global agriculture, and it was a holy grail: how could we get this information so that we could really make decisions? And it's across all sectors, so this is an incredibly valuable, important problem for society. And then to get to sit through listening to all of the panelists, a great discussion in chat, and the opportunities, you know, it's just, wow, so exciting. I love to see the pioneers in the AI arena, that's exciting, and everyone else working so hard on it. So I think what we'll do is close it out. Thank you all for attending; everyone will get a notice when the recording is done, it takes a few days to process, and you'll get a copy and be able to go back and review anything you'd like. Other than that, I believe we're done. Is that true, Katrina? Okay, so basically, thank you everyone, thanks to the National Academy staff again and to all the panelists, and to all of you, for hanging out for the day and participating so actively. So I think we're done.