Our keynote speaker is Michael Dietze. Michael is a professor at Boston University and director of the Ecological Forecasting Laboratory, where he combines fieldwork and numerical models to tackle his science questions. Today Michael will talk about 21st century science for 21st century environmental decision making: the challenges and opportunities of near-term, iterative, environmental forecasting. Michael, you can take it away.

Thanks. In an era of rapid environmental change, we can no longer manage the environment based solely on historical norms. We need to be able to anticipate how systems will behave in the future, both under the status quo and under different management scenarios. Because of this, forecasts are going to be critical for effective 21st century environmental management, and they are an important way to make our science more relevant to society. I can't speak to the full breadth of the disciplines within CSDMS, but as someone trained as an ecologist, I find that most of the modeling in my community has been focused on longer time scales, primarily climate change responses, and those are time scales that do not necessarily match the needs of many environmental decision makers. An iterative approach to forecasting involves confronting predictions with new observations and updating your predictions based on that new information and learning. Here I'm going to argue that this iterative approach provides an important win-win, because it focuses predictions back onto decision-relevant time scales while providing feedback in a way that embodies the principles of the scientific method and accelerates learning by continually testing hypotheses that are quantitative, specific, and therefore falsifiable.
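[Editor's illustration, not from the talk.] The forecast-observe-update cycle described above can be sketched with a toy scalar example: a persistence forecast whose uncertainty grows by process error each step, followed by a precision-weighted (Kalman-style) update against each new observation. All names and numbers here are illustrative assumptions.

```python
import numpy as np

def forecast(mean, var, process_var=0.5):
    """Propagate the state one step ahead; uncertainty grows by process error."""
    return mean, var + process_var  # persistence model, for simplicity

def update(mean, var, obs, obs_var=1.0):
    """Bayesian (Kalman) update: blend forecast and observation by precision."""
    gain = var / (var + obs_var)
    new_mean = mean + gain * (obs - mean)
    new_var = (1.0 - gain) * var
    return new_mean, new_var

rng = np.random.default_rng(42)
truth = 10.0
mean, var = 0.0, 25.0              # vague initial belief
for _ in range(20):                # the iterative cycle: forecast, observe, update
    mean, var = forecast(mean, var)
    obs = truth + rng.normal(0.0, 1.0)   # a new noisy observation arrives
    mean, var = update(mean, var, obs)
```

Each pass through the loop is one forecast cycle: the forecast step widens the uncertainty, and the update step narrows it again using the new data, so the estimate converges toward the truth while the forecast is continually being tested.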
In some cases, this iterative approach can occur on daily or sub-daily time scales, such as these forecasts, one of vegetation phenology and the other of lake turnover probability, both of which forecast critical system transition dates. Such forecasting approaches are increasingly possible due to advances in sensor technology, network science, remote sensing, and community and citizen science initiatives. In my own lab, we are developing a wide range of forecasts, including forecasts of land surface carbon and water fluxes, vegetation phenology, aquatic productivity and algal blooms, ticks and their small mammal hosts, and the soil microbiome.

Looking across this broad range of forecasts, we are interested in better understanding the patterns of predictability across systems. In particular, the predictability of any system is going to be determined at least in part by how fast uncertainties grow, from some initial estimate of the uncertainty through time or space, until we hit some background level that defines the limit of predictability. That rate of growth is itself going to be affected by uncertainties in things like initial conditions, drivers, parameters, statistical random effects (that is, heterogeneity and variability), and process error. It's really important to understand which of those uncertainties dominate different types of forecasts, because they have large impacts on how we understand systems and how we can make predictions about them. My group is particularly focused on terrestrial ecosystems, where we've been working for the past 10 or 12 years on developing PEcAn, a multi-model informatics system which supports data ingest, model execution, analysis, calibration, benchmarking, and the archiving of both model outputs and the repeatable workflows that underlie them.
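[Editor's illustration, not from the talk.] One common way to ask which uncertainty dominates is a one-at-a-time ensemble experiment: run the same model many times, switching on a single uncertainty source per experiment, and watch how ensemble variance grows with lead time. The sketch below does this for a toy discrete logistic growth model; the model, parameter values, and uncertainty magnitudes are all illustrative assumptions, not the talk's analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, K = 1000, 30, 1.0        # ensemble size, lead times, carrying capacity

def run_ensemble(ic_sd=0.0, r_sd=0.0, proc_sd=0.0):
    """Discrete logistic growth; each uncertainty source can be switched on."""
    x = np.clip(0.1 + rng.normal(0.0, ic_sd, N), 1e-3, None)  # initial conditions
    r = 0.3 + rng.normal(0.0, r_sd, N)                        # growth-rate parameter
    var = np.empty(T)
    for t in range(T):
        x = x + r * x * (1.0 - x / K) + rng.normal(0.0, proc_sd, N)  # process error
        var[t] = x.var()    # forecast variance at this lead time
    return var

# one-at-a-time attribution of forecast variance across lead times
v_ic   = run_ensemble(ic_sd=0.02)    # initial condition uncertainty only
v_par  = run_ensemble(r_sd=0.05)     # parameter uncertainty only
v_proc = run_ensemble(proc_sd=0.02)  # process error only
```

Comparing the three variance curves shows the qualitative behavior described above: process error accumulates toward a background level, while initial-condition spread is amplified during the growth phase and then damped out by the stabilizing feedback near the carrying capacity, so which source dominates depends on lead time.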
PEcAn is the foundation for our data assimilation system that we use for iterative forecasting. This integrates multiple uncertainties, such as initial condition uncertainty, driver uncertainty, and parameter uncertainty, into probabilistic forecasts. We then update those forecasts with a range of different observations, depending on the scale and scope of the particular study, in this case using a novel Tobit-Wishart ensemble filter, which is a generalization of more familiar approaches like the Kalman filter and the ensemble Kalman filter.

I wanted to briefly highlight three applications of this system. The first is an analysis, now in a preprint, that partitions uncertainties in a multi-decadal hindcast of carbon, in this case primarily looking at aboveground biomass. We find that our ability to predict aboveground biomass was dominated by initial conditions, represented here by all the hashed areas; the initial condition uncertainties interact strongly with the other uncertainties, as well as with process error. I would argue that this is an important finding because it calls into question a lot of the status quo modeling approaches used in the terrestrial carbon community, which frequently ignore both of these uncertainties: we very rarely estimate and propagate process error in our process-based models, and frequently these models are initialized by a spin-up that's really not tenable if initial condition uncertainty is the dominant uncertainty. We're also using this system to make automated forecasts of carbon and water fluxes, driven by 16-day weather forecasts. As you can see here, these are true forecasts into the future, which we can then validate on a rolling basis with things like eddy covariance data.
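[Editor's illustration, not from the talk.] The Tobit-Wishart filter mentioned above is a generalization of the ensemble Kalman filter, so the familiar baseline is worth seeing concretely. The sketch below is a standard stochastic EnKF analysis step with perturbed observations, applied to a hypothetical two-state system (say, biomass plus soil carbon) where only the first state is observed; all dimensions and numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

def enkf_update(X, y, H, R):
    """Stochastic EnKF analysis step with perturbed observations.
    X: (n_state, n_ens) forecast ensemble; y: (n_obs,) observation vector;
    H: (n_obs, n_state) observation operator; R: (n_obs, n_obs) obs error cov."""
    n_obs, n_ens = len(y), X.shape[1]
    P = np.cov(X)                                   # forecast covariance from ensemble
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)    # Kalman gain
    # each ensemble member is updated against its own perturbed copy of the obs
    Y = y[:, None] + rng.multivariate_normal(np.zeros(n_obs), R, n_ens).T
    return X + K @ (Y - H @ X)

# toy example: two states, only the first (e.g., biomass) is observed
n_ens = 200
X = rng.multivariate_normal([5.0, 20.0], [[4.0, 1.0], [1.0, 9.0]], n_ens).T
H = np.array([[1.0, 0.0]])    # observe state 0 only
R = np.array([[0.25]])        # observation error variance
Xa = enkf_update(X, np.array([7.0]), H, R)
```

After the update, the observed state's ensemble mean moves toward the observation and its spread shrinks, while the unobserved state is adjusted through the ensemble covariance; the Tobit-Wishart generalization extends this idea to handle censored observations and uncertainty in the covariance itself.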
And we've also scaled up these approaches to a continental scale, where we're applying them in a historical reanalysis, which is different from just a hindcast because we're actively assimilating observations of different carbon pools and fluxes through time to generate a synthetic fusion of models and data, very analogous to how many of us work with atmospheric reanalyses as drivers for other models.

More broadly, near-term forecasts are emerging across the field of ecology, but up to now these have largely been isolated endeavors, with work going on in aquatic communities in isolation from disease efforts, in isolation from terrestrial ecosystem efforts. And we really have come to believe that this lack of community has been slowing our progress, because of a lack of sharing, a lack of communication, a lack of community tools, and a lack of an overarching shared theory about ecological predictability. So in 2018, we launched the Ecological Forecasting Initiative (EFI), an international grassroots consortium that aims to build a community of practice around iterative ecological forecasting, with a vision of using forecasts to better understand, manage, and conserve natural systems. At its heart, EFI is working to advance this idea that iterative forecasts provide a win-win, simultaneously allowing us to tackle grand-challenge scientific questions about overarching patterns of predictability in nature while producing societally relevant forecasts that improve lives and livelihoods. Most of EFI's work comes from its working groups, which are organized around a series of cross-cutting themes that represent areas of shared interest across the community, regardless of the specific sub-discipline one is working in: working groups on diversity, education, partnerships, theory, social and decision science, methods, and cyberinfrastructure.
The Partners working group is really focused on building bridges that span academia, agencies, industry, community science, and stakeholders, while the Social and Decision Science working group is focused on how forecasts could be used to improve decisions, as well as how they're actually being used in practice. Our Diversity working group is focused on building a diverse, equitable, and inclusive community, while the Education working group is focused on the development and refinement of open courseware and supporting educational opportunities in this area. One effort that we're really proud of is a series of videos that we co-produced with NEON, the National Ecological Observatory Network, on the fundamentals of ecological forecasting. Our Cyberinfrastructure and Methods working groups are focused on the more technical bottlenecks of producing forecasts, things like developing open archiving standards for forecasts and common tools and community cyberinfrastructure, trying to achieve an economy of scale across the community rather than a lot of independent development efforts. And finally, our Theory working group is taking a comparative approach, similar to what I talked about in our own research, to better understand when and where nature is predictable, looking at the common features that affect the predictability of ecological forecasts, whether that be the evolutionary and phylogenetic history of the system or the constraints of the physical environment itself.

There are a lot of ways to get involved with EFI. We have a lot of great information on our web page, ecoforecast.org, and I encourage folks to sign up for our newsletter.
And I also wanted to mention that just last week we held our own virtual meeting, which was also supposed to be in Boulder, where we launched an open forecasting challenge using NEON data. It spans a wide range of different systems, terrestrial and aquatic, ecosystem and population, including things like ecohydrology that might be of interest to members of this community. With that, I'll take questions.