it over to Greg to fill us in on CSDMS activities. Okay, thanks very much, Brad. Let's see if somebody would release the screen share. I'll share my screen. Okay, so thanks again, Brad, and welcome everybody to this online and virtual incarnation of the CSDMS annual meeting. I'm as disappointed as Brad is not to be able to see you all in person. On the other hand, it's thrilling to see so many of you joining us from literally all around the world. So that's great to see. I wanna say a few words about CSDMS, both as an introduction for those of you who are new to it and an update for those of you who have been involved for a while. So CSDMS, the acronym, is the Community Surface Dynamics Modeling System. Sometimes people say it with a soft C so it sounds like "systems." And it is both a community and an NSF-supported facility that is all about supporting computational modeling of earth surface systems in all their variety and complexity. The community has been steadily growing. We're close now to 1,900 members, from 70-odd countries and several hundred institutions. And it's important to point out that this is an open community. So whatever your career stage, whatever your personal background, if you're interested in the study of the earth surface by computational means, know that you are welcome in this community. So the current NSF award supports a small integration facility that's located in Boulder, Colorado, which is where I'm talking to you from. This is the ancestral home of the Ute, Cheyenne, and Arapaho peoples. In a moment I'll say a few words about the resources and services that the integration facility provides. First, though, I wanna say a little bit about CSDMS science. The earth surface is a complex and exciting place, and the science of the earth surface is equally broad. 
This slide shows just a few examples of the many projects, recently published or ongoing, that explore different facets of earth surface dynamics using CSDMS tools and technology. And they range very widely, from permafrost to natural hazards to landform evolution. Of particular interest here, of course, are things that have to do with life and landscape and seascape. So for example, how do cycles in climate and vegetation influence landscape evolution? Or can changes in grazing and wildfire regime trigger woody plant encroachment? We're gonna see some exciting developments in these, I think, over the next two days. I wanna point out that the CSDMS community is organized into a set of working and focus groups, about a dozen of them. And I wanted to welcome and thank the new chairs and co-chairs who joined us this year. They include Leah Mayo from the University of Central Florida, Susan Emery, Julia Moriarty from CU Boulder, Kevin Xu from LSU, Olaf David from CSU, Christina Bandaragoda from the University of Washington, and Derek Robinson from the University of Waterloo. So thanks to all of you, and thanks to the outgoing chairs for your service and contributions. I also wanna welcome the new members of our steering committee. They are Pat Wiberg from the University of Virginia, Khadidia Thiero from NCAR, Paola Passalacqua from UT Austin, and Michael Barton from ASU. Thank you for joining us and contributing. And let me say a word of thanks to the integration facility. Actually, I'll do that in a second. First, I wanna point out this year's award winners. So here at the annual meeting, we offer awards both at the senior level and at the early career level. At the senior level, this year's winner of the Lifetime Achievement Award is Professor Garry Willgoose of the University of Newcastle in Australia. 
And Garry is recognized for his outstanding intellectual leadership in modeling landform evolution, soils, sediment dynamics, weathering, vegetation, and a variety of related, integrated processes. Garry, as many of you will know, is essentially one of the founders of modern landscape evolution modeling, as well as probably the founder of the field of applied landscape evolution modeling. So congratulations to Garry. We also have, at the early career level, the Syvitski Student Modeler Award. This is given as a juried award: students apply with a portfolio that includes both the science and the software code behind their models, and the applications are reviewed on both of those counts. This year's winner is Ian Reeves from the University of North Carolina at Chapel Hill, who was selected for his submission on the impacts of seagrass dynamics on the long-term evolution of barrier-marsh-bay systems. We'll be hearing from Ian later this morning, or whatever time it is in your time zone. So congratulations to both awardees. Normally tonight there would be a banquet at which we celebrate them in person. Unfortunately, that's not gonna happen this year, but we look forward to celebrating with these folks and with you all sometime in the not too distant future, perhaps next year. Okay, the integration facility, as Brad mentioned, is a small group of people based here in Colorado who work hard to provide resources and tools for you, and who are responsible for putting on this meeting every year. And I wanna thank them especially for the late-game pivot from what was to have been an in-person meeting to this online format. Thank you very much. I also wanna say a few words about some of the resources that this group provides to you all, the research community, to help you do your research more effectively, more efficiently, more sustainably. And it's useful, I think, to do so in the context of scientific computing broadly, across the sciences. 
So one thing that's clear is that scientific computing is here to stay. A presidential advisory committee several years ago called scientific computing a new pillar of science, taking its place alongside the traditional modes of theory and experimentation. And those of us in geology and ecology would add direct observation as a cornerstone, a key method of doing science. And that's a big deal. We've seen the role of computing in any number of prominent discoveries just in the last several years. So for example, the black hole imaging that was widely celebrated last year was in part a telescopic discovery, but also in part a computing product. You may or may not know that the Nobel Prize in Chemistry in 2013 was awarded for numerical modeling. And closer to the earth and environmental sciences, of course, we see the gradually increasing capabilities of hurricane forecasting, and we now have the first national-scale operational hydrological models, at least two of them. And those are just a few of many, many success stories around scientific computing. But this being a really new area of science, it's natural to expect that there are some changes that it brings, and some growing pains too. One of those changes is that today, software is infrastructure. The software that we use for our numerical modeling, for our data processing, and to operate instruments is every bit as complex, intricate, worthy of, and indeed demanding of careful engineering and maintenance as the more familiar ships and planes and accelerators and flux towers and whatnot. Indeed, none of those pieces of equipment would operate without software. And yet one of the challenges with software is that it is invisible infrastructure. And it is maybe partly for that reason that there have been some growing pains. Some of those growing pains have arisen around the idea of reproducibility. 
So if reproducibility is a cornerstone of science, there have been any number of calls of alarm around the potential lack of reproducibility in much of the computational work that we do. A number of years ago, Randy LeVeque at the University of Washington commented that science and math journals are filled with pretty pictures of computational experiments that the reader has no hope of repeating. That was over 10 years ago; have things gotten better since? In fact, if you look at studies of reproducibility, there are still some worrying signs. One study, for example, that I'm showing here on this slide from just two years ago, attempted to reproduce the results from some 300 articles in the Journal of Computational Physics. They were only able to obtain the digital artifacts without the authors' assistance for about 2% of them, and were unable to easily regenerate all the findings for any one of those articles. So we have a ways to go in reproducibility and reusability. There have been concerns raised too about quality, and related issues of productivity and sustainability. In other fields, fortunately, not in earth or environmental sciences so far as I'm aware, but in other fields, there have been some prominent retractions of results that were found to be flawed because of software bugs. And that's not a good situation to be in. And in trying to diagnose some of the reasons behind these issues, and in exploring how it is that research scientists use software, various studies have unearthed some interesting findings. It turns out that if you survey scientists who work with software, for example in modeling, they will often report that they spend as much as a third of their time writing, debugging, and otherwise working with software. And yet if you ask how we are taught and trained to do this, the answer is usually that we're self-taught. And very often we're unaware of tools and practices that would make our lives easier and make our products more reliable. 
One study that I thought was interesting likened scientific software creators to castaways on a desert island, left behind by the modern world. And when rediscovered, the rediscoverers, software engineers, are appalled at the primitive conditions in which they operate, even if they're quite innovative at making it all work. So we have some challenges along the road. On the other hand, the good news is that solutions to address some of these problems are not only on the horizon, but I think are already starting to exist. One of the classic obstacles has to do with incentives and credit. We write software to do research, but it's the research that we're judged on. But one potential solution to that is a growing number of journals that are devoted to research software, either domain-specific or general. And some of these, in fact, have the software itself as the published product. In other words, you're presenting the software, its documentation, its tests, and that is what gets reviewed. The Journal of Open Source Software is one example. So that's a good development, because it provides those of you who write software as part of your research with a vehicle for getting the credit you deserve. At the same time, we have an increasing number of journals and funding agencies who are insisting on higher standards. The American Geophysical Union, for example, now considers software as part of your data, and they will no longer accept "data available on request" or "software available on request" as a solution; you need to put it in a trusted repository. At the same time, we know that despite concerns about quality, it's been demonstrated that research software can be extremely high quality. One study of climate models, for example, showed that they have very low defect densities compared with other kinds of open source projects. Really reliable software is possible. We have the tools to do it. 
And there is a growing literature that provides all sorts of advice, recommendations, tools, and techniques for scientific programmers. On the other hand, it's not necessarily easy as an individual to sift through all this and to put it into practice in your own work. It helps a lot if you have training, enabling tools, and expert support available. And this is where CSDMS has tried to make some contribution to the earth surface science community. So I'll say a couple of words about the resources and services available from CSDMS. The integration facility really works in three areas to support your work: community, computing, and education. I'll give a couple of examples in each of those categories, starting with community. As many of you know, CSDMS runs a public repository. This is an online, open, and accessible database of model programs and tools written by and shared with the community. The repository includes a rich set of metadata for each tool and model. There is a pathway to add a digital object identifier if you wanna make your product citable. You can add references as part of the information; we currently have over 2,000 references in the database, and CSDMS will even track a citation index, an h-index, not for an individual, but for a particular model, to give a sense of how widely embedded that model is in a particular community. Currently, we have around 350 models and tools, and it's steadily growing. We also provide help with projects and proposals. So, for example, if you're writing an NSF proposal and you'd like to have well-tested, sustainable software as a broader impact of your research, we can help you come up with a plan to do that. We also help with various projects, sometimes through collaboration on proposals, sometimes through site visits, though that's unfortunately not happening for the immediate future. And we can provide a research software engineer as a consultant, as a service. 
We have a certain capacity to do that pro bono through our current NSF award. And if you need more support than that, we have a mechanism whereby you can budget for a certain amount of service assistance. In addition to all that, we run a user help desk to answer various kinds of technical questions that come up. So that's community. Let me say a couple of words about computing. You know, one of the original visions behind CSDMS, when it was established back in 2007, was the idea of a modeling environment with a community-built suite of integrated software modules targeted toward predicting the erosion, transport, and accumulation of sediment and solutes in landscapes and sedimentary basins over a broad range of space and time scales. Now that's an ambitious project, and it's not something that a very small facility could ever hope to do on its own. It's a community effort. The idea is that we need something that is built both by and for the community, with the integration facility providing a little pinch of support to make it all work together. And really there are four technical ingredients that are part of this system, and I'll say a few words about each one of them. The first is an interface standard. It turns out that standardization is key to being able to operate things together. Just as you need the right screwdriver head to fit the right screw, you need standards to interoperate numerical models. It becomes much easier to couple them, or simply even to work with them, when they provide a consistent interface. So to that end, CSDMS provides the Basic Model Interface, or BMI. And really what this is, is just a list of functions. If a model provides these functions, it can be said to be standardized. They're functions like initialize, to start up the model and read its inputs, and update, to advance it in time. 
That's assuming it's a time-advancing model. Then there's get_value, to interrogate what its state variables are at the moment; set_value, to tweak or adjust those if you want to do coupling or data assimilation, or simply to play around and experiment; and finalize, to clean up. So those are five of what ends up being about 30-odd functions. The second technical ingredient is language interoperability. The community, judging by the codes in the model repository, has a certain set of languages that it likes to work in. And it's important to be interoperable, to have a lingua franca that can connect all of these. So CSDMS has developed some in-house tools that provide a Python front end, with Python as the lingua franca, to be able to execute programs written in Fortran or C or C++ through a Python interface. And maybe someday we'll add R to that collection; I know that's a popular one in the ecological community. The third element concerns what happens if you want to write a new model. That can be a daunting task, with a lot of potential headaches. So we've created a software library that helps make that easier, at least for two-dimensional grid-based models. It's called Landlab, and it's a Python-language programming library. It handles things like gridding, layers of data, input, output, and things like that. And it has the ability to encapsulate numerical solutions to particular processes as reusable components. This slide shows images of some of the applications that have been published or are underway using Landlab. They range from watershed hydrology to tectonic geomorphology to vegetation dynamics and landform evolution. And then finally, the fourth ingredient is a framework in which to explore, couple, run, plot, and analyze models. The current iteration of the CSDMS modeling framework is called the Python Modeling Tool, or pymt. So here again, Python is the lingua franca. 
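To make the interface-standard idea concrete, here is a toy sketch in Python. This is not the actual CSDMS BMI specification (the real BMI defines roughly 30 functions with richer grid and variable metadata); the model here, a simple exponential decay, and its configuration names are invented purely for illustration. The point is that any component exposing the same small set of calls can be driven, or coupled, by generic framework code that knows nothing about its internals.

```python
class DecayModel:
    """Toy time-stepping model: a scalar quantity decays each step.

    Illustrates a BMI-style interface: initialize, update,
    get_value, set_value, finalize. Names and config keys are
    hypothetical, not the official BMI signatures.
    """

    def initialize(self, config=None):
        # Read inputs and set the initial state (here, from an in-memory dict).
        config = config or {"initial_value": 100.0, "decay_rate": 0.1, "dt": 1.0}
        self._value = config["initial_value"]
        self._rate = config["decay_rate"]
        self._dt = config["dt"]
        self._time = 0.0

    def update(self):
        # Advance the model one time step (simple explicit decay).
        self._value -= self._rate * self._value * self._dt
        self._time += self._dt

    def get_value(self, name):
        # Interrogate a state variable by name.
        return {"quantity": self._value, "time": self._time}[name]

    def set_value(self, name, value):
        # Tweak a state variable, e.g. for coupling or data assimilation.
        if name == "quantity":
            self._value = value

    def finalize(self):
        # Clean up resources (nothing to release in this toy example).
        pass


def run(model, n_steps):
    """Generic driver: works for ANY component exposing this interface."""
    model.initialize()
    for _ in range(n_steps):
        model.update()
    result = model.get_value("quantity")
    model.finalize()
    return result
```

A framework like the Python Modeling Tool plays roughly the role of `run` here: because every wrapped model presents the same functions, one driver can instantiate, step, and query models regardless of what language or numerical scheme sits underneath.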
The Python Modeling Tool is simply a little Python package through which you can instantiate, run, and explore any one of a collection of, currently, 16 legacy models written in different languages under the hood. And these cover a range of different disciplines. We are now also working on data components: components that operate the same way, but fetch chunks of data from a particular data source. And Landlab components, of which there are a few dozen, can also be operated this way through the Python Modeling Tool. Lastly, let me say a couple of words about educational resources. Those of you who, like me, have had to pivot from teaching in person to teaching online know the value of online teaching resources. And so we have worked recently to try and clean up and curate a collection of online lab exercises. Many of them use Jupyter notebooks that involve modeling, which students can work through directly in a notebook via a server that we've set up, to explore all kinds of different systems. In addition to the labs, we have a webinar series, which some of you have contributed to, that covers a range of different topics, from the technical and computer-oriented to the scientific. If there is a webinar topic that you'd like to see, let us know, and especially if there's a topic that you would like to offer, let us know and we'll get you on the schedule. So that's a really quick tour of some CSDMS tools and resources, and with that, I'll pass the torch to Albert. I'm happy to answer questions via chat if there's time for that. Over to you.