So, welcome. This is our first meeting this time of year. We heard complaints because last time it snowed, so we thought, well, we'll pick May, and then it snowed last week. Well, in Boulder it can snow in any month and also be very warm in January. So, who knows? But I think we're going to have good weather. We have seasonal rains coming in, so it could rain in an evening thunderstorm or something like that. Just know that you could get wet, and that's okay. So, I've got a little PowerPoint presentation to lead you into what this meeting is about. I've cut my normal talk down by 50 percent. Next year I'll cut it down by another 50 percent, and maybe the following year you won't even see me. Okay. So, half the talk is for those who've never been here, and the other half is for those who've been here all along. You're part of what the US government calls the 21st-century cyberinfrastructure in environmental science. There are three components to that. There's the observing side of things, led by big organizations like NEON and CZO. NEON's the big one; that's about a quarter of a billion dollars. But beyond NEON and CZO there are many others, LTER and organizations like that. They're designed to sense the environment, to get the data that one might need. As a modeler, you're attached to that either directly or through what's called big data, dark data, crazy data, crazy-wisdom data, and you're to use it in ways that help your modeling efforts: as input data, for verifying the validity of your models, for training your models, whatever. So, that's the cyberinfrastructure you're part of. Again, for those who don't know this organization, this is our governing structure. You'll see that there's a steering committee and an executive committee. The executive committee is made up of the chairs of the various working groups.
There are five of them, and the focus research groups, there are six of them. And there's a facility here in Boulder, and people are funded individually. Just to give you a sense of that, we've written over 130 support letters for projects, and over a third of them have been funded. So, for those of you who've been funded, good for you. Those of you who've had one of your proposals rejected, keep on trying. I view these meetings as part of the incubator side of CSDMS, designed to keep you going, keep you working together, keep you meeting one another, keep you staying current. So, here's a little bit of where we are in the world. The institutes that are part of our structure: there are over 500 of them. And it's divided into a number of working groups and focus research groups. The working groups are in black, I believe, and the focus research groups are in red. We've got three that we launched last year: critical zone, geodynamics, and Anthropocene. People are starting to populate those groups. We have a few other initiatives that are trying to get off the ground, and we need all the help we can get. One of them is in ecosystem science. We are partly funded, a small amount of money, from the biosciences at the National Science Foundation, and we are trying to reach out and create another working group on that topic. This is a pie chart of the money that we've had for the first five years. We're now entering year seven. The big blue portion of the pie is the core funding that we get, and we've got that, possibly a little bit more, for this next round. It's up to the integration facility to pull in any other funds to make this whole operation happen. So, we provide a lot of workshops and symposia and give a lot of presentations. But above all, we're trying to develop this as a community.
And there was a program manager who couldn't make it, who was going to tell you all this, and now he's told me to tell you: they want this community to work together. Not necessarily to speak with one voice, but certainly to speak as a community. The agencies like the idea that we're self-organizing the way that we're doing it. So keep up that kind of communication. There will be discussions later today that we'd like you to participate in, to develop your own communities, your own wishes, your own desires. Part of that today is that we're going to ask you to look into your various focus research groups or working groups, pick a model or two in the next year, and wrap it. It won't take long; it'd probably take two or three of you a day or two or three days, depending on the complexity of the model. That would allow the model not only to work within our modeling framework; it would likely be able to be imported into half a dozen other modeling frameworks. So that's one discussion we want you to have. And as you develop and write these models, NSF tracks this, other agencies do too, so send in your new code. We support you running your code one way or the other, either for development or for solving problems, on our high-performance computers. We have a couple here: a learning one, 700 or 800 cores, and a bigger one, around 8,000 cores, about 150 teraflops. Anyway, you have access through this community to these resources. And if that's not enough, there's another community computer just up the road on the border between Colorado and Wyoming, called Yellowstone. That one has another order of magnitude, almost two petaflops. So there should be no lack of computing power for our community.
We're happy to host any of your products, and they may be simulation models, or you could download these products from our website and use them in your classes if you happen to be a professor or instructor. We welcome you to put any educational products on our site for others to use; these include clinics, labs, and other things like that. There are lots of ways you can contribute to our resources. At the integration facility here in Boulder, we host students and postdocs and visiting scientists. We're really designed and set up to host visiting scientists, because the resources have to come through our other grants if folks want to work here. So please think about that, and possibly participate or tell a friend about these resources that are available. We are fortunate enough to have Jeff Kars, the senior editor-in-chief of Computers & Geosciences, who's going to be talking in a few moments. CSDMS has put out, over the last decade and a bit, three special issues, and we're planning, with all your support, to put out another special issue. So we're asking Jeff to come and say a few words about what that journal is and how that's all going to go for such a special issue. The ones that we put out have been quite popular in terms of citations for each of the papers. So you may want to consider submitting; it's going to be on our conference's theme. We think uncertainty and model intercomparison are great areas for us to move into. So, the Basic Model Interface. I think that's Scott, is that you? Scott is going to be giving one of the clinics on this. Here's the idea behind it. Scott, Eric, Mark, Bechuan, Jisama, and the other software engineers who've been working with us have scoured the community's model coupling frameworks and asked: what's common about all of these coupling frameworks?
And that commonality could lead to something called wrapping a model in a Basic Model Interface. If you just wrap your model that way, then once you submit it to our framework we can automatically run it, you'd have access to all the service components and the framework itself, and it would be able to be coupled with the other models in our system. But here's the most important thing: those other frameworks will be able to grab it too and use it in their frameworks, so you would be able to propagate your modeling efforts much more widely. We'd like you to talk about that today. Start picking and choosing, and start working together. This is something the community now has to do: put a little time and effort into getting your models wrapped in a Basic Model Interface so that the rest of us can make use of them. When you do that, it provides you with what we call service components, and there are more and more of these in our system. We're also running another clinic on the web modeling tool. We've put a lot of effort into making this a very battle-hardened set of code, so that anyone will have a hard time breaking it. Where before we were at the proof-of-concept stage, we're no longer at proof of concept; we want operational code. It allows you to have a model in one language talking to a model in another language. You may have a raster grid talking to an unstructured grid. All of these services and components, working on a high-performance computer, would be yours. They will walk you through and show you some of these new ways of doing things, so I'm not going to put time and effort into explaining this; they will, hopefully, do a good job. But for each model, in addition to wrapping it, there's the metadata that comes with it.
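To make the wrapping step concrete, here is a minimal sketch of what a BMI wrapper can look like in Python. The toy diffusion model is a hypothetical stand-in for your own code; the method names (initialize, update, finalize, get_value, set_value, get_current_time) follow the published BMI specification, and the standard-name string is illustrative.

```python
# Sketch of wrapping an existing model in a Basic Model Interface (BMI).
# The model itself is untouched; the wrapper is a thin, framework-facing
# shell that exposes the standard BMI calls.

import numpy as np


class DiffusionModel:
    """A toy stand-in for your existing model code."""

    def __init__(self, n=20, kappa=0.1):
        self.z = np.zeros(n)
        self.z[n // 2] = 1.0  # initial bump of sediment
        self.kappa = kappa
        self.time = 0.0

    def advance(self, dt=1.0):
        # simple explicit diffusion step on the interior nodes
        self.z[1:-1] += self.kappa * dt * (
            self.z[2:] - 2 * self.z[1:-1] + self.z[:-2]
        )
        self.time += dt


class BmiDiffusion:
    """BMI wrapper: the only part a coupling framework ever sees."""

    def initialize(self, config_file=None):
        self._model = DiffusionModel()

    def update(self):
        self._model.advance()

    def finalize(self):
        self._model = None

    def get_current_time(self):
        return self._model.time

    def get_value(self, name):
        # expose internal state under a standard name (illustrative here)
        if name == "land_surface__elevation":
            return self._model.z.copy()
        raise KeyError(name)

    def set_value(self, name, values):
        # let another component reach in and overwrite state
        if name == "land_surface__elevation":
            self._model.z[:] = values
        else:
            raise KeyError(name)


# A framework drives the model entirely through the BMI calls:
bmi = BmiDiffusion()
bmi.initialize()
for _ in range(10):
    bmi.update()
elev = bmi.get_value("land_surface__elevation")
```

The point of the pattern is that the framework never imports `DiffusionModel` directly; any model wrapped this way is driven through the same half-dozen calls.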
And this metadata is becoming more and more important as web crawlers and others start to grab hold of it and keep track of who's writing what. We also want the ownership of all of these models to stick with the person who wrote them. I think that's very important. And we don't want a black box: when you click on one of these models, you get the metadata and the list of all the equations. You stop the black-box mentality that seems to be developing, where people use models but don't know anything about the models themselves. So you can help us. You're the community that writes the models and uses the models, so you can help us make sure we have the proper metadata. Last year, we had a keynote talk on Dakota, as an early indication of why we want to go down this road of dealing with model uncertainty. Our community is demanding more and more of our modeling efforts, and they want to know how uncertain our predictions, our simulations, are. Not every model has to deal with these kinds of uncertainties, because some are more thought experiments. But for the many models that are operational, or could be, we certainly want to know how error propagates into and out of the system. There are errors coming in just from the boundary conditions and initial conditions. There are errors in the models themselves and in the choice of the processes we choose to model. There are errors in how we propagate from one model to another within the modeling framework. And then we're comparing the output to data that itself has errors. So we're going to have another clinic on Dakota. We're still looking for the magic bullet: the structure that would give us a service component, maybe Dakota, available to all the models that we couple and use.
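The first of those error sources, uncertain boundary conditions, can be explored with even a very simple sampling experiment. The sketch below shows the basic idea behind tools like Dakota: sample the uncertain inputs, run the model for each sample, and look at the spread of the outputs. The sediment-flux "model" and the uncertainty ranges here are hypothetical, purely for illustration.

```python
# Monte Carlo propagation of input uncertainty through a model:
# sample uncertain boundary conditions, run the model per sample,
# and summarize the spread of predictions.

import random
import statistics

random.seed(42)


def sediment_flux(discharge, slope):
    """Hypothetical model: flux ~ Q^1.5 * S."""
    return discharge ** 1.5 * slope


# Uncertain boundary conditions: discharge known to ~10%,
# slope to ~20% (normal distributions, for illustration only).
samples = []
for _ in range(5000):
    q = random.gauss(100.0, 10.0)   # discharge
    s = random.gauss(0.01, 0.002)   # slope
    samples.append(sediment_flux(q, s))

mean_flux = statistics.mean(samples)
std_flux = statistics.pstdev(samples)
# std_flux / mean_flux is the propagated relative uncertainty:
# input errors of 10% and 20% combine to roughly 25% here, because
# the Q^1.5 nonlinearity amplifies the discharge error.
```

Dakota does far more than this (smarter sampling, sensitivity indices, calibration), but every one of those methods reduces to some disciplined version of this loop.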
Another topic: we're going to have our communities break up into the various working groups and discussion groups and talk about model intercomparison. Now, some of our communities are really big on this. If you're in the ice community, the glacier community, you're probably already ingrained in model intercomparison efforts. If you're in the soil community, there are model intercomparison efforts going on already. If you're in the ocean community, there certainly have been experiments that have been run, maybe not with this generation of ocean models, but with models of the past. So we're looking to get this community to start thinking about exactly what types of models out there could be compared one to the other, so that we all go in with the same knowledge. This model is slow and it's not very good; this model's fast and it's just terrific, let's use that one. Or this one is slow but it is so accurate, and this one's fast but it's just spitting out nonsense. That's another way of comparing models too. I'm joking, but at some level, setting up model intercomparisons is a good thing. You have to identify the models, but you also, as a community, have to start thinking about what a fair experiment is. Because if you pick this experiment, you may favor one model; if you pick that experiment, you may favor another model. So the choice of model, the choice of data, all of this is part of the discussions we hope to have here. I think Scott will also be talking about standard names in his clinic.
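As a preview of the standard-names idea, here is a hedged sketch of how two models can advertise their variables under shared names without renaming anything internally. The mapping pattern is the point; the particular name strings below are illustrative, modeled on the CSDMS object__quantity convention.

```python
# Each model keeps its own internal variable names; it only publishes a
# mapping from standard names to those internals. A framework can then
# match the outputs of one model to the inputs of another by standard
# name alone, with no hand-built dictionary between every model pair.

MODEL_A_EXPORTS = {
    # standard name (illustrative)    -> (internal name, units)
    "channel_water__volume_flow_rate": ("Q", "m3 s-1"),
    "land_surface__elevation":         ("elev", "m"),
}

MODEL_B_IMPORTS = {
    "channel_water__volume_flow_rate": ("discharge", "m3 s-1"),
}


def match_exchange_items(exports, imports):
    """Pair up variables two models can exchange, by standard name.

    Returns (standard_name, source_var, dest_var, units_agree) tuples.
    """
    pairs = []
    for std_name, (dst_var, dst_units) in imports.items():
        if std_name in exports:
            src_var, src_units = exports[std_name]
            pairs.append((std_name, src_var, dst_var, src_units == dst_units))
    return pairs


pairs = match_exchange_items(MODEL_A_EXPORTS, MODEL_B_IMPORTS)
# model A's internal "Q" can feed model B's internal "discharge",
# and the units flag tells the framework whether conversion is needed
```

Neither model changed a line of its internal code; only the published mapping makes the coupling possible.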
I don't want to say too much, but at the very least, you have to understand that if we had set all of this up and never dealt with standard names, then every time one model wanted to couple with another, you would have had to figure out what that model called discharge, or velocity, or tectonic uplift, or whatever. You would have had to create your own dictionary so that model A could talk to model B. So standard names are just some structure where you don't change the parameters in your code, but you do say: here are the parameters that I'm going to offer up to some other model; here are the things that I would allow another model to reach inside my code and maybe change; and here are the units I use and here's what I call it. Scott will talk to you about that, but it's important. All of our coupled models within CSDMS now have this in the background. You never see it up front, but it's there, and I think it's very important. We'll walk you through that. Finally, just recently, we've got so many observational systems pumping data our way that we could start making changes even in how we model things. What you're looking at in this first animation is basically a dam break in Papua New Guinea. It's a copper mine; the whole thing released, and all the tailings ran down. It took a number of years; each one of these images is one year. The one in the middle is the Mississippi, and you're seeing subsidence before your eyes: you're seeing South Pass just disappear, sink below the water line. Again, this is 1984 to 2012, one image per year. You can see meanders. This opens up so much possibility. You can start modeling one of these systems, then maybe update your model, ingest the new domain into your system, continue the model run, and maybe update it again.
Maybe learn from how your model didn't predict what was happening. There are all sorts of possibilities with these new observational systems. This one is the growth of a delta. There was no delta in 1984; now there's a delta, because in Madagascar they chopped all the trees down, all the sediment came down the river, and it created a delta. So again, observational systems working with the modeling community, with us, is rich ground, and we should pay attention to that. So, getting on to the final statements of the meeting: this is what we have today. We have a few welcoming talks: Pat's, mine, Jeff's. Then we'll have our first keynote speaker, Peter. So, two keynotes this morning, then our group discussions. For the group discussions, we've got four rooms and eleven groups, so I'm going to suggest we combine some groups at the beginning. Hopefully the chairs will coordinate a little bit of this; maybe they'll share some material, and then you can break out into different sections of the room to talk about the needs of your own community. Do you have some models that you think are good targets for being wrapped in a Basic Model Interface? Or any other topic you may wish. That's this morning. After lunch, which is going to be out here, there are going to be three clinics; hopefully you signed up for them. Then there are some more keynotes, and then, of course, posters. Let me say a few words about posters. Every year, we highlight and emphasize the posters more and more, giving them more time, more opportunity for you to talk with one another. This is the true incubator. But I wanted to point out that not only are you to be out there looking at the posters and talking and reading and drinking beer, but we paid for the beer out of our own pockets. And so we will be doing what we did last year.
There will be a hat for you to contribute some money, so the staff at the integration facility doesn't have to bear the whole cost of your beer; you can contribute a little bit. So that's statement number one. Statement number two: there will be a best poster award, announced at the banquet tomorrow. And so you need to vote. Since there will be some posters today and some tomorrow, don't just vote on the posters tomorrow; pay attention to the ones today. And you know what they say: vote early, vote often. Seriously, just vote once, but pay attention to the best poster. They get a nice prize. So that's today. Tomorrow: more keynotes, more clinics; we're going to do the model intercomparison discussions, more keynotes and posters, and then the banquet, where we're going to present the Lifetime Achievement Award, the best poster award, and the student scholar award for the year. And finally, Thursday. Again, that'll be keynotes, but we'll have this discussion on uncertainty. Think about various ways of handling it, because we want to put a big push on this, and it will also hopefully feed into how we design this special issue. Then clinic reports and wrap-ups. We're hoping that for each of these discussions, in each of the rooms, there will be a rapporteur, and there will be a master rapporteur, the person who reports back on Thursday. There may be some breakout rapporteurs, note-takers, who should then feed their notes to the master one. So be aware of that too. Any other logistics, Albert, Lauren? So, the back of your name tag has important information. It tells you who you are, where you are, why you are, how much you're worth. This one here, $250,000. Good. In my case, it doesn't have anything, so I suggest I'm not going to any clinics. No, I'll go to all of your clinics. OK, final slide.
So, our meetings are about state-of-the-art surface dynamics modeling. This is our community. We're going to deal with this through posters and clinics. Learn the latest from the best and the brightest. Maybe even develop some incubator ideas and write some proposals together. This is a learning opportunity: take clinics, sit in on them, learn from the masters themselves. Every year we put on a suite of clinics, about 10 or 12 of them. Sometimes we invite people to come back and give one again, and sometimes we wait a year before we ask them to. We're well aware that we put on good clinics at the very same time as other good clinics; that's just the way it works. That's why we have people come back in a year or two and maybe give it again, for the people who missed it the first time. We will have these focus group activities: increasing the number of Basic Model Interface components, advancing the sub-discipline fields through model intercomparison projects, and understanding model and component uncertainty. So with that, I'm going to end here with four minutes to spare. Feel free to ask me any questions before I turn it over to Jeff. I've been told that if you have a question, go to the mic, because this is being live-streamed, and if you talk into the mic, our friends from away can hear your question and not just my answer. Is it working? Yeah. "Some nice animations. Are they available?" Yes, we're going to put them up. I've got about 50 of them; I don't know how many we have on our site, and I'll just keep making them. But I guess my point is that as modelers, we should start thinking about data ingestion, updating models, updating parameter space. We're now starting to get the observational products that allow us to do that. There are some great ones. One is a meander that approaches a town and whacks into it. You should see that one too. OK, other questions? Mary.
"Hi. I just wanted to say that what has fascinated me during my career is using uncertainty and sensitivity analysis to really look into the model, to make things more transparent than they would otherwise be. And I often get surprised about what's important, or what's driving what. So I just thought I'd put in a pitch for that. Thanks." Yeah, I think that's a great point. And because we pick and choose the number of processes we model, and we don't model all the processes, models are simplifications, we sometimes get the right answers even though we don't have everything in, and that's kind of crazy too. Which means maybe those processes weren't that important, or we have something else that's doing magic. Other questions?