All right, thanks very much for that introduction, and thank you to the organizers and to everyone here. I was actually quite surprised when I first got the invitation, because I'm not from this community. I'm someone who works with researchers across lots of different disciplines to try and do what it says on the back of my t-shirt, which is to ensure that better software leads to better research. But the more I looked into it, and looked at some of the presentations given at previous annual meetings, the more I realized there's a lot of similarity between the work of CSDMS and the work that the Software Sustainability Institute does in the UK, and also in what you as a community are trying to do around using modeling to solve some of the biggest problems we're going to face in the next few years. Is everyone okay with my microphone just now? I realize it's cutting in and out a little bit. Great. The other thing that was really lovely to see was that when Greg was introducing today, he mentioned research software engineers. Research software engineering is something we've been a big advocate for, and the last time I was in Boulder was for a conference where we managed to persuade a number of funders to support research software engineering as a practice. That has led to where we are now, with research software engineers at universities and laboratories, and providing help through the help desk. So this is a fond return to Boulder for me, but it's not really what I'm here to talk about. What I'd like to talk about is where I think things are going in terms of a couple of policies and processes that might be of interest to you as a modeling community. One thing that's happened in the last couple of years is that, at the highest levels of government, people have started turning their attention towards science and what it can do for the world.
Two of the largest organizations that shape what I'll call political policy have made statements about the role of software in doing science: the OECD, which looks at economic policy, and UNESCO, which looks at scientific and cultural policy, have both said that the sharing of software, and the ability to see it reused by others, is fundamental to the way we do science today. So what does that mean for us as practitioners? What does it mean for the way we do this work? I think the problem we have is that for a long time science has never really had a culture that incentivizes sharing. If you're looking for something that is both horrifying and funny, and also in some ways uplifting, I urge you to hunt out the work of Victoria Stodden, and in particular one of the papers that her group wrote looking at the effect of sharing policies at some of the major publications. In this case it was the journal Science, which introduced a policy on data and code sharing back in 2011. She went back to see how much was actually being shared, whether the data and the software were available, and some of the responses she got when asking for that code and data were interesting. I won't read any of them out, but you can see that some of them are not necessarily the ones you might wish to see, and some of them are downright disturbing, to be honest. They show a level of disregard for the fact that it's often students who do a lot of the work here, and refusing to share your students' data and code really doesn't help further the case for science. So it's all very difficult, and sharing is all very disincentivized. So what can we do to help?
The work we've been doing has really been about trying to understand how many people use software in their research, how important it is, and where people get the training, the understanding, and the skills to actually do the better software that leads to better research. Surveys done in the UK and also in the US show that the majority of people, almost 70%, could not do their research without software, and about 90% are using software in some way. One of the challenges is that most people are taught informally how to develop software and how to develop models, so what we have is a set of people who are effectively learning as they go along. There are things that are helping with this. Hands up, who's heard of Software Carpentry or Data Carpentry in the room? Excellent, that's a really good sign. If you haven't heard of Software Carpentry or Data Carpentry, turn to one of the people who put their hands up in the break and ask them about it, because this is something we've been really keen to promote in the UK and internationally. The Carpentries as an organization isn't just about training people in code development and data management; it's also about building a community of people who are willing to help each other learn how to do the better software, better research. So yes, speak to people in the break if you've not heard about these things. But what we do goes beyond just the training aspect. The Software Sustainability Institute was set up in the UK by the research funders to try and understand the different challenges facing the people they were providing funding to. And now we do a whole set of different things: software consultancy, including with a lot of the different groups in the UK who are creating models like you all are, and training for the Carpentries, as I mentioned.
But in particular, there are two things I'm going to talk a little bit more about: the work we're doing on community development, our fellowship programs, our events and so on; and the things we're doing in terms of international policy, trying to understand how we can make those little nudges, those little changes, that will help all of us achieve what we need in terms of better sharing of software, processes and techniques. So I'm going to talk mostly about two things we've been working on. One is the FAIR principles for research software, and the other is software citation guidelines. I'll start with the FAIR principles. Actually, this is a good point to ask: who has heard of the FAIR principles for research data? Okay, so not as many as the Carpentries, but there's still a reasonable number of you. The FAIR principles for research data came out of work to understand how we balance the production of data as part of research with its reuse. They enshrine four foundational principles: things should be findable, they should be accessible, they should be interoperable and they should be reusable. Particularly where I'm based, in Europe, there is a very strong push from European funders to ensure that people make their data available following the FAIR principles. FAIR is similar to open, but not identical: there's this concept of things being as open as possible, but as closed as necessary. That's often the case in some of the areas I work in, such as medical informatics, where we try to share as much as we can with the understanding that some things can't be shared. So the FAIR principles are really about helping people make their work more findable, more accessible, more interoperable, and in particular more reusable.
And what I've been doing, alongside a whole set of other people from across the world and across the community, is understanding how we can make this apply to software as well: what do the FAIR principles mean for research software? Working in particular with the Research Data Alliance community and also with FORCE11, which is about the future of research communication, we've been defining what the FAIR principles, and I realize you won't be able to read the individual principles here, mean for research software. What we're trying to do is understand the differences between software and research data, and break the principles down into steps people can take to improve the way they develop software. So this is maybe an easier way of looking at things, and possibly a little more readable. We're looking at the findability of software: how much people are using persistent identifiers such as DOIs, and descriptive metadata. It was lovely to see the bit that was up there earlier about linking models both to data described with schema.org schemas and to the associated research publications. Then accessibility: making sure that both the software and its metadata are easily retrievable. Then interoperability, and I think this one is a particular challenge and benefit for this community, because of the way models need to be coupled and made interoperable across the different data formats you're using; so making things interoperable using application programming interfaces, standards and references. And ultimately, reusability: making software reusable by documenting it, licensing it and following community good practice. But all of this is quite formal and quite dry, and one thing I realize is that I don't have pictures as good as the other presenters today.
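To make the findability point a little more concrete, here is a minimal sketch of a CodeMeta metadata file, the kind of machine-readable descriptive metadata that lets catalogues and search services find software and link it to data and publications. This example is mine, not from the talk: every name, URL and DOI in it is an invented placeholder.

```json
{
  "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
  "@type": "SoftwareSourceCode",
  "name": "example-coastal-model",
  "description": "A hypothetical coastal evolution model, used only to illustrate CodeMeta.",
  "identifier": "https://doi.org/10.xxxx/zenodo.0000000",
  "codeRepository": "https://github.com/example/example-coastal-model",
  "programmingLanguage": "Python",
  "license": "https://spdx.org/licenses/MIT"
}
```

A file like this, conventionally named `codemeta.json` in the repository root, pairs a persistent identifier with descriptive metadata, which is exactly the findability step described above.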
And unfortunately you're not going to get any good pictures here either. So we're trying to break this down into what it means if, say, you're a PhD student just starting out, or you're trying to persuade other people in your group about what you should be doing. We've broken it down into six things that we think are the important takeaways, and there's a large crossover with the work of the Carpentries community in defining what they call good enough practices for scientific computing. It's not about being the best, or following the perfect practice; it's about working out what you can do to make things 5 or 10% better. Because if everyone is doing things 10% better, we'll be a much better community than if a few people are doing things 60% better. So what I'd like people to take away is this idea of making code readable, reusable and testable. It's about using code repositories and version control, which I know a lot of you are already doing. It's about licensing your software and making that license clear. A lot of it is about documenting for your future self: writing those notes and comments to ensure you can understand what you're going to be doing in the future and what you did in the past. It's about splitting things into smaller, more modular parts that are easier to couple, to make interoperable, and to share with other people. It's about using libraries as much as possible for common functionality, so that you're working on common code. And the last one, which I think is possibly the most important for a meeting like this, is sharing your code with others. Sharing your code is useful in many different ways. It gives you a way of getting feedback.
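As a tiny, hypothetical sketch of what "readable, reusable, testable" can look like in practice (the function and its behaviour are invented for illustration, not taken from the talk):

```python
"""A small, modular helper, written the 'good enough' way.

Everything here is a made-up example: a short function with a
docstring (documenting for your future self), a clear error for bad
input, and a test you can actually run.
"""


def relative_error(observed, predicted):
    """Return |observed - predicted| / |observed|.

    Keeping helpers this small makes them easy to reuse, easy to
    couple into a larger model, and easy to test.
    """
    if observed == 0:
        raise ValueError("observed value must be non-zero")
    return abs(observed - predicted) / abs(observed)


if __name__ == "__main__":
    # A minimal test: shipping even one test like this makes it far
    # easier for a colleague to trust and reuse the code.
    assert relative_error(10.0, 9.0) == 0.1
    print("ok")
```

None of this is sophisticated, and that is the point: version control, a license file, a docstring and a runnable test are the 5-10% improvements the talk describes.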
I loved the point that Georgie made in the opening presentation about being a little unsure about her MATLAB code and wanting to look into the Python code that her collaborator has created, because I think that's a large part of it. We can all feel imposter syndrome about our ability to understand other people's code; I know I do. I basically started the Software Sustainability Institute because I recognized I was a very average developer of code, and there were probably a lot of people like me out there. So sharing your code allows you to get feedback. It's the best way of finding bugs, and getting colleagues to use it is a really valuable way of understanding the code yourself. Publishing your code is more about sharing it with others: can you deposit it in a repository? Can you cite it in your papers? Can you gain other collaborators by sharing it more widely? And one last thing: I'd encourage you to contribute back to others. If you're using someone else's code, as I encourage you to do, work out what you can contribute back, although you shouldn't necessarily expect anything directly in return. Can you file bug reports? Can you improve the documentation? Can you just give words of encouragement to the person who developed that code? Because one thing we're really bad at as researchers is sometimes just saying, actually, that was a really good thing you did there, and I really benefited from it. So yes, let's be kind in science. There are now lots and lots of different options for publishing code. I'm not going to go into this in detail; I'll put the slides up so people can look at the details if they want, and feel free to chat to me in the breaks. I'm always happy to talk about this. But one thing I did want to say is that one really good place to share your code is the CSDMS model repository. It supports all the things I've been talking about.
Metadata, a place to link different references, a place to put in the best citations: that's all in this model repository. It might sound like I'm just pandering to the organizers, but I was really impressed by the work that's been done in this area, because here you're ahead of the game. A lot of other disciplines don't have this opportunity to share; they're still sharing by email, still working very much in silos. So embrace this and make sure that you use it. One last thing to finish up on. The other work we've been doing, and here we've had really strong support from Shelley Stall at the American Geophysical Union, is around software citation: persuading journals in particular to give people more credit, so they feel incentivized to publish their code and their data. We've created two checklists, one for authors and one for developers, to make it much easier to understand how you create citations for code and how you ensure that people cite your code more often. I'd encourage you to take a look at these, to make it really easy for people to cite your code, and, whenever you're writing papers yourself, to mention the code that you're using from others and ensure they get the credit for it. So thanks very much. I'd like to throw it open to questions now, but I'll very quickly say that this isn't just me: this is a whole huge list of people we work with, across both the SSI and the wider community. And thank you very much to our funders at UKRI for all their support. Thank you.

Question from the audience: How do you encourage the people who, when you ask them to put it on GitHub or something like that, say, well, let me just clean it up a little bit first?

Yes, that's a really good observation, and there are two ways we've looked at doing this.
So one is the way that we hope will become the practice going forward, which is encouraging people to be open and do it from the start. It's much easier if people are already using version control, say with GitHub as a private repository, to then just flick the switch and make it public, because the big barrier isn't there. The other way, which is the case for people who've already started developing their code, is persuading them that it's probably good enough already. And it almost always is. The challenge is that there are still some people in the community who don't see it that way, and sometimes it's the best people in the community who pile in a little and say, well, you could have done this better, you could have followed this practice better. I think we've got to make the shift to saying that any improvement in practice is better than none. So it's up to those of us who are doing things well to tell people that their code is actually good enough to share, and that the very act of sharing is the most important thing, not the code being clean. In other work we've looked at how often code is updated after it's been published alongside a publication, and the graph is quite asymptotic: basically, almost every GitHub repository mentioned in a paper is not updated after the paper has been published. But that's okay, because the code is out there. So yes, good question. Thank you.
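Coming back to the software citation checklists mentioned earlier: one concrete mechanism, offered here as my own illustration rather than something shown in the talk, is a CITATION.cff file in the root of a repository, which services like GitHub and Zenodo can read to generate a citation. All names, versions and the DOI below are invented placeholders.

```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it as below."
title: "Example Coastal Model"
version: "1.0.0"
doi: "10.xxxx/zenodo.0000000"   # hypothetical DOI
authors:
  - family-names: "Researcher"
    given-names: "Ann"
    affiliation: "Example University"
repository-code: "https://github.com/example/example-coastal-model"
license: MIT
```

A file like this answers both checklists at once: it tells authors exactly how to cite the software, and it lets developers control how their code is credited.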