All right — the new presentation software worked, so I'm glad. I'm going to talk about JOSS, the Journal of Open Source Software. I've been an associate editor there for around a year.

So why does this journal exist? It's really different from other journals because we actually review the software; it's not just a review of a paper. In fact, the papers are relatively small. One of the prime motivations is that citations are seen as academic currency, and a lot of research can be effectively disseminated via software — in fact, software may be the best way to disseminate that research. So what should be cited? Some software projects just say, cite our web page, or cite our user's manual, which is a tech report off somewhere. Maybe there's a paper on arXiv. Often there's no dedicated software paper, but there might be a methods paper, and many of the authors of the software, at least at one point, maybe ten years ago, were also co-authors on that methods paper. That methods paper is sort of important for the software, so people say, well, just cite that. But that isn't very good for new contributors to the software, and it's not very accurate in terms of what you're actually citing: a lot has happened since that methods paper was written, and there's a lot in the software that may never have been in it.

So JOSS provides peer review for research software. And it's a real journal: it's indexed, we issue Crossref DOIs, and there's an editorial board that's growing relatively large now. The review process is open, and we focus on constructive review — I'll explain what we mean by that in a minute. The process is very developer-friendly: the review takes place on GitHub. JOSS is an affiliate of the Open Source Initiative and a sponsored project of NumFOCUS.

The submission process: presumably you have a software repository and you've been following some best practices, so you have a license, maybe a code of conduct, at least community guidelines for how to contribute, and maybe tests — if we're lucky, set up with continuous integration. (I'll sketch a typical layout in a moment.) You write a short markdown paper — this is the JOSS publication. It's often one to three pages: a description of what kind of research problems the software solves, but not duplicative of your project documentation. You submit that short paper to JOSS, and the editor-in-chief helps find a topic editor — you might suggest someone you think is appropriate; we'll meet the geosciences editors at the end of the talk. That editor finds some reviewers; most of our papers have between two and four. The reviewers raise comments against a number of points they're assessing, filing issues in your repository and/or commenting in the review thread, and you iterate on that. It's a relatively rapid process. Ultimately, the editor accepts the paper once the reviewers are all satisfied, and it's archived and published the same day.

To submit to JOSS, your software should be open source, there should be an obvious research application, and you should be a major contributor to the software. The software should also make a significant contribution to the available research software in its area. We're really not looking for software that's incomplete, or a one-off, throwaway project — JOSS is about feature-complete, serious contributions that are likely to be sustained. All of this should be available in a public repository, with an open issue tracker and so on; that goes with our principles of openness.
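To make those best practices concrete, here's one common repository layout. The file names below are conventional, not mandated by JOSS, and `mypackage` is a made-up project name:

```
mypackage/
├── LICENSE             # an OSI-approved open source license
├── README.md
├── CONTRIBUTING.md     # community guidelines for contributors
├── CODE_OF_CONDUCT.md
├── paper.md            # the short JOSS paper
├── paper.bib           # references cited in the paper
├── docs/               # project documentation
├── mypackage/          # the source code itself
├── tests/              # test suite, ideally run under CI
└── .travis.yml         # or whatever CI configuration you use
```

None of this is JOSS-specific — it's the same hygiene any open source project benefits from.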
The JOSS paper itself just includes a little bit of metadata at the top of your markdown file. I'm using Katy's recent publication as an example here: it's a Landlab module — something developed in, and referencing, the Landlab repository — and JOSS was able to work with that. So you have some metadata, then the content, and the first paragraph uses our markdown-style citations: you have your BibTeX file and you can cite just like this. It uses Pandoc markdown, so math, display equations, figures with captions — all of that works. I'll show a small sketch of one of these in a moment.

These papers are relatively short. If you have mature software and you've done the legwork of following best practices, it probably takes an hour or two to put one together; if you have a lot more work to do, it may take quite a bit longer. In the paper you'll want to address things like: what research problem does the software solve, and how does it compare to other software that might be available? You're trying to help the prospective user choose appropriate software, so you want to make a fair comparison. But you're not trying to duplicate anything in the software documentation — you're not describing APIs or walking through a big tutorial. That's not the point of this paper.
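Here's a minimal sketch of what such a paper looks like. The metadata fields follow the format JOSS documents, but the title, author, and citation key are invented for illustration:

```markdown
---
title: 'FlowSim: A Python package for simulating overland flow'
tags:
  - Python
  - geoscience
  - hydrology
authors:
  - name: Jane Q. Researcher
    orcid: 0000-0000-0000-0000
    affiliation: 1
affiliations:
  - name: Department of Geosciences, Example University
    index: 1
date: 20 May 2019
bibliography: paper.bib
---

# Summary

Overland flow is a first-order control on erosion and flood hazard.
`FlowSim` solves the shallow-water equations on unstructured meshes,
extending the approach of @Smith2015, and is designed to couple with
existing Python-based modeling tools...
```

The `@Smith2015` key points at an ordinary entry in the accompanying BibTeX file (values elided here):

```bibtex
@article{Smith2015,
  title   = {...},
  author  = {...},
  journal = {...},
  year    = {2015},
  doi     = {...}
}
```

Pandoc handles the rest — inline and display math, figures with captions, and the citation rendering.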
So this process is pretty easy. Once you have your paper, you put it in the repository for your software project and fill out a very short form: where can we find the code, what's the software version, maybe a suggested editor. And then the review process starts.

The goal of the review process is to improve the quality of the software, not to make accept-and-reject decisions. Before we put a submission out to review, the editors make some assessment of the software, and there are some desk rejections — but usually that's for software that isn't a serious contribution in some sense, and that can be discussed with the editorial board. Once it goes out for review, our intent is to raise it to a quality that's appropriate to accept.

The review process is open: there's no blinding. Personally, I have a problem with asymmetric blinding. For, say, a conference proceedings, double blinding is often the best practice, but double blinding is really hard to do for software — and as soon as the authors of the software are known to the reviewers, we believe the reviewers should also be known to the authors. So the review is completely open and takes place on GitHub. Here's some example feedback from an author; a lot of authors find this process really constructive. Reviewers file issues with the project — an example here — and those issues are linked back to the JOSS review; GitHub does that kind of linking automatically.

The discussion continues until it converges, and you get reviewers saying, hey, this is really good, I'm happy with it, it's ready for publication. We have prizes for some reviewers. Ultimately the paper is accepted: there's a DOI, and the paper — a couple of pages long — includes all the information about the repository and the review issue, so you can go find all of that.

JOSS has been growing. This is publications per month: about 570 publications so far as of this week, and the journal has been around for almost exactly three years now. There are four geosciences editors. I'm one of them; the other three are Lindsey Heagy, Kristen Thyng, and Leo Uieda. Lindsey has been part of JOSS for quite a long time, Kristen joined in the fall, and Leo joined earlier this year.

On the sustainability model: JOSS is completely free to authors and to readers, and we're really committed to keeping our costs low so that nobody needs to be charged for any part of the process. We get some contributions, say through NumFOCUS, but our total costs are on the order of $3 or $4 per paper. At our current rate of publication, that's something we can sustain on donations.

To get involved: volunteer to review for JOSS, submit your research software to JOSS, and be sure to cite software accurately, whether or not it's in JOSS. I'm happy to take questions. Thanks.

Are there any questions for Jed? All the way in the back.

Yeah — there's no commitment that when you publish in JOSS you will maintain the software for eternity. However, the version of the software that was reviewed is archived on Zenodo, and Zenodo makes something like a twenty-year commitment to retain that archive, so it's as good as any other archive available. Actually, you can archive with figshare or any other archiver you prefer, but the software definitely is archived, so you can certainly recover the reviewed version. And at the time of publication, the project had an appropriate process for accepting contributions and so on. It's not really enforceable for us to say, and you will continue this for eternity.

Yeah, so we need the software to be archived somewhere, and a DOI is a good way of doing that. code.usgs.gov, I suppose, could become a public issuer of DOIs if they wanted to. But it doesn't really matter where it's archived. What matters is that the repository is somewhere the software can be developed as a community project: there's a way for people to contribute, and a way for people to browse the software and understand what's going on with its development, because that's part of research software development. The repository doesn't have to be on GitHub. It can be anywhere, as long as you can browse and get the code without needing to register for something, and there's a way to, say, submit issues or pull requests. Does that answer your question? OK, yeah.

So reviewers are expected to install the code, go through some tutorials, and run tests, for example. Basically, they're trying to assess whether the JOSS paper and the project documentation accurately describe the capability of the project. We don't tend to expect a detailed code review in the sense of "you have this code smell and you should refactor this class into something else."
Of course, that's welcome if a reviewer wants to go into that sort of detail, but it's not really scalable. Some projects are less than 1,000 lines of code, and someone could comment at that level; but other projects are huge — they represent a hundred person-years of development — and it's not reasonable to expect a volunteer reviewer to go through them at really fine granularity. What reviewers can assess is: does the software actually solve the problem it claims to solve, and does the paper accurately depict how the software fits in relation to other software in the area? That takes some domain knowledge, but it's at that level.

Yeah, so we can have multiple publications of the same software, but a new version would need some significant new capability. I would say that submitting a new revision more than once a year is certainly too frequent; for a really rapidly progressing project with lots of new contributions, once every year or two would probably be reasonable. You might have new authors involved, or significant new features. A different way to organize it is to say: we have this major new feature that is a significant enough contribution on its own, and it's being distributed as part of this larger package. Katy's lithology module fits that description — it was reviewed not as an incremental update to all of Landlab, but as a specific new capability.

All right, let's thank Jed for his presentation.