All right, good morning everyone. Thank you for joining us, and welcome to those of you in the room, those online, and those watching this as a recording. On behalf of the NCAR Fellows Association Professional Development Committee, I want to welcome you to part two of this conversation on overcoming obstacles in the publication process. Two weeks ago we had perspectives from early career scientists, and now we'll hear from more senior scientists who serve as both editors and reviewers of peer-reviewed manuscripts. This morning we have Doctors Scott Ellis, Olia Wilhelmi, and Glenn Romine. What I'm going to do is ask them, from left to right, starting with Scott, to introduce themselves, share their editorial experience, and kick off with the first question: what are some quick highlights, the SparkNotes if you will, of good and bad things that authors can do in the publication process? After that we'll open up to questions, both from the audience and from the chat room. So please join me in welcoming them.

Thank you very much for having us. It's a real honor to be here and talk to you. My name is Scott Ellis, as you know, and I started at NCAR in what was then the Atmospheric Technology Division, now EOL, in 1997. So I've been here quite a while. I'm an associate editor for the Journal of Applied Meteorology and Climatology, JAMC, and I've been doing that for five or six years now; I can't quite remember how long. So it's my pleasure to be talking to you. My little spiel is going to be a bit broad. I was thinking generally about this, and it's going to sound a little obvious. My wife always says I have a flair for the obvious. So excuse me if this seems a little silly, but we're all scientists, and our goal really is doing good science.
The publications we produce are part of that process, but they're not the end goal. A lot of times when I talk with people, they're really focused on the number of publications they can put on their CV, and I think that's a real danger. The goal should be the science, the good science that you do, and if you do good science, you can publish it. There's a real risk in over-emphasizing the number of publications. That said, publishing is a serious part of our enterprise of doing science; without peer-reviewed publications, our enterprise doesn't exist. So I encourage all of you to take it very seriously, whether you're a reviewer, eventually an editor, or an author. This is really serious. When publications come in, you can tell when someone is just trying to get something through. It's not serious, it's not in depth, and that kind of thing reduces the value of publication for all of us. I was talking with a group of young students a long time ago, and they were talking about the LPU, the least publishable unit. I didn't tell them this, but it really upset me, even though they were sort of joking around. They were saying, well, I can get three publications out of this study instead of one. That bothered me, so I thought of an example, and sorry to go on a little bit. How many of you have heard of the Madden-Julian Oscillation? The MJO, right? Have you read Madden and Julian's paper? It's a huge, in-depth, profound publication. With that data set they could have gone for the LPUs, right? They could have chased the LPUs and gotten three publications instead of this one, and none of you would ever have heard of Madden and Julian or the MJO. This is a transformative, serious publication, and it impacted the field in a way that created a new field.
There are people who spend their careers studying the MJO, and it started with this publication. So I think if you chase the LPUs, you might end up really limiting your impact and limiting your science. My advice is to focus on the science first, and then you'll be able to publish it. That's the goal. Try to avoid the trap of counting publications for your CV. As an editor and a reviewer, you can see when people are just trying to push a publication through that's very similar to what they've done before. You've got a story to tell with your science. Tell it, and you will be rewarded for that. So that's my spiel.

Yeah, so thank you for inviting me. I'm really honored to be here. I'm Olia Wilhelmi. I've been at NCAR for 20 years; I started as an ASP postdoc in 1999. I'm now a project scientist, and I also lead the GIS program here at NCAR, which stands for Geographic Information Systems. I've been a reviewer all those years, I've been an author all those years, and I also served as an editor for Weather, Climate, and Society. It's an AMS journal, an interdisciplinary journal that combines research from the physical, social, and behavioral sciences. The work that Weather, Climate, and Society publishes addresses a really broad set of topics, but the key is interactions between weather, climate, and society. I was an editor for three years, between 2015 and 2018, and it's been a really invaluable experience. To start off this conversation, having been in the author's shoes, the reviewer's shoes, and the editor's shoes, I would say the first and foremost thing you can do is to treat others with respect, and don't take things personally. Because the ultimate goal is peer review, as Scott mentioned.
It's such a critical part of the scientific enterprise. We have the peer review process to make sure that the science that gets published is robust, accurate, and actually worthy of publication. So again, it's the golden rule: treat others like you want to be treated. If you are writing a paper, think about the people who will be reading it. Make their job easy. Connect your research questions to your approach, your methods, your results, and your conclusions. Make sure your conclusions are supported by the evidence you're presenting, and make the reviewer's job as easy as possible. Well-organized papers are really helpful, because if reviewers and editors have to hunt for things in your paper and try to connect the dots themselves, that raises a lot of questions. So think about who is going to be reading your paper. Can they connect all the dots? Can they understand what you're trying to do? Especially if you're publishing in interdisciplinary journals, because not every single reader is going to be an expert in your field, but they will see connections to the work that they do. And the same thing goes for reviewers: if you're reviewing somebody's paper, think about whether you would like to receive those comments if you were on the other side. Provide constructive feedback and be respectful. If you disagree with something, explain why you disagree. Don't just say don't publish, or publish, nice work, great work, bad work. Explain what it is, because the ultimate goal of the review process is to improve the manuscript and to help the authors present their work in the best way possible. Good papers and good reviews make the editor's job so much easier. So that's all; I'll stop here and take questions after.

Great. Hi, I'm Glenn.
Thanks for the invitation to be here. I'm a project scientist with a joint appointment between the Mesoscale and Microscale Meteorology Lab and the Computational and Information Systems Lab. I've been here almost 11 years now. As for my reviewer and editorial experience, I've written papers and I've reviewed for a lot of different journals, mostly Monthly Weather Review and Weather and Forecasting within AMS. I've been an associate editor for, I guess, eight years with Monthly Weather Review and five years with Weather and Forecasting, and I just started as a regular editor for Monthly Weather Review. So now I get to see the other side of the animal. It's been an interesting journey in reviewing. There have already been some great points made by Scott and Olia, so I'll just add that probably the failure mode I run into the most with papers is this: when you do science, you come in with an idea of what you'd like to write a paper about, but sometimes your results don't support that idea anymore. People have a hard time letting go and recognizing what they can actually say from the results they have to work with. Often they try to shoehorn it in anyway, and you start off reading the paper thinking it's going to be about one thing, and then you get to the results and they have nothing to do with the original topic, or at least the support isn't there. That causes a lot of problems, because either the reviewer comes in thinking the paper's about one thing while you're showing something completely different, and they get confused, or they come in and say, well, you don't have supporting results for the statements you're trying to make. Those are bad things in the review process.
The other thing that can happen is that sometimes you come in without any framing at all: you just have a bunch of results and you throw them out there, maybe hoping the reviewers will provide you with some structure. They may interpret the results in completely different ways, and then all of a sudden they think you're trying to say something that you're not, and they'll say, well, you don't have any results to support this. Well, you weren't trying to say that, but you didn't lead them down the path. So you really have to structure the paper so that they know exactly what you're trying to argue and defend. Structure is probably the most important way to make sure they stay on target and on path. Usually the other things that come up end up being nitpicky stuff that gets you lots of revisions, but those are manageable. The things that get you rejected are usually unsupported results, confusion about what it is you're trying to present, or there just not being enough there. That's the other thing that will usually get you a rejection: there's not enough material for that journal. So it's important to recognize, when you want to submit to a particular journal, what the expectations are for how much content should be there, because different journals have very different expectations of what a paper should be. For some, you can make one point and it's totally fine; others expect a fairly complete assessment, a complete story. You just have to have a good sense of that before you submit, so you know what reviewers are used to seeing.
Within each journal there are what are known as associate editors, and when you're an editor you're almost always going to try to have at least one associate editor come in. That's like a super reviewer: someone who's probably reviewing 20 to 30 papers a year at least, and who has an expectation, when they get a paper, of what they want to see in terms of content, quality, things of that nature. If you don't meet that typical standard for the journal, they're going to be the first ones to pooh-pooh you, and when an associate editor says your paper's no good, you're pretty much sunk in the review process, because the editor is going to put a lot of weight on that review. I'll stop there.

Can you just clarify the structure of editor versus associate editor, and how that all works in the review process?

I can describe it for AMS; I don't know how it works in other journals. Within AMS, for each journal there's usually a single chief editor. Every paper that comes in, they do a quick check to see if it meets the rough standards of what the journal would expect, and they could potentially reject it right away if there are problems they see immediately. Otherwise they assign it to an editor. You would hope, as the editor, that the paper assigned to you is one whose topic you know something about, but there's no guarantee of that, because it depends on the loads of the other editors. So you may get a paper whose topic you don't actually know very well, but you still have to handle it. Then, at the beginning of each year, each editor can pick a team of people they want to help them in the review process; within Monthly Weather Review, for instance, you can usually pick two to three associate editors.
So I get to go through and basically say I'd really like to have X, Y, and Z as my associate editors. You can pick people in your area that you want to have available to you, that you can lean on to do papers. When you're asked to be an associate editor, there's an expectation that if the editor who invited you asks you to review a paper, you'll more often than not say yes. Monthly Weather Review handles a lot of papers, so they have a lot of associate editors on the team, but generally speaking, as an editor, when you go to pick reviewers you almost always want at least one associate editor looking at the paper, because it's someone whose reviews have a track record you can trust. All reviewers get rated, like it or not. When you go through the editorial process, you get to assign how good you thought the review was from each of the reviewers, and that scoring gets saved and distributed as a spreadsheet to all the editors. So when I'm picking my associate editors, I can go through the spreadsheet and say, hey, I see Jared has a score of whatever, and oh, he said no to 20 of the papers he was invited to review last year, and he tends to be five days late getting his reviews back. I don't know if I want to pick Jared to be my associate editor. I'm just picking on Jared because he was sitting in the back and doesn't want to offer comments, I guess. So that's the kind of setup you have in terms of picking reviewers: there's some metadata that exists to help you make that choice. And if I pick someone who historically has a good track record of reviewing and they don't like your paper, you're probably in trouble.
Along those lines, we have a question from online: in your own careers, would you recommend becoming either an editor or an associate editor, and how has this impacted your career, for better or worse? Olia, we can start with you.

I think it's a really great experience. I have not been an associate editor, but I have been an editor, as Glenn has described. And I think it's really valuable to see the other side of the peer review process, because you not only start learning what makes a good paper versus a not-so-good paper, but you also gain appreciation for good reviewers versus not-very-helpful reviewers, and it helps you become a better reviewer in the process as well. Finding reviewers can be really challenging, especially because all of these roles, editors, associate editors, reviewers, are essentially volunteer jobs. When you're asked to do this, it's an extra load on top of whatever else you're doing. And that's where you really start seeing and appreciating the people who put in careful consideration and a lot of thought, who are prompt, who accept or decline your invitation on time. These things matter, because the review process can get really long if you can't find reviewers; it just prolongs everything. I also want to add to what Glenn was saying: at Weather, Climate, and Society, the way I often used the associate editors was when I had conflicting reviews, somebody loved the paper, somebody hated the paper, and I was not a subject matter expert on that particular topic. That's when I could rely on associate editors who might have that expertise. They would be almost like a referee, in a way, and that helped me figure out how to proceed with those decisions. So yeah, I think it's definitely a valuable process that helps you a lot.

Just one quick follow-up question on that.
When you reach out to an associate editor to essentially be a tiebreaker, do they know that they're a tiebreaker in that instance?

Usually, yes. I don't share the other reviews with them, but I just say, I have this paper, it looks like the reviewers disagree on how to proceed, and I would like your input on that.

I've actually had it the other way too, where I was shared the review in question and more or less asked to review the review: is this correct? And that's difficult. So when you're reviewing, as Olia said, be respectful and do a thorough job, because this is a small field and editors are prominent people. It's also part of your own reputation, like it or not.

Can you give some insight on a situation that sometimes happens: you submit a paper and get major revisions back, but of the reviews, one said reject and one said minor revisions, and the editor decided on major revisions. You're left thinking, well, what do I do now? It's this weird limbo. I wonder in which situations that happens.

Yeah, it does happen frequently. With the editors I've worked with and the papers I've seen like that, the editor usually gives you some comments and should be able to say, there were some helpful comments, but in particular these things seem to be a weakness of the study. So the editor is giving you a clue as to what they feel is important. If they don't, I'm not sure when would be the right time to approach the editor, but I've had a colleague who jumped through hoops to do all these crazy things a reviewer was asking for, and the editor didn't think they were necessary, and at the end said, why didn't you just talk to me? So I think it varies with the editor, but usually there's some information there that you can take.
If there's not, then it might be worth asking the editor. Yeah, Andy?

You mentioned that finding good reviewers is so difficult. Some of my past co-authors suggested that every time I submit a paper, I should suggest some reviewers. I was wondering how important you think that is, and what are good strategies for suggesting reviewers, like how to avoid obvious conflicts of interest, so the editor can see that you respect the process?

I think it's definitely helpful to provide suggested reviewers. At least with AMS journals, we have a reviewer database that we can use to pick reviewers, but I find that sometimes it works and sometimes it doesn't, because maybe the database hasn't been updated, or maybe a broad area of expertise is listed but the person isn't familiar with the specific method or approach the paper is addressing. So sometimes we start with that database to look for reviewers, but most of the time we just think of people who might provide a really good review: somebody who is knowledgeable about the topic, about the method, about the data being described. Oftentimes, if I run out of options, I look at what people cite in their paper and say, okay, they're citing somebody else working in this field, and I might approach those people. Sometimes it's just a Google Scholar search to see who else is working on the subject. In terms of conflict of interest, that's an interesting point, because again it's such a small field, and most people have colleagues they've published with. Unless it's something really obvious, like you have personal issues with the author, in which case you have to be really upfront with the editor and say, I know I will not be able to review this objectively.
In that case, the editor will respect that. But if I can say, well, I worked with this person before, but I feel like I can still do an objective review, then that's okay, that's fine. Because otherwise, if you start eliminating everybody you've ever worked or published with, there aren't many reviewers left.

I like the idea of having some suggestions. They may or may not get used, but I think it's helpful to give the editor some idea of who you think knows about your topic area. So I would recommend at least including a couple, but don't have expectations. If it's in a field I know something about, then I might think, okay, that person's in this camp, and I know they have a tendency to self-cite a lot; I'm going to cross-pollinate them with this other group. That may be a purposeful thing, and that's probably the hated reviewer you get who comes back and wants lots of changes, because they're from the other camp and they may not like it. But I think that sort of cross-pollination is important and necessary, and as an editor it's definitely something I would try to do if I see a problem like that.

So at what point in your career would you recommend people consider volunteering to be, say, an associate editor?

You should have a few publications yourself; I think that's a good starting point. But there's always a desperation to find good reviewers, so express interest, and then spend a lot of time on that first review so that you get the attention of the editor: wow, that was a really good review, I'm going to ask you again. Once you've proven yourself valuable to the person who handles papers in your area, they're likely to want to add you as an associate editor in their next round.
Kind of related to Jared's question, would you caution early career scientists, say postdocs, against volunteering just because of time management considerations, even if they have publications already and do have expertise in an area?

Olia made a great point about that before: by being a reviewer and seeing how other people write papers, and what makes a good paper versus a bad paper, you yourself are going to get a lot better at writing papers. Certainly my first few papers were terrible. Becoming a reviewer helped me look at papers with a much more critical eye, rather than just reading them for content. So I think you'll get a lot of experience from being a reviewer, and if writing still seems mysterious to you, then do it. If you've got it nailed, it's probably not your most efficient use of time. That'd be my opinion.

Yeah, I would definitely recommend being a reviewer, but maybe wait a little to become an associate editor, because that's a really large volume of reviews. One other thing to consider is being a reviewer for proposals, serving on a proposal panel or as an ad hoc proposal reviewer, because you really see what makes a good proposal. Don't limit yourself to just publications.

For those who may have an interest in reviewing proposals or papers, how do you go about it? Do you email an editor and say, hey, can I review papers, or how does that work?

I've never had to advertise to be sent reviews. I think once you publish a paper, get your name out there, and get some references, you'll show up. But if you're not getting any and you would like to, then I would encourage you to ask.

Exactly. Another way to do it: if you have a colleague who is more senior, you can express your interest in reviewing papers to them.
So let's say I get a request to review a paper and I'm really busy. I could email the editor back and say, right now I have too many commitments, but I think Ian would be a great reviewer for that paper; please ask him. A related point: it's really important not to delegate reviews. Sometimes people think, well, I can't review it, but I'll ask a colleague to review it and then submit it under my name. Never do that. You're being asked to review a paper because the editor perceives you as an expert in the field, and that review is going to be associated with you. If you can't review a paper, just suggest other reviewers. That's another really important point: when you decline, and sometimes you will decline reviews, it's really helpful to suggest others, and provide their email addresses if you can. Anything that can make the editor's job easier.

Jenny's really good. Jenny who?

For the proposal reviewing process, do you have to have submitted proposals before to be considered as a reviewer?

I'm not sure you have to, but once you're funded by an agency, you're in their pool, and then they'll hit you hard with requests; otherwise I think it would be unlikely to get asked to do a lot. Sometimes, as an NCAR employee, you get asked to review NSF proposals just as a generic NSF-ish expert; I've seen that happen even for people who don't have an NSF proposal.

To that question, Sally, I'll just share a personal experience. I emailed a program officer and expressed my interest in reviewing proposals, and I was actually asked to serve on a panel not long after that. For those interested, that could be an option too. It's certainly not the most common way, but it's out there.

There is a downside I'll put out there, sorry.
The downside to doing ad hoc reviews, for NSF and the like, is that you'll submit your review but you'll never see anyone else's. What's helpful is to see how other people view the same thing, and that's the part that's harder to get unless you get involved in panels. Panel reviews are super valuable, because you go and see how everybody else sees the same thing you're looking at, and you get a lot more information about what resonates and what doesn't. Sorry, do you have a question?

Yeah. So I submitted my first paper during my master's, and then I got hit with a bunch of requests to review other people's papers. I was, of course, overwhelmed, with no idea how to do it or even what to accept. So I declined a bunch, and probably not in the most thoughtful way, because I didn't know back then. Now I haven't received requests in a long time, and I would actually like to change that, because I think now I'm at a point where I can give a good review rather than just an early-grad-student-level review. What would be a good step right now, to actually approach an editor and say, okay, I've learned a bunch and now I would be happy to review? How do I approach that? Is it the same editors, the ones who asked me before, or someone else?

I think they were from various journals.

Yeah, and you could reach out to those editors again and say, I wasn't available then, but I am available now in case you have a paper in my line of work. And the more you publish, the more requests you'll get. You can also talk to some of the senior people you work with; that was a great point from Olia earlier. They can direct some reviews your way.
I mean, the editor always makes the choice, but if Olia says you're going to be a good reviewer, it's likely you'll see that paper.

I want to circle back to a content question that I think Scott and Glenn hit on earlier, and it's this question of when you have enough: going for one significant-impact paper versus splitting it into three. To all three of you, do you have any thoughts or guidance on that question of content versus, as you put it, the least publishable unit?

That's a tough one, because it varies a lot from one person to another, from one study to another, and from one topic to another. But my advice is to develop your own sense of what story you want to tell with this publication and this study, and what impact. Not every publication you do is going to have the impact of Madden and Julian; that was a bit of an extreme example. But ask yourself: can I get three lightweight papers out of this and kind of cram them through, or do I write a somewhat bigger paper with more of a complete story to tell? It's really your judgment call. My advice isn't anything specific, but just to focus on the science and the problem at hand: does your publication address it in some reasonable way? Even if you could do something quicker, maybe the bigger story is more what you're interested in, even if it's not Madden and Julian, 19-whatever it was. Does that help at all? I don't think it did. I would be curious to hear other thoughts.
I think some of it also depends on the journal, because a lot of journals have specific word counts. Think about whether a particular journal has the audience you're trying to reach, and look at what types of papers it publishes. For example, some journals may be really appropriate for publishing a method, so you might want to publish the method in one journal, and then write another paper that cites that method and talks about the framing, the results, and the findings from your study. I agree that it needs to be a complete story without artificial breaks, but sometimes the danger is that people try to make papers really, really dense, with so much in them.

Yeah, you need to have a focus point.

Right. So if the method is really complex, and there are also modeling experiments and results and theory that are all part of it, that can detract from the story, because you end up saying very little about each piece of your research. In that case, it might make more sense to have a separate paper where you go into depth about the methodology, and another that goes in depth on the results and how they're connected. So I would say it really depends on a case-by-case basis and on the journals, because some journals accept long papers, more than 10,000 words, while at others, when something is really distinct, you might be better off saying, here's the point, and that's what you need to know.

Right, it's an interesting balance.

Yeah, it's a balance. I like that idea of partitioning it out, where you do different parts of it in different papers. What's less popular is the part one, part two, part three papers. Reviewers hate those, because they get sent all three parts and have 90 days to do the review on them, and it's an incredible workload that's very difficult for anyone to volunteer for.
And often it's just not necessary. So I think it requires effort, but yeah, I mean, you can always ask for help in terms of how to partition it and break it apart if you have just too much content. But I think oftentimes there really are some key stories in there and then some peripheral stories, and sometimes you just have to sacrifice the peripheral stories as not being the big thing and let them go. I definitely don't wanna see every experiment that you ran, where there are 15 lines that are all lying on top of each other with no meaningful difference between them. None of them tell an interesting story, but you ran the simulation and you wanna show me. Like, I don't wanna see that. Show me the pieces that are interesting and different, and focus on where the biggest story is, and kind of let the rest of it go away. So that part, I think, is attainable. And don't build an acronym that's got 50,000 characters in it that no one can remember, because you've got so many experiments that you need a ginormous acronym. That's also a real difficulty. The sort of bare minimum to get it in is, I guess, the other side of it: what is enough? And honestly, I don't think there's a clear answer to that, because it depends on how good you are at crafting. I've seen people that have basically no real results, but they write a very elegant story about this tiny bit of results that they have, and it still gets published. So it kind of depends on who you are and what you can get away with, I guess. Some people seem to get away with almost nothing and it still gets published. So I don't think there's a bare minimum that is an obvious requirement of what it takes to get through.

Sorry, question again? Yeah, this is kind of building on what Olia said about the danger of making papers really dense.
And I was wondering, I mean, many of us, I think, are at the stage of our postdoc where we're really becoming independent, and we're really leading our own papers without mentors weighing in on every sentence. And so do you have any advice for checking that? Because the whole story's in your head, right? It's just a question of whether it's on the paper. Do you have any thoughts about how to make sure the whole story is there and your sentences aren't just so dense?

For me, it goes back to what I think both of them said. You know, when you have the experience of being a reviewer, you can take a look at your paper and try to put on that reviewer hat again, and see if you can start to see, and it's really hard, we still fail at this sometimes, but see the holes in the story that you're telling. So really keep the reviewer in mind as you're writing it: how is it organized, and is there a block of text with 16 different points and no subheaders? That's another thing my PhD advisor told me: nobody's gonna read your paper from start to finish except for the reviewers. Everybody's gonna go and try to look and find what you've got. So if you've got these huge blocks of text and things are hidden in there, it's not gonna be very easy to use. So just, I think, keeping in mind, because we all read papers, the vantage point of the reader and the reviewer while you're writing it, and almost more importantly, or at least as importantly, how you're organizing it, seems like a really good strategy. And of course, when I get the internal reviews back or whatever, I'm like, oh my gosh, I failed to do that. But you try to do it as best you can.

So one way to maybe make the paper itself a little bit less dense is the use of supplemental materials, and some journals allow for that. And the paper needs to tell a story, right, and have something new to say.
And if you feel like you have more to add to the story, but it may not be, sort of, your main characters in the story, then it can go into supplemental material. And so somebody might just want to read your paper and be satisfied with that, but somebody who might want a little bit more detail about the method, or maybe some additional experiments, can refer to the supplemental material. I don't know if all journals allow that or not, but I know some of them do, AMS journals usually. You can put it in an appendix. Appendix, yeah, yeah. But again, only if it's necessary. Not just, I have all this extra stuff, I'm going to put it in supplemental material, but if you feel like it really adds something, just maybe not to that main focus, then make use of that.

Use simple language. Don't pull out the thesaurus and figure out the fanciest word that you can possibly use, or the most colorful word, because it just invites problems. "Blossoming" and "exploding." It just gets people down the wrong track of what you're trying to say. Yeah, and I'll throw out one other thing also. If you're just not very good at writing, feel free to ask for help from somebody who you think is better at writing, for grammar and things like that. It's okay if you're just not an expert and you need some help. Don't hesitate to ask peers for help with grammar to try to make it easier to read. It's a real obstacle for a reviewer if they're having a hard time understanding what it is that you're trying to say. If English isn't your first language, there are professional services if you don't want to ask a friend. But if you submit it and there are problems in that regard, it almost always makes your review process a lot more painful, because the person gets hung up on all the grammatical things along the way. And the language, use words as they're actually meant. Yeah, because if you're not sure of the meaning but it sounds more colorful, you just throw it in.
And it could mean something completely different, and the other person raises it, like, I don't think you really want to say that. And then, yeah, you'll just end up with the 15-page review.

I have a follow-up question on that. So as a reviewer, am I technically responsible for correcting language or not? Because there are the typesetters at the end, right? They should really take care of everything. My approach is, if it's pretty lightweight, I'll put the corrections in. Comment number 12, add the period, or whatever. If it's too much, then it's too much, and I'll make it a major comment: the grammar, the English writing, is not okay, and that needs to be fixed throughout. But I will not go through and fix a hundred different grammatical problems that I see. I actually tried to do that once, because it was a friend's paper. I vowed never to do that again. It was like 157 comments or something. I think that happened to me once when I was a reviewer: I made a major comment that the language needed to improve, and then in the next revision it wasn't really improved. And then do we just give that same comment again until they do something about it? I really didn't know my role there either as a reviewer. Well, it's just part of your role to point out if it's unacceptably bad, to the point that it's a hindrance to reading the paper. You may think it's obvious, but I think you should say it and call it out. In some ways, that's actually supposed to be checked before it gets to you, but oftentimes it's not. It's okay to call it out. And it's a reason for rejection if someone just doesn't fix it. I'd say, on that note, AMS journals do tend to turn out a higher quality of writing and proofreading, it does appear, whereas there are several other journals where it doesn't look like either the typesetting editor or anyone at the journal even read the thing, because there are so many typos that get through in the final published product.
And so, at least for me, when I review, I'm not catching everything. But if I'm generally happy with the paper and there are a few things I caught, I might list out the little detail things, because I want it to be a good paper. But if it's a huge problem and I'm gonna reject it, then I might just give examples: here are some of the types of things that I see, here's one example, there are more in the rest of the paper. Because I think it would be helpful; if I got a paper back and someone just said the English needs to be fixed, I would at least want to have some clue of what to look for. So I try to put myself in the shoes of the author team. I do want to help them, so I want to at least give them some pointers of where to go, of what to look for, to fix.

All right, so if the journal has high standards like that, then I think in some ways it's a shared responsibility between the reviewers and the editors and eventually the typesetters to make sure it meets that standard. I don't know what happens when it slips through all of that. You do see it sometimes, as you mentioned, where it makes it through the entire process and these things haven't been addressed at some level. So whose responsibility is it? I don't think it's really well defined. But I'll do the same. I'll usually do a lot of those grammar edits, even though I don't think it's necessarily my job as the reviewer; you're technically there to provide scientific expertise.

So as the editor, do you read through the entire paper before you assign it to a reviewer? No? Okay. Does the editor even know what state the paper is in? Like, if it already came to me past the editor, does that mean they already kind of okayed the language? I've had a few papers like that. So, I don't read it word for word, but I do scan it. Yeah, okay.
So, like, for example, for Weather, Climate, and Society, you know, in the three years I've been an editor, I think there may have been two papers where I had to write back to the chief editor and say, I don't think that belongs in this journal, because it has no social science, it doesn't have any human dimensions. It's basically a whole bunch of atmospheric experiments that somebody claims are good for society; that doesn't belong. So if it then just gets transferred to Monthly Weather Review, and the editor from Monthly Weather Review says, no, you know, the science isn't deep enough, right, then you kind of get into this question of, what are we gonna do with this paper? And sometimes it's a rejection at that point. But I think sometimes, if it's within the AMS, the editors from different journals might talk to each other and say, would you take that paper? So yeah, you do look at the papers. And another thing that you can do as an editor, and I think some journals have that practice more than others: if the paper meets the general guidelines of the journal, and I read it, like I scan it, and I just think it's a horrible paper, I have two options. I can send it through review, but then each of my reviewers is probably going to be mad at me for sending them a really bad paper instead of rejecting it outright. So I think at some journals, you know, the editor can also just look at it and say, there are fatal flaws, there's no merit, reject. Instead of waiting three months and having three reviewers say reject, reject, reject, and then the reviewers might not review for you in the future because you sent them a really bad paper.
So I think in a lot of, probably most, cases, you know, suggestions from reviewers or co-authors that are maybe grammatical or something like that are a sign of another problem, and making that comment can help you think about how to reframe your paper or something like that. But what happens when it's just a suggestion from a co-author or a reviewer that's purely stylistic? Do you have a way of dealing with it? Like, do you have a way of saying, no, I don't want to change my paper to be passive throughout the entire thing, or something like that, and doing so in a polite way? I've always kind of struggled with, okay, but this is how I write.

Yeah, I mean, just because someone asks you to do it or tells you you should do it, you can decline. And then, you know, a reviewer may feel very strongly about it, or they may just be like, oh, it was just a suggestion, I don't really care. If they feel really, really strongly about it and they complain again, it falls on the editor to decide, you know, who wins. Some editors are very afraid of pissing off their reviewers and don't want to go against them, and so they'll tend to go along and keep their reviewers happy, because otherwise they're not going to review for me next time if I go against them. But others are happy to just step in and say, well, you know, I'm calling a truce, this person wins, and we'll go down that route. And that is ultimately what the editor is supposed to do. Same thing if you get a rejection from one reviewer and minor revisions from another. I mean, it's the editor's job to figure out, okay, is this rejection overly cruel? Did the accept-with-minor-revisions reviewer not actually understand it, not know enough about the topic, or maybe have some relationship with the person, so they just couldn't review it critically? That's the role of the editor, to figure that stuff out.
So I want to follow up, though, on Olia's question, because there are kind of the stylistic things, but then there's also the science piece, and while facts are facts, one question I'm curious about is, what happens when, perhaps there's a clarity issue and maybe you can address it there, but what if there is truly just a disagreement with a reviewer or editor about the analysis that was run, the conclusions that are made? How should an author navigate that level of disagreement? And maybe related, what are you looking for in an author's response to reviewers when they do want to make a rebuttal and justify their position? Maybe Scott, or all of you.

I'm thinking, that's a good question. Yeah, I think the first step is what Olia said: try to not take it personally. When I get reviews back, I usually read them, and then I put them aside, and then I come back to them. And when I finish a review, I never send it off right away; I sleep on it, and I reread the whole thing, and usually end up softening the language or trying to be more helpful in the language. That being said, if you really have a disagreement about the analysis and you can justify your position, you should be able to justify your position. Then I would say do that in a respectful way, even if the reviewer hasn't been respectful. This is a good point, this is what they believe, and as a reviewer, I realize too that the authors have worked very hard, and I try to be respectful and say, okay, I think what you did is whatever, but what about these other things? So yeah, if you really feel like you're justified, and usually when you get a harsh comment it does make you reevaluate: are they right, do they have a point? And if they don't have a point, then you have a justification for your work, and you can say, I'm not going to make that change, because of this, this, and this, and you give the reasoning in very clear language in your response.
And then, as we said, it's up to the editor; if the editor agrees with the reviewer, then maybe you have to make that change, or resubmit it to another journal. Yeah, that's also a great time to call for help from the chief editor, when you could use an extra opinion, or even invite another reviewer, like if you don't know the subject well enough as the editor. But one thing I also should note: even if you get rejected, you can still appeal. So you can write to the editor, and if you completely disagree, I mean, if it's not just taking a stand and saying this is the best science ever, but if you think there were some comments where maybe the reviewers didn't quite understand your approach, or they didn't provide enough justification for why the paper was rejected, you can appeal. But if you use your appeal, you also have to provide justification for why you're appealing the decision. I've seen that happen a couple of times, and at that point the chief editor has to weigh in as well. And I've seen it a couple of times where the paper was finally revised, but then, you know, if you want the paper to be published, you have to address the reviewers' comments one way or another.

And another thing I was gonna mention, I had this happen to me once. If your paper gets rejected from one journal, usually the reviewers still provide comments on how to improve the manuscript, and I would highly recommend addressing the reviewers' comments before submitting it to another journal. So one time I had a paper that was submitted to Weather, Climate, and Society, and usually, if the paper was rejected by another AMS journal, then the review history is, you know, open, and we know what happened there, but if it's from another journal, sometimes, you know, we don't know.
So I sent the paper for review, and I got a response from a reviewer saying, I already reviewed this paper for another journal, and none of my comments have been addressed. So, yeah, again, it's such a small world. And so then we go again through the agony of sending it back to the author, because the reviewer basically said, here are my additional comments, on top of the ones from the other journal. So that was a very uncomfortable situation. So definitely, you know, consider addressing those comments, because you might get the same reviewer, even when it's another journal.

When a person, say at the postdoctoral level, gets feedback from a journal, and let's say it's negative, or more negative than they want, and they need support in figuring out what to do: I think, in my own experience, I didn't ask for enough help from people, you know, what do I do, what can I do, and then it got dropped, or I didn't publish it. Do you have ideas of who people can go to? Who can young scientists go to for advice? It could be, I guess, a wide variety of ideas if you have them, but just to help fertilize the imagination.

Do you have co-authors? Co-authors are a good place to start. But yeah, I mean, you can lean on your peer network. In some ways, I mean, you are the expert. You wrote this paper, you generated the science, you should be able to respond to the questions. And the answers may not be to the satisfaction of the reviewer, but, you know, it is what it is. So I would say stand your ground and defend yourself if you don't wanna make a change, or you don't think it's the right change to make. Defend it, you know, argue for your position. There's nothing wrong with that. And I've seen lots of people that get reviews and maybe only do half of what they're asked to do, because someone's like, well, I'm not running more experiments, I can't go back and recreate X, Y, and Z.
You just say, well, you know, that's a great idea, but it's beyond scope. Or, I don't have the time, or the computational resources, to redo the experiments, whatever. Just say, well, I can't do that, defend why, and then see what happens. I mean, they may come back and say, oh no, you really gotta do it, and then you go through another round. But for that first round, for sure, just stand your ground if you really think that what they're telling you isn't going to make your paper better. Because that's what these comments are supposed to be: they're supposed to take your science and make it more valuable to the community. And if what they're offering isn't going to make that science better, don't do it. Don't do it just because they think you should. There should be a reason. If they haven't justified why you should do it, you should justify why you shouldn't.

In a recent experience I had, I wrote a paper, and it was observational, so I wasn't running models with, you know, computer time, but it was a technique, and the reviewer thought that if I did this, it would make it better. And I immediately thought, well, I don't think so, but I knew I could try it. I had some demonstration code, and so I tried it, and it didn't really pan out. And that was very powerful. That was their major revision, and I said, well, that's a great idea, but I tried it out, and it turns out, for these and these reasons, it either doesn't make any difference or it doesn't really help. And that went through. So, you know, in that way I was fortunate, because I really had a strong justification: I tried it, and it didn't work. Okay. So I think if you come up with a justification and it's pretty strong, like Glenn said, you're the expert, you should have pretty good luck most of the time. I was just gonna add to that.
So sometimes also, if you're unable to, you know, collect more data or run the additional modeling experiments the reviewers suggest, sometimes they're asking for that because the evidence that you provide and the conclusions that you draw from that evidence don't match. And so what you may have to do in that case is rethink a little bit how you're presenting your results. So for example, if you ran some limited number of experiments and you draw these really grand conclusions that the evidence can't actually support, then you might, you know, reframe your paper and make it a little bit more focused, and instead of making this big statement beyond your supporting evidence, you could say, this is what this paper is about, and I'm confident in these results, and this is what they tell me, instead of being really big and broad without the substantial evidence. So there are a number of things. That's the most common reason I've seen for rejection, or even major revisions: the conclusions are too grandiose for the limited data that are used.

Can I ask a follow-up question about the additional experiment? Where do you put the new results? If it's negative or doesn't support your conclusion, do you put it into the main manuscript, or in supporting information, or just in the response to reviewers? Yeah, the response to reviewers, yeah, that's the right place for it. So if you don't want it in your paper, but you want to demonstrate that you did consider what the reviewer asked you to look at, that's the place to put it. And yeah, to go along with what Scott said: definitely, if you have the data and it's trivial to demonstrate something that someone thinks is a great idea, go ahead and provide it to them. And who knows, they may have a great idea that really helps make your paper better. You can be like, oh, and I liked it.
And now it's figure 14. Yeah. If it's trivial, yeah. But yeah, I mean, you usually have, like, 60 days to respond to reviews, and so you generally have time to think about it. And assume that the intent behind what they gave you was probably positive, even if it seems like it's just negative and they're being a jerk. Maybe they're working in the same field and they're trying to get their paper out before your paper gets out; they may have bad intentions and be trying to delay your results from being published. But that's hopefully a rare occurrence. Yeah, you can get just a jerk reviewer, and sometimes you just have to figure out how to deal with that and hope that the editor will jump in.

There's one experience I had where there was a pretty major revision; it was a different technique that we developed. And it was kind of along the lines of something that I could have thrown into the future work, and I had thought about doing what the reviewer was suggesting. He was even suggesting it for a technique that was already published, so he was sort of questioning not this paper, but the previous paper. And I really thought to myself, you know what, he's right, he or she. And I paused, and I actually ended up being very late with my revisions, to the point where I had to resubmit it. But in the end, it really was a much better paper and a much better technique; it was more robust. And I could have just decided, well, I'm gonna throw that into the future work, and whether we would have ever gotten to it or not, I don't know. But I sort of backed up and said, okay, this reviewer is making a really good point, and it's a pretty major effort, but I think it's worth doing. And I ended up with a better method. It's always a personal choice, and maybe that's not always possible, but that was one of my experiences.
So you mentioned in the beginning, Scott, how it's kind of obvious as an editor or reviewer if people keep submitting the same topic, or they haven't really done something major and new, it's just a little bit new. So how do you deal with papers where, I was involved in, I think, two papers where we started applying this fairly standard technique that had been used in other fields, but not in atmospheric science specifically. And so we started looking into this and got a kind of lengthy and unfortunate review process, but eventually we got it out. And now, a year later, we've learned a lot more, and there's a new paper, which is still similar, because a lot of the setup is very similar, but we have many important small results for the individual applications that are different enough that this is important to show. Like, no, this actually changes, because we did that wrong the first time, or, like, eventually you have to...

So you just justified a second publication to me, and so it's not that you have to publish something completely different every time. What I was describing was when my colleague was reviewing a paper, did a Google search, and ended up seeing mostly the entire paper already published in another journal. That's more what I'm talking about, where there's really very little new. That's not to say you shouldn't build on your previous techniques, or even correct your previous paper. I think that's perfectly legitimate science, and that's how we go forward. And so how much increment is enough is, again, a kind of difficult question. Look at Morris Weisman, you've probably heard of him; he made a career out of basically running a cloud model with a single sounding. So permutations on an experiment, if it's a solid experimental foundation, you can make a whole career out of it. Absolutely. So don't feel limited.
So, from your perspective, what are the key points to keep in mind just before accepting or declining a review request? So, you've been asked to perform a review and you're wondering whether... Yeah, if I get a paper to review, what are the factors I need to consider before accepting or declining that review? It may be personal, or any other factors.

I think the first thing is whether you feel you're qualified to review the material. Is it aligned enough with my expertise that I can provide something useful? And then, maybe you're reviewing six other papers at that time, and you're really not gonna be able to provide a timely one. So: can I do it subject-wise, and can I actually get it done? That's another one, and that's legitimate. I mean, you don't wanna just always decline reviews because you're busy with other things, but if you're reviewing a proposal and two other papers, that's pretty heavy, or six other papers, whatever. So you want to weigh that, and then, do I have a conflict of interest? Is it my PhD advisor's paper, or something like that? Which doesn't happen very often, but sometimes. Anything else? Okay, good. Let me just add one thing. In a journal like Olia's, a lot of papers are multidisciplinary now. And so, as a radar meteorologist, I might get a tropical meteorology paper to review because it has a radar analysis part. And I'm not really gonna be able to evaluate, at an expert level, the novelty of the tropical meteorology or some very detailed thing. And so I know that my role is, well, can I understand what they're doing? I mean, I'm at least smart enough to do that. And then really focus on the radar part. And I usually make that clear in my review: I'm a radar person, and so I'm giving a very general overview, and I'm relying on other reviewers to review the part of the paper that I don't know as much about.
Yeah, and normally when you submit your review, there is an option to send a message to the editor that the authors do not see. And that's where you can be a little bit more, right, you write a very respectful review to the author, but to the editor you can be a little bit more honest and blunt. That's also where you can say, I only reviewed the methodology and I have no idea in terms of the background, whether they know the field, or, I was not able to evaluate the equations, or something. Or whether they cite all the right literature; you can say, you know, I don't know if they represent the field adequately, or how much it contributes new knowledge to the field, but the method is fine. Jared?

Yeah, so my question to all three of you is, for the various journals that you serve as associate editors or editors for, to what extent are negative or even neutral results accepted or encouraged? Because there can be really important negative or neutral results that can basically serve as a warning post to everyone else in the field: okay, not everyone needs to waste time going down this track. But we don't see very many of those papers. And so, I guess, if someone has a negative or a neutral result that they think is really important to get into the literature, how would you recommend, and I don't have anything in mind personally, but how would you recommend they make the case that, I really believe this is an important paper, even though my method didn't work? How would you handle that?

I don't think many show up that have that kind of situation. I think more often, when you see a neutral result, the argument is that it's not a neutral result. Okay. And those are the ones that get rejected. Those ones definitely get rejected, when you try to sell something that just isn't supported.
But I have seen a few papers come through where it's like, well, you know, our results are somewhat mixed, but there are some indications that it might matter if you had a bigger data set, whatever. You can kind of argue ways that it potentially could still be important, but that you couldn't statistically get, you know, confidence intervals that would confirm what you thought would be the result, and you don't have the ability to get more data. Those are still publishable. So I published one that was basically a neutral result. It showed some aspects with positive signs, and I would show those, and then there was this event that completely broke the paradigm. So if you're open and honest, and just say, yeah, I had some cases that agreed and some that didn't, I think it still has the potential to be published. But I agree, there are not many of them that you see, and I don't know if it's because they all get rejected in the review process, or because people just don't submit them out of fear that they won't go through.

So would you recommend, in that case, that the submitting author make a case in the cover letter? Like, hey, this is what I'm doing, this is why I think it's important, and kind of try to make a pre-rebuttal? Does that tip the balance? I doubt it makes a big difference. I mean, they'll read it. I guess if you're really wishy-washy in your abstract, that might get them concerned, but they're probably not gonna know the details of how strong or weak your arguments are. But I think, as long as you're straightforward, and when you put it out to the reviewers, just, you know, have some bolded lists of the strengths and weaknesses. Be open and see how it goes, yeah. And that also, I think, depends on the journal, because some journals may be more receptive to such papers.
Let's say it's a hypothesis-driven paper, and it's a hypothesis that maybe the community widely accepts, but you can show through your work that it doesn't really hold true. Then I think it's a valuable contribution. And so I think it comes down to how you frame it: is the paper's contribution going to advance your field in one way or the other? If it does, then I would say go for it. If it's kind of like, yeah, it's interesting, but maybe not that important, then maybe it's not worth putting through the process. Yeah, and be very clear about the point of it from the very start, so that somebody is not reading a paper and then halfway through, they discover why you're telling them that this isn't gonna work. Say, from the first three sentences, this paper is showing these things, and then the reader has that expectation. But I think that papers like that might be more important than the community is currently valuing. Suspense and misdirection don't work well. No, not in technical writing. Not at all. I was told that as a master's student. Someone on my master's committee, someone who's fairly prominent but I won't name, basically said to me, don't keep the reader in suspense. Because I had learned a different way of writing in college, more like creative writing, essays, that sort of thing. But scientific writing is a different beast. Transferable skills, but it's different. I think, just to expand on your excellent point, even in papers where you have positive results, it's really important to the advancement of the science to be really honest about the limitations of these techniques and what the challenges are. Okay, you can say this is the typical result and show the best thing you ever got, but show the warts too, and I think you'll have a much stronger publication, because in the end, if people use your publication as a basis for their research, they're gonna find those warts anyway.
So you might as well have shown them to them. My master's advisor was huge on that. It really bothered him when it was obvious that a study showed too rosy a picture, and there's no reason for that. Let's do Josh, then Olivia, and that's probably the last question. Josh. Hi, I was just wondering, to follow up on a previous question about conflict of interest when deciding whether to review an article. I think there are different types of conflict of interest. Maybe if it's your PhD advisor, you don't review that paper, but if it's a good personal friend, for example, because the field is relatively small, how do you go about managing that? And I guess, what are your opinions on blind reviews, like not knowing who the authors are? If you're asked to review a paper and you don't think you can give an impartial review, then you should decline. That would be my hope, because the editor may not know that the two of you are friends or whatever. And so if it is a friend or a coworker, maybe the person in the office next door to you if you're working here at NCAR, that you get asked to review a paper for, and it's happened to me, you have to decide: can I do it? If you can, do it, because you're right, good reviewers are necessary. And if you really understand the science and you can provide a good review, your input is necessary. So consider it. I think we've all reviewed friends' papers. Sometimes anonymously, and sometimes not. But yeah, I think Glenn makes a great point: you have to decide whether or not you can be objective. Blind review, that's a big can of worms. Right at the end. Yeah, there have been experiments at a few journals at various places where they've tried to do that. And then there's been the exact opposite, where everything is open. So there are some open-access journals where even the reviewers aren't blind. They have to reveal who they are, and their exchange over the paper is completely transparent.
And they publish your review along with the article. So that's a new experience. Is that a journal that we may know, like AGU or something? I guess, what journals are doing that? Some publish your response to the review, but not your name. There's a couple out there, I think. There's an electronic journal on severe storms that's also completely open. They include all the reviews. They only do major revisions, they don't do minor revisions, but all the major revisions and the responses are made part of the final paper. And in some ways, not being able to hide as a reviewer definitely helps soften the review, because, yeah, if you're a complete jerk, it's gonna be evident to everybody when it gets published. So that does maybe get reviewers to be a little more reasonable. But is that a good thing? I don't know, yeah. In some ways, I'm not sure whether it ultimately makes a better product or not, and I don't know how you would tell. And then the other problem can be that some people have preconceptions about an author. Maybe it's big fish A over here, and well, they wrote the paper, so it must be great. And it may have serious fundamental flaws, but people have the preconception that, you know, that person's important, so this must be a great paper. But it may not be a great paper, and I don't think they should get some sort of advantage. They should have to go through the same criteria, whether it's good or bad science. Or maybe you're a student, and I think that can go either way. So, you know, it could be that someone is sympathetic to the fact that this may be your first-ever paper, and they may be nice and wanna try to help you get published, so you can get started down the road of doing this type of work.
Or they may be like, I'm gonna pick on them, they're a student, they don't know what they're doing, I'm gonna put them in their place and say, hey, you wanna be published here, you gotta do 90% more work than you've shown here. I don't wanna see a summary of your master's thesis. So I don't know, I think it's all over the board in terms of what you're gonna get for a review. Yeah, I've gotten reviews all across the board. I've had papers rejected. Has anybody else? I've had papers rejected, it's a good time. And I've rejected papers. I've never done a review where I just said accept. It's never happened, and I almost never give minor revisions, honestly. Because if you give minor revisions, at least within AMS journals, it means you don't care whether they make the changes or not. And almost always there's something about it that you're just curious about, and you may wanna ask a question and say, what did you think about this? And you just wanna see the response, and if you say minor, you may not see it. So usually you'll wanna say major just to get someone to talk back. But there's, like, major major and not-really major, okay? There are a lot of variables there. I'll just add to that: as a reviewer, if it's not a great paper, I work pretty hard to get to major. There are some that you just reject immediately, but not very often. I may not have all the answers as a reviewer. These people worked really hard, and if there's a flaw in it, I might not see how they can solve it in 60 days or 90 days, but maybe they can. And so I'm pretty hesitant to reject a paper, because they may have an idea that I haven't thought of that will work and will make that publication a good contribution. Can I clarify: you work hard to get from reject to major? Yeah, I work hard not to reject papers, yeah.
I mean, unless it's really fundamentally flawed, like the statistical analysis in one that comes to mind was just wrong. And so that had to be rejected. Yeah, and it's a hard decision, I think, as an editor, because for some papers it's very clear that there's just not much there, it's sloppy, people just didn't put much effort into it. Those are easier to reject. So proofread your papers. But if you see that, yes, it looks like there's a lot of work in it, then I think there's sort of a trade-off, right? Because you could reject the paper and just basically say, I think this work needs to be published, but a lot of work needs to be done before it gets published. So instead of giving them two months to fix it: take some time, reconsider, think about it. A soft reject, yeah. And then they could still resubmit, but with more time. Because if you, as a reviewer or editor, say major revisions, but it's on that verge of rejection and you think they may not address your comments, that paper's gonna come back to you. And you have to review it again. And then, if they didn't address the comments adequately, you either reject it at that point or you do another round of major revisions, and I've seen both. And you can enter this really lengthy review process where, every time, the authors get two months to address comments and then you ask your reviewers to take another look. And so I think it may even be the kinder option to just reject. You rejected it, yeah. That's actually a really good point. It might save people time. I've done the soft reject. So, yeah, but I feel like when I started as an editor, I was leaning more towards major revision, because I wanted to give people a chance. And then, as I went on, I was much more comfortable rejecting. That's the thing, that's a really good point.
Because they don't come back to me. That was quick. All right, well, before we thank our panel, I did want to thank all of you for joining us, both in person and online. And I wanted to announce that in two weeks' time, on February 27th, at the same time, we'll have another professional development workshop, this one on the ABCs of breaking bias and increasing inclusivity. I think for that one we'll be in Center Green, but stay tuned for an announcement. But certainly, please join me in thanking our panel for joining us this morning, and have a great rest of your day. Thank you.