For our next session, it's a great pleasure to introduce Kathleen Fitzpatrick, who is the Director of Scholarly Communication for the Modern Language Association and based in New York. She's going to talk to us about peer review and quality, which is something, again, that's been running through the day. Kathleen has had a great build-up right from the start, of course, with Jean-Claude anticipating her talk in the very first keynote earlier on. It's been a particular pleasure for me to meet her because some of you will know her book, Planned Obsolescence, which is a great book. It's a great book, I think, not only in terms of scholarly publishing in the humanities and social sciences, about which it has a lot of very interesting things to say, but also, for me anyway, about the future of universities and what they are and what they should be. And it's a book, I think, that actually should be read by university administrators as well as people who are concerned with publishing, because it returns us to the principle of universities as public benefit organisations that play a critical role in knowledge creation and dissemination. So Kathleen's going to talk to us now on the topic of peer review and quality. Thank you very much.

Not being six foot two, I'm going to attempt to adjust this. Is that good in the back? Okay. Thank you, Martin, for that introduction. Thank you, Jean-Claude, for the sales pitch, which my publishers and I really appreciate. You can also read it open access online. In any event, I have been asked to talk with you today about peer review and quality, no doubt because discussions of open access often trigger anxieties about quality, particularly amongst academics who are coming to terms with the notion of open access for the first time. The sense that anyone could publish anything online, and that we don't know who they are or where they've come from, has often resulted in something of a retrenchment in traditional modes of peer review, which I would argue are not necessarily the most productive way of thinking about evaluating the quality of work that is being published digitally. And so this is where my comments will largely be centered: on this question of other modes of open review, and how it might be done differently. Media Commons, a digital scholarly network focused on the field of media studies of which I'm the co-founder, and NYU Press received a grant last year from the Andrew W. Mellon Foundation to conduct a study of open peer review practices. Our original goal when we set out to conduct this study was to work toward a set of technical specifications that would allow us to develop a platform on which this kind of open review could be conducted. But in the process of doing the study, we discovered that the challenges we were actually facing in thinking about open review, particularly within the humanities, were far less technological than they were social in origin. Now, this is not something I ought to have been terribly surprised by. This is the underlying argument of a whole lot of what's going on in Planned Obsolescence: that the challenges we're facing with respect to communication in the academy today appear to us to require technical solutions, when in fact many of the problems are actually social or institutional in nature.
And so they require new ways of thinking, new ways of working together, and new ways of understanding ourselves and our work in order to create the change that we seek in the academy. There was a bit more of a surprise for me, though, in the recognition of the complexity of the social landscape we were facing with respect to peer review: different communities of practice make extremely different uses of peer review. They have different desires for its outcomes and they bring different values to its execution. And because of these critical differences, any platform that we might build to support open review would have to be extremely customizable and therefore extremely complex both to support and to maintain. Let me back up a bit. The Media Commons/NYU Press Open Review Study was conducted by a stellar advisory group composed of established scholars from a range of disciplines in the humanities in the U.S., with many different investments in peer review, from outright advocacy for these new open modes to deep skepticism. The members of the group were Cheryl Ball, who was then Associate Professor of New Media Studies at Illinois State and is now in the process of moving to West Virginia University; Dan Cohen, who was then Associate Professor of History and Director of the Roy Rosenzweig Center for History and New Media at George Mason and is now the Director of the Digital Public Library of America; Cathy Davidson, the Ruth F. DeVarney Professor of English at Duke; Lisa Gitelman, Professor of Media and English at NYU; Nick Mirzoeff, Professor of Media, Culture, and Communication at NYU; and Sidonie Smith, Professor of English and Women's Studies at Michigan and now the Director of the Humanities Institute there. The meetings were further facilitated by the grant leads, who were me, my Media Commons co-founder Avi Santo, Eric Zinner, who is the Editor-in-Chief of NYU Press, and Monica McCormick, who is the NYU Digital Scholarly Publishing Officer. We began our conversations in this study intending to focus on the following issues: the merits and pitfalls associated with open review, the desirability of open review for certain types of communities and works, the criteria and parameters needed to organize and conduct successful open reviews, the technological requirements for meeting open review criteria, and the technologies currently available that could help meet those requirements and criteria. But we also started by asking a number of contextualizing questions, which led our discussions in ways that we didn't always expect. The most central of these questions was: what is peer review? I know this seems like a very simple question with a very simple answer, right? Peer review is the review of scholarship and other forms of scholarly activity by one's peers. Peer review plays a foundational role in the determination of scholarly authority, and it's relied upon in all of our major forms of assessment. Yet many scholars across many fields are today raising questions about peer review: about the purposes that it serves, about the degree to which those purposes, particularly with respect to new forms of digital scholarly communication, are actually being served as well as they might be, and about how well peer review is working for us today. Peer review is meant to accomplish a number of different things.
For instance, it provides a means of critical feedback for scholars in the process of developing their work, and it provides a means of selection among the work of many scholars for quality. At times, peer review is meant to serve one or the other of those purposes, but most often it's meant to serve both, right? Peer review is in this sense meant to represent and further the best of scholarly values as they should be espoused: working rigorously to improve work, to determine the best work, and, particularly in the case of double-blind peer review, to do so in the absence of biases based in rank, gender, class, race, institutional affiliation, and so forth. However, a fair bit of criticism has been leveled at the existing peer review system, including concerns about the degree to which anonymous reviewers are granted power without responsibility, and the potential failures, including some very recent, quite public failures, of reviewer and inter-reviewer reliability. Moreover, some scholars have begun exploring the ways that the notion of the peer is defined in these processes, asking whether there might not be a better way online. Rather than limiting the category of the peer to credentialed scholars, and even further to scholars credentialed in a specific field or subfield, which is a narrow and usually vertical community organization in which junior members must prove their worth to those who precede them, resulting in a tendency toward self-replication, might we begin to understand the notion of the peer as one that's more horizontally organized, one that's based in affinity and, more importantly, in participation in community processes? This is not to suggest that in the age of open networks a peer is becoming just anyone, but rather to indicate that the status of peer might not predate participation in review processes. Instead, a scholar might have the potential to become a peer through the quality of their participation. As Peter Frishow has noted, in this mode peers can be selected on the basis of experience and trustworthiness, not credentials. Such a change in our understanding of the peer points to the need to rethink our peer review practices, particularly with respect to scholarship that originates or is published online. So we began this study really focused on the term peer-to-peer review, intending to explore review practices and tools that would enable direct communication among a network of existing peers and publications. But this exploration of the shifting notion of the peer itself led us to think more about the ways that opening review practices to new kinds of peers might further some crucial values and goals, particularly in humanities-based scholarship. We aspire in the humanities to engage our students, our colleagues, and a range of broader publics in exploring aspects of our complex histories and cultures. Perhaps the crucial change in our engagements with one another lies in introducing new forms of openness. But what is it that we mean when we talk about open peer review, and what do we hope it will accomplish? Scholars already conduct much of their work in public. We present work at conferences such as this one, we discuss it in workshops, we share it with our colleagues, and so on. Typically, though, our publication review processes have operated offstage, behind the scenes.
But in an era in which increasing numbers of scholars are sharing their work openly with the world, whether via their blogs or via other, less mediated publication structures, these new open publishing practices are really challenging us to explore the possibilities they present for our fields. We recognize that there are many different understandings of the open that can apply in the scholarly context. Questions were raised about whether everything needed to be fully open to everyone, or whether there were degrees of openness that might be useful to different communities of practice at different moments in time. Perhaps a frank discussion among a defined cluster of scholars would be particularly important at certain moments, while a discussion that was open to broader publics would be crucial at others. Perhaps we might imagine a review process that's open to volunteer participants while nonetheless being conducted in private. Processes like these might require reviews to appear under their authors' real names, or there might be situations in which some degree of anonymity or pseudonymity remains useful to the process. Moreover, these two forms of openness, openness of access to the review process and openness of reviewer identity, are related, but they're not inseparable. So in thinking about the different valences of openness, we looked at a range of existing experiments in the open review of humanities scholarship. The Institute for the Future of the Book worked with McKenzie Wark back in 2006 to post the draft of his book Gamer Theory online in commentable form. While this experiment was not explicitly part of a peer review process per se, it nonetheless produced substantive feedback, much of it from the gaming community, an audience that Harvard University Press would not ordinarily have reached out to, which Wark wound up employing in his revisions. The Institute generalized the platform they had built for Gamer Theory into what is now CommentPress, a WordPress plugin that allows a long text to be discussed paragraph by paragraph. CommentPress was used in its very early stages by Cathy Davidson and David Theo Goldberg in the process of reviewing and revising their MacArthur report, The Future of Learning Institutions in a Digital Age, which went on to be published by, I was going to say Duke, I think it's actually MIT Press, as well as by Noah Wardrip-Fruin, who was seeking feedback on his manuscript for Expressive Processing, which was definitely published by MIT Press. Both projects were greatly improved by the process, and comments from the open reviews influenced and were included in the revised final publications. More experiments such as these have been conducted by us at Media Commons Press, including the open review of my own Planned Obsolescence, as well as the two open review experiments that we conducted in collaboration with the journal Shakespeare Quarterly. All of these texts were at the stage at which they would ordinarily have been submitted for traditional peer review. In fact, my book was sent out for traditional review in addition to being opened for community discussion; if there's time for questions, I would be happy to talk about what we learned from those two different forms of review. The Shakespeare Quarterly reviews, on the other hand, took place as the central stage of a multi-stage process, starting with some editorial pre-selection and ending with a final round of editorial board approval.
In all of these cases, the locally targeted, threaded commenting facilitated by CommentPress, along with the underlying social features of WordPress, resulted in robust discussions aimed at helping the authors involved revise their work prior to final print publication. Moreover, the CommentPress format allowed reviewers and authors not simply to respond to the text, but to respond to one another as well. And the authors have reported on the helpfulness of having that kind of social context within which to understand and interpret the reviewer comments. Jack Dougherty and Kristen Nawrotzki similarly used CommentPress to facilitate the open review of the essays contained in their forthcoming volume, or is it out at this point? Thank you, Shaina. Forthcoming volume, Writing History in the Digital Age, using the platform, as they say in their introduction, to help make the normally behind-the-scenes development of the book more transparent. Similarly, Matt Gold used CommentPress in the review process for the essays in Debates in the Digital Humanities, as did Louisa Stein and Kristina Busse for Sherlock and Transmedia Fandom. In these two cases, the review process was structured somewhat differently: it was structured around the community of authors who were working together on producing the volume. The essay drafts were open for comment to all of the authors included in the collection, but the review process was otherwise conducted behind the scenes. Stein and Busse also invited two external, non-anonymous readers to participate in their process, engaging directly with the community of authors as they discussed the volume's essays together. Other publications have used other means of opening their review processes. The journal postmedieval conducted a crowd review using a standard blog format for their special issue entitled Becoming Media. The journal Kairos has long used an extensive multi-tiered editorial review process, which includes several phases of open communication amongst editorial board members and between editors and authors. The site Digital Humanities Now uses PressForward's combination of crowd and editorial filtering methods to highlight some of the exciting work that's taking place in the digital humanities around the open web. These highlights are then reviewed for republication in the Journal of Digital Humanities. These are just a few examples of the kinds of experiments that we discussed, but assessing the success of review processes like these presents certain sorts of challenges, which may highlight unspoken assumptions about traditional peer review. We assume, for instance, that a review process has been successful, that reviewers responded to the texts under consideration in a forthright, scrupulous, critical manner, and that authors made use of that criticism in revision. We assume all of this to have happened when good work results. The fact that something has been published, we assume, means that the process has been successful. In an open review process, we have that same marker: we can tell if the work is good, and if so, the review process probably went pretty well. But we also have the history of the process itself available for examination. That availability raises several questions that we've never been able to ask before about traditional review processes. How many comments is enough in an open review process? How many commenters? Are the commenters established or prestigious enough?
Is the critical discussion in which those commenters engage sufficiently rigorous? These are the kinds of questions that get asked about open review processes all the time, but we've never really stopped to ask them about traditional peer reviews, simply trusting that the process is working as we expect it to. We believe that these questions of assessment will be addressed in part by projects that are underway, such as the Open Annotation Collaboration, which seeks to create technical standards and tools to enable the creation of web annotations that can be shared in multiple contexts; the Open Researcher and Contributor ID project, or ORCID, which is working to develop a standard for the unique identification of scholarly authors; and Hypothesis, which is working to link open web annotation with reputation management. We believe that these projects together will enable open reviews to be linked to researcher/reviewer IDs, creating a sense of the context in which review takes place. Similarly, there are a number of projects seeking alternative means of accounting for the impact of scholarly research, including the work of the altmetrics groups with projects such as ImpactStory. These projects might interact with a range of social reading platforms now in development to provide a suite of possibilities for articulating the results of open review. It's exactly that kind of suite of possibilities that the advisory group finally decided we're going to need: a robust set of technologies that permit communities of practice to make crucial decisions about their values and policies, and to find the best tools to support creating the kinds of participatory review processes they seek. As a result, our final report leans heavily toward providing a list of issues that communities of practice should consider, rather than giving specific recommendations that they should follow, as these groups establish and implement their own open review processes. For instance, we believe that communities of practice should articulate for themselves what the desired goals and outcomes of their review processes should be. How are works selected for evaluation? What is being evaluated: in-process texts or finished texts, articles, monographs, or born-digital projects? And for what purpose is this review being conducted: for development, for selection, to foster conversation, for credentialing, or for some combination of all of those functions? What aspects of the work are to be evaluated, and at what levels, from the sentence level through questions of organization and structure, to basic project design, methodology, and significance for the field? And through what means is this review conducted: is it commenting, is it rating, is it liking? Many of these questions seem obvious, and yet it's only in the prior determination of these standards that review communities can really assess whether those goals have been met. As I discussed earlier, openness in these processes can take several different forms, and these are another set of questions that I think review communities need to consider. Options include public access to and participation in the review process, removing the anonymity between authors and reviewers, and establishing a means of greater back-and-forth between authors and reviewers and amongst reviewers.
These options require careful consideration within communities of practice about the value of open representation of author and reviewer identities, the value of public participation, and the value of reciprocity in the review process. Extending these considerations with respect to openness, communities must similarly decide what the ground rules for collegial engagement are: their expectations for civility, reciprocity, and response. Concerns that have been raised about open review often suggest that these processes will result in reviews that are insufficiently critical, or that they'll devolve into the kinds of behavior that we see in online newspaper comment sections. In fact, neither of these things need be true, but creating an atmosphere that's conducive to collegial and yet serious engagement requires careful stewardship. Last couple of slides. One of the largest problems that gets discussed with respect to the traditional peer review process is the labor problem: first, that there is an ever-expanding quantity of peer review to be done, and second, that this work is radically unevenly distributed, with good citizens being called upon again and again by editors desperate to get viable reviews in a timely fashion. In an open review process, the work that is done and not done by reviewers is visible. Even more, the work of review may itself become the subject of review, as the community can evaluate the participation of its members not only as authors but also as reviewers. Communities, however, must decide how this review of the reviewers will take place, how its results will be communicated, and what stakes it will have in the life of the community. There are a variety of technologies that can help communities of practice meet these goals, and we go into these technologies in some detail in the full report. But we continue to believe that, again, the most important systems with which review practices engage are less technological than they are social. Perhaps most important among these social engagements, for communities of practice considering open review processes, will be figuring out how to articulate their values for themselves and how their processes will support those values, in order that they might further communicate those values and even defend them as necessary to assessment bodies such as tenure committees, funders, and university administrations. Proponents of open review must find ways to situate their arguments about openness in relation to broader questions about the processes of scholarly discourse, the potential for public impact that these review processes produce, and the importance of the visibility of the 21st-century academic. Now, I've only been able to scratch the surface of this project in this talk, but I believe strongly that our most important conclusion is this: open review processes have a key role to play in modeling a conversational, collaborative discourse that not only hearkens back to the humanities' long investment in critical dialogue as the essential core of intellectual labor, but also models a forward-looking approach to scholarly production in a networked era. Open review presents the possibility not only of getting traditional forms of scholarship into communication with broader audiences, but also of helping to validate new kinds of scholarly output online. Making the process of assessment visible in a thoughtful and deliberate manner can only, we believe, help improve both the assessment and the work under evaluation. Thank you.
Thank you, Kathleen. We've got time for a quick couple of questions before we move on to our final session for the afternoon. Any questions or comments for Kathleen? I'm looking. In the front row here, and then back there. There you go. That's all right. There you go.

Thank you for this inspiring talk, and I hope that all my poor editors had listened, but they wouldn't have. Anyway, so far the notion among many scholars is that peer review has to create something like a scarce resource, and in a way the contrary is true. There is a journal called Atmospheric Chemistry and Physics, and they have a rejection rate that's by now down to 10%, and they mainly claim that this is due to open peer review. What would you, if you had power in your hands, do to persuade people that creating publishing space as a scarce resource through rigorous peer review is not the right road to travel?

This is an argument that I spend some time with in the book, and I'm really glad that you brought up Atmospheric Chemistry and Physics, because they're one of the examples that I gesture toward quite a lot. What the editors there have argued is that, in part, their rejection rate has gone down as much as it has precisely because academics hold back work that's not ready to be submitted until it is ready, when they know that the review is going to take place in the open. There's something to be said for understanding the moment of publication as sitting in a different place, one that actually causes us to do better work. That's one thing. The other thing is to say that there was a moment when publishing was a scarce resource, when only so many pages in so many books and journals could be produced each year. But now, in open networked publishing, reimposing an economy of scarcity via peer review on a model that should be as free and open as possible makes very little sense. It's completely counter to the way the network functions. When that happens, we end up with what we're having now: publications popping up outside of that fence like mushrooms, which we have no idea how to cope with. We don't know how to evaluate them. We don't know how to treat them on our CVs. Is a scholarly blog a serious publication? This debate has been going on for 10 years now. We still don't know where it should be listed on the CV. We need a system that's flexible enough to deal with new kinds of publishing systems and structures without reimposing old models on top of what is a very, very new network of communication. Go ahead.

Thank you for a great talk. You kindly said that you might be able to say a little bit more about the different kinds of comments received on your own book from the two processes, and it would be really interesting just to hear a little bit more about that.

Absolutely. I'll try to make this as brief as I can. The open review had 45 commenters involved and 295 total comments left throughout the book, many of them responding to one another. Of those 295 comments, some subset is me responding to things and asking for clarification and so forth. Because of the paragraph-by-paragraph commenting system, those comments tend to drill down on specifics in the volume. They point and say, here is where this problem is. They tend to be a bit more focused in that regard than traditional reviews are. Far more important than that, I think, is the social context in which all of this happens.
I didn't ask reviewers to leave their names, but the vast majority of them chose to do so, or at least chose to use the handle that they use in other scholarly spaces online. I knew who they were, and I knew how they talked about other kinds of things online. I had a context in which to place those reviews, and so when my colleague Natalia Cecire commented on something saying, you really need to tighten this point up, I knew to take that point seriously because it was Natalia. I know what kind of reader she is. I know how she responds to things. The two traditional reviews that I got were both phenomenally good. They were really, really strong, critical, thoughtful reviews. It turns out that one of the two reviewers asked the editor to pass on her identity to me, and the other one, within the first paragraph, I had totally identified who that reviewer was, because it was written in a voice that I totally know. So those were not anonymous reviews either, and I did have social context, but it was only accidental that that happened. I wouldn't otherwise have had that, and I could have been left saying, well, this reviewer thinks that chapter three's argument isn't as strong as it should be, but who is this reviewer? Why should I take that opinion seriously? As it turns out, I did, because I knew who those reviewers were. The other thing, though, is that those reviews tended to be a bit more holistic in nature, in part because you know that those reviewers will have read the entire manuscript rather than having dipped in and read only the chapter that they care about, and because they were specifically asked by the press questions like: are the chapters in the right order? Does the arc of the argument make logical sense? And so they dealt with those kinds of issues in a way that the online reviews didn't; the online reviews didn't have that same holistic perspective. There are also places in the online commenting, like chapter four, the one that's the most technical, where there are only a few comments in the chapter. And I have no way of reading that silence. I don't know if it means that everything is fine. I don't know if it means that everything is so embarrassingly bad that nobody wanted to comment on it. So trying to figure that out is something on which we still need to go some distance. In traditional reviews, the reviewers are asked specifically to say, if everything is fine, that chapter four is totally fine. And so figuring out how to develop an open review system that gets the best of both of those systems into an open, networked space is, I think, really important.

Kathleen, thank you. I think that, in a very neatly symmetrical way, Kathleen's talk has taken us right back to the points about the sociology of publishing that Jean-Claude mentioned first of all in his opening talk this morning, and has demonstrated them very profoundly in her comments on the peer review system. Thank you so much for that. Thank you.