So, well, good morning everyone. I am Elena Perez Tirador, and I am here to introduce the Decentralized Science project and how we are working towards decentralized peer review. This is my first time talking at a conference, and also my first time attending one, so everything is new to me. But I'm happy to be starting at this one, surrounded by people willing to improve science by means of decentralization. This talk will be structured as follows. First, I will briefly introduce myself and the project. Then I will explain the research and design process we are following, and I will show a mock-up to illustrate what the current design iteration looks like. I will give a quick overview of the architecture we are working on. And finally, I will explain the current position of the project and what we are looking for right now. So the first half of the talk will be centered on the functionality of the application we are building, and then, when I talk about the architecture, I will explain where the decentralization comes into action. As I said, my name is Elena Perez. I am a recent computer science graduate and a senior mathematics student, and I joined Decentralized Science four months ago, when the project started. I work here as a blockchain developer. Regarding Decentralized Science: we are a team based in Spain building a tool to help with the processes in academic publishing. In particular, we are aiming to improve the process of peer review, by increasing transparency and also by enhancing the process of finding reviewers. Our target is the growing open access publishing market, and we want to create an open ecosystem by extending existing tools. This project has received funding from the European Union's Horizon 2020 research and innovation programme, in particular within the framework of the Ledger project.
If you are interested, I think there is an open call for it right now, and it is a grant for blockchain-based projects, so maybe some of you should look it up. Well, first, during the initial months of the project, we wanted to learn about the problem in order to build an actual solution that would help the people involved. So the project started with a first research phase, in which the first concept of the product was designed. We follow the directives of design thinking and lean design, so we will be continuously iterating on the design as further interviews with potential clients, pilots, and so on are done. By now we have done several interviews with different actors in the system, such as journals, conferences, associations, universities and reviewers. From these interviews we found that there are several problems shared among all the entities we interviewed. These problems are: first, finding suitable reviewers for a paper, reviewers who have the needed knowledge of the field. Second, once these reviewers are found, getting them to accept the review. And finally, once they have accepted, there are several possible outcomes: some of the reviewers deliver on time, some of them deliver late, and some do not deliver at all. Taking all this into account, we started building our business model canvas to define aspects such as customer segments and a value proposition. To solve the problems we found, we defined a value proposition composed of three main functionalities. First, a specialized reviewer search: when an editor is looking for reviewers for a certain paper, this functionality helps find which ones are the best regarding both the content of the paper and the reviewers' interests. Second, reliability statistics.
That is, how many times each reviewer has delivered on time, delivered late, or not accepted to review at all. And finally, transparent peer review: making review reports publicly accessible. Among other advantages, such as increasing transparency, this can be combined with the first functionality to improve it. To implement these functionalities, we are integrating with existing platforms, so we reduce the friction for new users. Regarding the peer review and publication process as a whole, we come into action right after an editor has accepted a paper for review. For this reason, we are not building a standalone application, but a tool that can adapt and integrate into existing systems, such as, let's say, OJS for journals or expert review systems, as a plugin or extension. Now, I will show a mock-up to explain the functionalities we're currently working on. Since this is just the first iteration in a design process, this will likely not be how the final application looks, but it serves as an example of what we're working on right now. The first functionality is the search. As I explained before, after an editor has accepted a paper for review, they can search for reviewers for a certain topic, also applying filters if so desired. Once the search is done, a list containing the reviewers is displayed. This list contains the names of the reviewers, their reviewing interests, and a list of their published papers and review reports, in case they are public. It also shows the reliability statistics, which is the second functionality: how many times they have delivered on time, delivered late, and so on. And finally, as I just said, peer reviews are shown in case they are public. This is possible thanks to the third functionality, transparent peer review: reviewers can make their review reports public if they so desire, thus increasing transparency.
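The reliability statistics described above could be computed from simple per-reviewer delivery records. Here is a minimal sketch; the class name, fields, and the on-time rate are illustrative assumptions, not the project's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class ReviewerStats:
    """Per-reviewer delivery counts, as shown next to each search result."""
    on_time: int = 0        # reviews delivered within the deadline
    late: int = 0           # reviews delivered after the deadline
    not_delivered: int = 0  # accepted but never delivered

    @property
    def total(self) -> int:
        """Total number of accepted review assignments on record."""
        return self.on_time + self.late + self.not_delivered

    @property
    def on_time_rate(self) -> float:
        """Fraction of accepted reviews delivered on time (0.0 with no history)."""
        return self.on_time / self.total if self.total else 0.0

stats = ReviewerStats(on_time=8, late=1, not_delivered=1)
print(f"{stats.on_time_rate:.0%} on time over {stats.total} reviews")
# 80% on time over 10 reviews
```

An editor-facing UI would show these counts directly and could sort search results by the on-time rate.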
Now, I will give a quick overview of the architecture we're currently working on, as well as the architecture we plan to build as the project progresses. Here is a diagram showing our current architecture. We have an existing system, let's say OJS, with its own centralized logic and storage. What we are building is an extension that accesses the system's centralized storage to gather the data. This is done via a GraphQL interface. Once this data is gathered, it is processed by our system, and then the processed data is displayed in the UI extension, which is integrated into the UI of the host application itself. But this is still a centralized approach, and we're aiming for a wider structure. As shown here, once the base system is working, it will be easy to add interfaces to access decentralized storages, such as IPFS and blockchain. This decentralization brings several advantages. First, transparency: if all the information is public, journals and conferences will easily be able to show that their processes are fair and solid. Second, decentralized publication brings open access by design: if all the papers and information are stored in public, decentralized storage, it becomes harder to impose barriers that limit access to the information. And finally, this way we're also building an ecosystem of journals and conferences sharing information. What I mean by this is that a journal will not only be able to access its own list of known reviewers, but also a list shared among all the journals and conferences in our system. This means more data, and more valuable data.
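As a concrete illustration of the data flow just described, the extension might query the host system's storage through GraphQL roughly like this. The schema, field names, and endpoint are hypothetical assumptions for illustration, not the actual OJS or project API:

```python
import json
import urllib.request

# Hypothetical GraphQL query against an OJS-like backend: fetch reviewers
# matching a topic, together with their review delivery history.
REVIEWER_QUERY = """
query Reviewers($topic: String!) {
  reviewers(topic: $topic) {
    name
    interests
    reviews { deliveredOnTime public }
  }
}
"""

def build_request(endpoint: str, topic: str) -> urllib.request.Request:
    """Package the query as a standard GraphQL POST request."""
    payload = json.dumps({"query": REVIEWER_QUERY,
                          "variables": {"topic": topic}}).encode()
    return urllib.request.Request(endpoint, data=payload,
                                  headers={"Content-Type": "application/json"})

def public_review_count(reviewer: dict) -> int:
    """Count only the reviews the reviewer chose to make public."""
    return sum(1 for r in reviewer.get("reviews", []) if r.get("public"))

# Shape of one reviewer record the extension would process for the UI:
sample = {"name": "A. Reviewer", "interests": ["blockchain"],
          "reviews": [{"deliveredOnTime": True, "public": True},
                      {"deliveredOnTime": False, "public": False}]}
print(public_review_count(sample))  # 1
```

The point of the GraphQL layer is that the same query interface can later sit in front of decentralized storage (IPFS, a blockchain index) without the UI extension changing.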
And finally, since the interfaces we're building will be public, the system will be open source, and the storage is decentralized and public, any external client will be able to use all our interfaces to access the underlying system, this being an enabler for building an open, decentralized ecosystem of peer-reviewing applications. And now, to close this talk, I will explain our current position and how you can help us. As part of the agile development methodologies we're following, we're currently doing two pilots: one of them with the editorial unit of the Complutense University of Madrid, and the other with the IBERAMIA association, the Ibero-American Society of Artificial Intelligence, in particular for the journal Inteligencia Artificial. So if you are interested in our system and would like to try it, or maybe you know someone who could be interested, please contact us so we can see how we can help you. We are also still open to interviews with the different actors in the system. So if you are, or you know, someone running a conference, a journal, or an association with journals or conferences, or maybe you're a reviewer, you can also contact us, and we will be happy to talk to you. With this being said, you can find us on any of these platforms. Thank you very much for your attention. Are there any questions?

Yeah, great talk, by the way.

Thank you. It was the first time, I can't believe it.

It was professional, very good. Again, nice talk, thanks. One of the greatest challenges for editors is finding reviewers, building that pool, and maintaining that pool as it turns over over time. And reviewers that an editor has used in the past will be busy; they could be reviewing other works and just not have the capacity.
Can you speak a little bit more about how you'll build that pool, how you'll identify these reviewers in ways that may complement or differ from the conventional peer review systems that publishers use today? I'd be interested to understand how you're thinking about that.

Yeah. In the interviews we have done so far, we discovered that most of the editors keep their own lists in a spreadsheet or something like that; there is no specific system for them to store that, as far as I know. So our initial pool will be those manual databases the editors have, with the reviewers they know. That will be uploaded to the system and shared among all the journals. Then, as reviews are done, the review reports are public, and the results of those reviews will also be public, so that will also feed the information we have about the reviewers. I don't know if that answers your question.

No, that answers it, thank you. A suggestion for you: there are other services that index the scholarly and scientific literature, and they have built products and systems that provide recommendations to editors, to identify potential reviewers and to build their personal lists of relied-upon reviewers. So my suggestion is to look into how some of those services approach building that resource, and to think about how you can at least match, if not improve upon, the recommendations those systems are able to deliver.

Thank you.

Thank you very much for your very nice presentation, very concise and to the point. But my question is, what troubles me a little about your project is that the problems you identified about peer review are more or less familiar to people in academia, and there are already a lot of services and products with basically the same functionality.
I mean peer communities and peer review services, even Publons, which essentially has a big competitive edge over any new kind of project. And there is also a blockchain-based project supported by Digital Science, if I'm not mistaken, called Blockchain for Peer Review, by Katalysis. Last year they had a presentation which met with a rather critical reception, but still, they are covering basically this landscape. My question would be: how do you position yourself in this landscape, and how do you find a place for your product, given that there are many competing projects to improve peer review? Thank you.

Yeah, I guess that is a little of what I tried to explain here. Since everything is open source, public and decentralized, we will kind of enable the existence of, let's say, a decentralized Publons. Pretty much, we are building the basis for any decentralized application related to peer reviewing to be built; we're trying to facilitate that. And yeah, I think that would be it.

Thank you very much. OK.
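The pool-bootstrapping step described in the first answer above, importing each editor's spreadsheet and merging them into one shared reviewer pool, could be sketched as follows. The CSV column names and the choice of email as the deduplication key are assumptions for illustration:

```python
import csv
import io

def merge_reviewer_lists(*csv_texts: str) -> dict:
    """Merge several editors' CSV reviewer lists into one shared pool,
    deduplicating by email and unioning each reviewer's listed interests."""
    pool: dict = {}
    for text in csv_texts:
        for row in csv.DictReader(io.StringIO(text)):
            key = row["email"].strip().lower()  # normalize so duplicates collapse
            entry = pool.setdefault(key, {"name": row["name"], "interests": set()})
            entry["interests"].update(
                t.strip() for t in row["interests"].split(";") if t.strip())
    return pool

# Two journals upload overlapping lists; the pool merges them into one entry.
journal_a = "name,email,interests\nAna Ruiz,ana@uni.es,blockchain;p2p\n"
journal_b = "name,email,interests\nAna Ruiz,ANA@uni.es,peer review\n"
pool = merge_reviewer_lists(journal_a, journal_b)
print(sorted(pool["ana@uni.es"]["interests"]))
# ['blockchain', 'p2p', 'peer review']
```

Public review outcomes would then be appended to each pooled entry over time, feeding the reliability statistics shown in the search results.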