the authors pick up the right research questions, right? So that's the very first thing, and there are five different aspects of it that need to be understood. First, I need a clear understanding of what core research question is being addressed, and that should be clear to me right at the beginning; it should not be fuzzy or hazy, so to say. The second question I would ask when reviewing is: are the research questions compatible with the agenda of the event, the journal, or the special issue, whatever that is? Every journal and every conference has its own scope, so is the work aligned with the scope of the venue? That's the second question. Then comes the third question: are the research questions still relevant to the community to which this scholarly communication is addressed, meaning the people who are actually going to read this research and develop further ideas on top of it? Will the community really be interested in reading, understanding, or dwelling on these research questions? Then question number four, which is really important for me at a personal level: are the research questions intrinsically difficult to address? Because if a research question is not intrinsically difficult to address, then it's probably not really a research problem, so to say, and probably not something that will attract much attention from the community. And then the final question is: how many researchers are up to the task?
How many researchers will actually be working on those research questions, in terms of the size of the community? Now, don't get me wrong: in certain disciplines a smaller number is actually good. If you look at the humanities and certain kinds of social science problems, it is the uniqueness of the research question that draws more eyeballs, so to say, while in other disciplines, like engineering, computer science, and the applied mathematical sciences, the bigger the interested community, the better. So it can vary, but the question is still important to ask, depending on your discipline: what is the size of the community that would be interested once the paper is published? So that's step zero: trying to understand the research question from these five different perspectives. Then comes the first step, which is checking the consistency, the validity, and the positioning of the approach, that is, the way the research question is being addressed. You need to validate the content constructively. The word constructively is very important, because at the end of the day your responsibility as a reviewer is to give constructive criticism so that the researchers can actually improve the work and resubmit it later in a much better form, and so that ultimately the community gains from that research. Essentially it's a community service; we are all primarily doing voluntary reviewing. So check for inconsistencies in the argumentation, and check the validity of the citations used as supporting evidence, because a lot of the time somebody might be using a citation as evidence to support a particular claim.
We need to check whether that is really true, or to what extent it is true, and whether it has been misrepresented or misunderstood by the authors. A lot of the time it may not be intentional: the cited work may have been only half understood, with not the full truth revealed; it can happen quite inadvertently. But as responsible reviewers we need to catch that part. The third thing is to check the validity and originality of the claims. There can be multiple claims spanning the entire manuscript, and at every point we need to first quickly identify those claims, and then try to understand to what extent they are original: whether there are other works out there that resemble the claims, whether other people have already claimed these things, and to what extent. Then we use those references as part of our review to make the authors aware of them, because it is also our responsibility as reviewers to point the authors to all the works they should be looking into. And finally, check whether the work has been positioned comprehensively against all the relevant previous research. So that's pretty much how it goes when it comes to consistency checking, validity checking, and positioning checking. That's step number one.
Then step number two is a check for methodological rigor, which concerns whatever approach you are taking to address the research question and establish your claim: all the steps taken, all the methods used, qualitative or quantitative, all the arguments made to justify the narrative and the claim. To what extent is all the required supporting evidence provided? Essentially, you need to check whether all the arguments are backed by authentic and authoritative facts. Not just authentic, but authoritative as well, so both things. That would imply checking whether there are missing citations, and, going back to the previous discussion, if the authors cited somebody, whether those citations are really authentic and authoritative. The second thing, if it is an experimental science, is whether the right kind of sampling has been done: if the results used to establish the claim came out of a particular kind of experiment, then whether the sampling was done in the right way, and whether there is any bias in the experimental setup itself. Sometimes you also want to look at whether the right kind of time regularity has been maintained; these points matter a lot when you are working in the biomedical or medical domain, clinical trials, and so on. So: are all the experiments conducted in an appropriately controlled environment?
And this is also true for a lot of social science experiments, so you need to make sure all of these things are in order. Is the analysis of the results and findings, or the method used to obtain those results, acceptable in the community, and is it comprehensive enough? In other words, are the results incomplete? If certain results are left out, the claim cannot be supported that well; so do the claims the authors are making in the paper have the right kind of support, or do they suffer from incomplete support? That is, is the minimum amount of results required to support the claim actually present in the paper? Then, do the results satisfy acceptable precision or accuracy, depending, of course, on the discipline you are in? Sometimes the accuracy of the results has to be assessed properly using the right kind of measures, which tells us whether those results are acceptable or not. And finally, are the figures and tables all there, and are they truly supporting the claim? At the same time, have the authors of the manuscript properly explained the tables and figures, interpreted them in the right way, and used them to support their claim in the right way? These are the kinds of things that help you figure out to what extent the approach is sound and rigorous.
So that's what we need to check in step two. By the way, keep putting your questions in in between, and let me know if I'm going too quickly, so that I can get back to those questions after I finish. Now, all of this, as you can imagine, is not an easy task, starting from step zero through step one and step two. You need to pay a lot of attention. You also need to remember all the stuff you have already read, all the papers you are reading, at a detailed level, not just at a surface level. If you really want to do justice to all these steps, you have to have everything at your fingertips. So the big challenge is: how do you keep up with the state-of-the-art literature? And even if you are keeping up with it, how do you make sure you remember all the details of what you are consuming almost every day? So first, let's look at the status quo, how we usually do it. For our own research we keep collecting and reading papers; that's part of our own work, not just reviewing, and we do a lot of literature review on current research around our topic. While reviewing a particular paper, we look at the research topic or the research problem, and then we go to Google Scholar or PubMed and search through those different databases to figure out whether there is something similar, in order to evaluate the novelty or originality of the study. And periodically we also review certain core journals in the field to keep ourselves updated. That's pretty much the standard practice we all follow.
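As a side note, the database search just described can also be scripted against a public scholarly index. The sketch below builds a query URL for the Crossref works API, combining a keyword query, a publication-date filter, and a result limit; the keywords and dates are made-up examples, and this is just one illustrative way to automate a keyword-plus-filters search, not anything specific to the tool discussed later.

```python
from urllib.parse import urlencode

CROSSREF_WORKS = "https://api.crossref.org/works"  # public Crossref REST endpoint

def build_search_url(keywords, from_date=None, rows=10):
    """Build a Crossref works query URL for a keyword-based literature search."""
    params = {"query": " ".join(keywords), "rows": rows, "sort": "relevance"}
    filters = ["type:journal-article"]                # restrict to journal articles
    if from_date:
        filters.append(f"from-pub-date:{from_date}")  # only work published since this date
    params["filter"] = ",".join(filters)
    return f"{CROSSREF_WORKS}?{urlencode(params)}"

# Example: papers on peer review and citation analysis published since 2019
print(build_search_url(["peer review", "citation analysis"],
                       from_date="2019-01-01", rows=5))
```

Fetching that URL with any HTTP client returns JSON whose `message.items` list carries titles, DOIs, and, where available, abstracts.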
And in many places, with PubMed, Google Scholar, and many other search databases, you can actually set article alerts for topics relevant to, say, the manuscript under review. So that's one way of doing it, and that's probably how many of us are working currently. There are other, more specific ways to improve your search strategies for locating articles: you can do keyword-specific searches to find important papers that might help you evaluate and validate the claims; you can use different types of filters and limits to optimize the search results; and you can restrict yourself to those sources most relevant to the manuscript under review. One approach is to stick to the journals that are closely associated with the journal or conference where the paper has been submitted, which means where you are reviewing. You then have to summarize, critique, and compare all these sources in order to assess the manuscript with integrity. So there is a lot of work, a lot of cognitive load, let's put it that way. With millions of papers published annually worldwide, this task is, to say the least, quite challenging, but we still manage to do it in some way. But the real issue is this: there is so much to validate, and it is impossible to remember all the details. That's the major problem. Even if we can find the papers in our own way, the biggest challenge is how to remember all the details out there, and how to connect them with the paper we are currently reviewing. That's one thing. The second thing is that there are so many checklists.
Step zero, step one, step two: there are so many checklist items, and that is a huge cognitive load. How can we do all of that when we have our own research and just don't have that much time? Beyond the research itself we have student supervision, if we are professors or faculty; or, if we are in corporate research, many other responsibilities come together, plus other kinds of academic service like committee work and so on. There is so much going on in life, so if we are given three, four, five papers to review and have to do steps zero, one, and two in detail, that's quite challenging. So this is our attempt to figure out whether we can build something that actually helps you deliver high-quality reviews in much less time. It sounds kind of paradoxical, but that's the holy grail, and that's what we are going for with Review Assistant. So how do we do it? Before that, I just want to show you the webpage where you can get it. This is raster.io, and you can just go to Review Assistant; this is the home page, and all of you get one year of free usage if you are an early adopter. Just get early access with the code PRW21 and you can get in today; you don't have to go onto a waitlist. So this is the webpage, and once you enter, this is how it looks. This is where you start the review process. It all starts with a new review: you can create a review, there will be a button like this, New Review, and once you click it, it will ask you for certain details. First, you need to select the type of review.
You will also need to add the journal title, a description, and certain tags. Under type of review, you can click and it will ask whether it is a journal, a conference, a clinical trial, or some other kind of technical manuscript you need to review. Once you do that, you can add a description, and the tags, depending on what you chose, will come up automatically over here. Then, in the second step, depending on what you chose: if you chose a journal, it will ask for the volume, the issue, the year, and the submission portal, meaning where you actually got the manuscript from. That could be Manuscript Central, ScholarOne, or any of the other places where you might be getting manuscripts to review. If it is a conference, there is EasyChair and all the other portals where you would get papers to review, and you can put that link. There is an interesting thing here: you can actually set the review deadline, and it behaves like a reminder for you; you can put a reminder or a deadline for yourself for this review. If it is a conference, it will ask for the venue and the year, and for other kinds of submissions it will just ask for the year and the submission portal. Then you create the new review, and once you do, you will see all your reviews in one place.
These are just examples, so you can have reviews like this; this one, by the way, is an imagined conference, and this is a journal, so you see all your journals together with their deadlines. The deadline will turn red, and there will be an email notification once you are three days before the deadline, whatever deadline you set. By default the deadline is 90 days, three months, because for practically all journals you need to submit your review within three months. So by default every review has a 90-day deadline; after that it is overdue anyway. And you can edit the details from these three buttons; that's pretty standard if you are used to Google products. Once you are inside a review, you need to add the submission, that is, the manuscript. You do that by clicking Add Submission and then uploading the paper: wherever you get the manuscript to review, you download it and upload it over here. That's the alpha version for you. We are trying to see how we can partner with the different publishers and journals so that we can make it a seamless, one-click thing that immediately imports all your submissions into this space, but let's see how it goes. For now, you can upload the paper from your computer. Imagine I have these four papers to review: the submissions of these manuscripts will appear over here once I upload them. I can also see the decisions over here: what decisions I have made as a reviewer and which ones are still pending.
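The deadline behaviour described above can be sketched as a tiny status function. The status names and the exact thresholds below are my own assumptions for illustration, not the tool's actual implementation; they only mirror the 90-day default and three-day warning window mentioned in the talk.

```python
from datetime import date, timedelta
from typing import Optional

DEFAULT_REVIEW_PERIOD = timedelta(days=90)  # default deadline: 90 days after creation
WARNING_WINDOW = timedelta(days=3)          # turns red / emails 3 days before the deadline

def deadline_status(created: date, today: date,
                    deadline: Optional[date] = None) -> str:
    """Classify a review's deadline as 'ok', 'due-soon', or 'overdue'."""
    due = deadline or (created + DEFAULT_REVIEW_PERIOD)
    if today > due:
        return "overdue"
    if due - today <= WARNING_WINDOW:
        return "due-soon"
    return "ok"

# 2021-01-01 + 90 days falls on 2021-04-01, so late March is inside the warning window
print(deadline_status(date(2021, 1, 1), date(2021, 3, 31)))  # due-soon
```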
I can also assign specific times to specific submissions, saying, okay, I need to get this review done by next Tuesday, or something like that; I can do that over here as well. So this is my list of manuscripts, the submissions. If I click on any one of these papers, it opens the review room, which is the place where I will actually be doing my review. When I click on one of these papers, it opens up in a reader mode, and this is how it looks. You can of course navigate from here; that's pretty standard if you are using something like Google Drive. Then on the right-hand panel you have all this information: the tabs, the references, figures, tables, and the submission scorecard. The scorecard reflects the venue: if it is a conference, it will be something like accept or outright reject; if it is a journal, it will be revision, major revision, minor revision, and so on. We'll get to that. So these are the different things: the title, your review, the type, the abstract, which you can look at over here, and the submission scorecard, which allows for much better organization of your decisions; you can go back and change your decisions as well. Great. Now comes the interesting part. The first thing you probably want to do is validate the comprehensiveness of the citations of this paper. The paper might have cited a lot of different things, so you need to figure out whether there are other very important papers out there that should have been cited, that is, to what extent the citations of this paper are comprehensive. That may be the starting point. So over here there is something called Explore Related, and when you click on it, you'll see this banner.
You can look for research papers in terms of related research, similar research questions, and survey papers related to this problem statement. There are also some complementary resources that might help you quickly figure out what a particular concept is, if you are not aware of it, or if it is a technical concept that is not very popular and you want to understand it quickly. You can look up video lectures on the specific topic, or technical articles from top-notch places like MIT Technology Review, Scientific American, Nature, and other kinds of tech reports and blogs. And if we can help you partner with your university, you can also have university resources available, which might include books, e-books, theses, and other research papers subscribed to by your university library. This can also come, provided that with your help we can partner with your university. Now, once you click on, say, Related Research or Similar Research Questions, you get all these results, specific to what you clicked, and then you will find something called Add to Literature. Imagine you want to figure out whether this is a paper that should have been cited by the authors: you can add it to your literature, and it gets added to a specific place and also attached to this manuscript. Later on you can go back to the paper, read it, and figure out whether certain things should have been taken into consideration by the authors of the manuscript.
So you can easily figure out everything that is going on. You can filter the results; I have not shown you the different filters, but you can filter by year and year interval, and sort the results. A lot of the time you can also see other kinds of metadata, like impact factor and citation counts. Once you click Add to Literature, there is a section over here called Related Literature, just under Submissions, where all the related references you might want to use in your final review, as you address the authors, are organized. That's a good way to validate things, and later on you can actually look into them. I will also show you something called Summary: you can use it to very quickly skim through these papers, because a lot of the time we don't really have the time to read all of them; that's the deal, we are facing a time crunch. So we can look at the summary of each paper we are collecting and then quickly write a comment when we are writing the review of the manuscript. Another thing to understand is the references the authors have actually used: to what extent are they good references, and have they been used in the right manner? If they have been used as supporting evidence, to what extent is that usage correct? Again, you can click on References over here and all the references come up, very similar to the results cards, and you can add them to the literature the same way.
If you don't have enough time to read all of these papers you are adding in order to validate them, then, as I said, you can use Summarize. By clicking Summarize over here in the related literature, on any of the papers you have collected, you get a section-wise summary, which I will show you, and which can very quickly help you figure out whether the references have been used in the right manner, or whether there are other references that the authors missed. So this is the place where everything keeps getting collected. At the same time, when you click Add to Literature over here, the paper also gets attached to that particular manuscript: it gets added here broadly, but it is also attached to the specific manuscript, like satellites, so to say. When you go back to the manuscript, you can click on Attached and see all the papers you added to the literature for that specific manuscript. Now comes the question of how to quickly focus on the key aspects of a paper. There are two ways of doing it. One, as I said, is in the review room over here, where there are certain tabs: you have the summary, the key insights, and the review, where you will ultimately be reviewing the paper. Inside the summary, now, this is something very interesting; it is very different from a normal, standard summary. What we did was summarize the different sections of the paper; we did not skip any sections. Why? Because a lot of the time you might want to look at a specific aspect of the paper and quickly figure out the key point there: the key contribution, perhaps, of the claim, or what the authors are actually trying to establish in that particular section.
So it's a section-wise summary; that's the interesting thing here. You can very quickly scan, or click on the outline of a particular section, and that section pops up and you can just read it. Or there is another interesting thing: you can actually access the specific arguments, which is called Key Insights. If you click on the insights, then instead of the summary it collates all the different narrative arguments, say the research goal, claimed novelty, speculations, assumptions, comparisons, and so on, into different buckets, so to say. All the key claims or statements made by the authors are collated together within each argument type, and you can go specifically to a section and quickly figure out the research goal, the research context, the possible originality, the supporting arguments, and so on. So that's a very quick way of skimming through the paper. These are two different ways of skimming through the paper very quickly to figure things out, or to focus on specific aspects of the manuscript if you want to. Now comes validating the key aspects. That was a very quick way of finding, or narrowing down to, the key aspects; so how do you validate them? One way is that you can select, let's say, Possible Originality, and you want to validate a statement there: to what extent is it original? Are there really other materials out there that are closely related to the narrative or the argument in this particular statement? What you can do is select the statement, and this pop-up, this context menu, comes up, and then you can click on Research Papers.
When you click on Related Research Papers, all the papers related to this particular selection pop up over here. By the way, just to help you understand, we have currently indexed more than 117 million research paper abstracts and somewhere around 50 to 60 million open-access full-text research papers. There is plenty of stuff we are indexing every day, maybe 2,000 to 4,000 research papers, and we have partnerships with many different data partners, including Crossref and Unpaywall. That's the source we are using to get you all these results. Once you find the results, if you want to figure out whether a paper really talks about this thing, or how closely related it is, so that you can make a very nice, detailed comment about this statement, you can staple it. Now, a staple acts pretty much like a sticky note. What happens when you staple it? It gets attached to that particular statement, and then you can click on the statement and see the reference that is stapled to it. This helps you organize your reference articles in a much better way, so that you don't forget: you can come back even a week or two later and write a very nice review, because you haven't forgotten the context in which you discovered all these references. That's the beautiful thing about it. Finally, here is where you will be writing the review. You can write the review in your own way, or according to the template prescribed by the particular journal or conference, which you can copy and paste.
You can paste that template over here, start writing your review, and then finally copy it all; it gets copied to the clipboard, and you can paste it wherever you need to submit the review. A lot of the time you might want to drop your opinions, insights, questions, and suggestions into something called Working Notes. If you have new thoughts, new ideas, or new questions, or you just want to do something like thought journaling, you can always use the working notes and put your ideas and thoughts there. Also, sometimes you might want to link up other kinds of external references, maybe some code, images, videos, or recordings, which you might want to point the authors to in your review as things they should check out; that comes under Repository. So that's pretty much how you would go about it. One important thing I want to share is that whatever I did over here, you can do the same thing on the research paper itself. You can select any paragraph or any section of the paper, and the moment you select it, you get the same context menu and can click on all of these things. You can use Select and Explore, which means you select a particular section and then explore all the papers related to that specific section, within the context of the paper or the manuscript, of course. Now, the finishing touch: that would be step three. You need to check the structural and narrative flow, and here is something I see often, because I do a lot of reviewing myself: does the manuscript suffer from forward referencing? Let me define forward referencing; this is also for the researchers and research students who are watching this.
Forward referencing is when you use a certain concept, idea, or technique to explain something, but that concept, idea, or technique has not been explained before; it is explained only later in the paper. Let's say you are trying to explain a particular thing on page three using something else, and that something else is introduced on page seven. That's forward referencing, and it's very problematic from a reading point of view, something you should try to minimize as much as possible. So we need to check all that stuff. Does the manuscript lack crisp and complete descriptions or definitions of key concepts? Does the manuscript lack the formal tone that is expected in the specific community? Does the manuscript suffer from repetitive narration? That's another thing we need to focus on. And then finally, certain ornamental stuff: is the manuscript compliant with the required typesetting and citation standards? For certain disciplines that may not be required, but for many other disciplines it is very important, and as a reviewer you might be responsible for checking this as well. After that, when you finish your review you get the scorecard. This is the scorecard over here in the info panel on the right-hand side of your review room, where you are actually reviewing the paper. Whatever you select over here gets reflected automatically in your submission list, so you can easily check what kind of decision you made for each one of those submissions. This is how it looks: it can be a strong accept or a weak accept, or, if it is a revision, a minor or a major revision, or a weak reject or a strong reject, and so on.
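The talk doesn't describe how any tool would detect forward referencing, but the idea itself is easy to make concrete. A minimal sketch, purely for illustration: record the page where each key term is introduced, then flag any page that uses a term before its introduction (the term list and page texts are assumed to come from some upstream extraction step):

```python
def find_forward_references(pages, definitions):
    """Flag terms that are used before the page where they are defined.

    pages: list of page texts, in reading order.
    definitions: dict mapping term -> index of the page introducing it.
    Returns a list of (term, page_used, page_defined) tuples.
    """
    issues = []
    for page_idx, text in enumerate(pages):
        for term, def_idx in definitions.items():
            # a use strictly before the defining page is a forward reference
            if term in text and page_idx < def_idx:
                issues.append((term, page_idx, def_idx))
    return issues

# toy manuscript: "spectral pruning" is used on page 0 but defined on page 2
pages = [
    "We apply spectral pruning to the graph.",
    "Background material.",
    "We now introduce spectral pruning formally.",
]
issues = find_forward_references(pages, {"spectral pruning": 2})
```

A real checker would need phrase normalization and a way to distinguish a definition from a mere mention, but the page-index comparison above is the core of the check.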
You can take your decision, and later on you can change it if you want to change your position. So let's review with rigor, but also with empathy. This is very important: at the end of the day we should not forget that we are doing this as a service to the community, for the betterment of research and of the researchers who are submitting the paper. So start reviewing with Review Assistant; this is the website, and there is a code that you need to use to get access through this webinar. Thank you, Professor Dasgupta, for such a wonderful and informative session. We are now open to take questions. We request all the attendees to kindly send us their queries using the question tab in the control panel. Okay, we have a few questions already. The first question is: is the software freely available, or does it have to be purchased? It's freely available. Review Assistant is freely available for one year, because we want to get as much of your comments and feedback as possible, and if you sign up early then you get it free for one year, definitely. All right. The next question is: a characteristic of poorly written papers is that they bring many unrelated ideas into one section. Would the section-wise summary algorithm detect this? Right now, no. This is something that's really interesting, and it can be a very interesting research problem for us down the road. Right now it won't be able to detect those things, but it will at least be able to bring your attention to them, so that it becomes clear to you that they are unrelated. All right. The next question is: how do you deal with confidentiality of uploaded materials? As you could see, and I don't know whether you noticed it, first of all, there is no sharing capability. You can't share the manuscript with anybody.
That's number one. Number two, whatever you are uploading is completely secure. We follow standard GDPR and similar data-protection practices. Anything and everything that you put on our server is kept in secure containers, and it's not revealed. In fact, we don't even allow you to share the review manuscript with anyone, and it's all encrypted. Also, your own reviews: if you think that's a contribution, because it is a contribution, you can timestamp them with a Creative Commons license that you can put over there as well. That will make sure that, okay, this is something on record. This is quite interesting, because right now, what do you do? You download the manuscript somewhere, you upload it somewhere, you might even read it as a hard copy. So it's out there; it's not inside that submission portal. With us, the responsibility is ours, definitely, but the software makes sure that it doesn't get leaked in any way to anybody else. Right. The next question is: for how long will this tool be accessible for free? I think that is already answered: a minimum of one year. And we are trying to see how we can help you further, so feel free to put in your comments on what you care about. We haven't really finalized things, because I think there's a lot of feedback that will come from you, and it will help us polish the product a lot better. Right now we are focusing on that. All right, the next question: are you planning to link Review Assistant with Publons and ORCID, or ResearchGate? Yes, we are. The idea is that it becomes a seamless import and export of your review, in the format of that particular journal or conference.
So yeah, whether it can be done in Publons, in open review like openreview.net, or even in closed review, we want to do all of those things. These are the kinds of feedback and ideas that we want from all of you. Will Review Assistant help with references? References in what sense? If I'm understanding it correctly, then yes: as I showed you, you get all the references, and you can validate whether those references are used in the right way by clicking on the summaries of those references and quickly figuring it out. You can also see whether there are closely related or key papers that the authors missed, which are not part of the references, by selecting specific aspects of the paper and then searching for related research papers. I hope I got that correctly; if not, please re-post your question and it will be better. All right. After the first year, how much would the subscription be? Any estimation? Not yet decided. We want to make sure that you start loving it first. One thing I can tell you: if you really love it, then it's not going to be too steep. That's something we can share, but we have just no clue right now. All right. When you say review, do you refer to a review paper I can potentially write about a field, or to reviewing a journal article as a reviewer? It can be a journal article. It can be a conference article, because in certain disciplines even conference papers are peer reviewed. It can be a clinical trial, any kind of tech report, a manuscript on which you are giving a major-revision review, or a systematic review which you are trying to review.
So it depends on what kind of manuscript it is. The summarizer, or the key insights engine, is actually able to understand what category the manuscript is and then tries to summarize it accordingly. So hopefully that answers it; we have done the modeling in such a way that most of the time it will be able to pick that up. The next question is: what extra precautions should be taken to get published in Q1 journals? Look at it this way: if you are a research student or a researcher, once you have written your first draft, try to put yourself in a reviewer's shoes, using whatever I've shown you in step zero, step one, step two. Think of yourself as a reviewer and be very critical about your own draft, and then you will see what the precautions are. If you are submitting to a very good venue, the reviewers are going to make at least a very honest attempt to go through that whole checklist, step zero, step one, step two. So you need to make sure that everything is correctly done. Then give it to your lab mates and let them do the same thing, and then go to your supervisor, probably, and get that done. The draft should pass through a certain number of hands before you actually submit. These are the checklists; it's nice to have such a checklist or template. All right, the next question is: how do you export the review when finished? This is something we are trying to work on with different journal editorial boards and publishers, so that with just one click of a button the review goes back into their system.
But right now what you can do is just copy all; there is a button called copy all, and everything gets copied into your clipboard, and then you just paste it wherever you want to submit. That's how it is in this alpha version. But we are trying hard; I've already started a lot of discussions with many top journals and publishers to see how we can do it nicely within their workflow. The next question is: did you use any NLP tools to create the model, and if so, what problems did you face? Yes, there's a lot of NLP, a lot of machine learning, different techniques, not just one, and not just deep learning. As an example, the suggestions come from a certain kind of reinforcement learning (Q-learning) framework. The retrieval is based on a certain kind of core concept extraction and argumentation mining, so to say, to understand things from a contextual point of view: what are the different contexts in which this particular term appears, so that if you are looking for one particular concept, similar or related concepts pop up as well. So it's not just keyword based. And for the summary, we use a particular kind of machine-learning-based attention modeling technique, where we make the summarizer keep pointing to the different important statements of the paper and then try to figure out which ones have the most information content, and also diversity, because we need coverage too. We cannot show you statements that are quite similar to each other; we need good coverage as well. So we are optimizing the information content while also maintaining diversity.
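The product's actual summarizer isn't spelled out here, but the "information content plus diversity" idea can be sketched with a classic greedy maximal-marginal-relevance loop: at each step, pick the statement that scores highest on informativeness minus its redundancy against what has already been chosen. The `score` and `similarity` callbacks below are placeholders, not the system's real models:

```python
def select_diverse(sentences, score, similarity, k=3, lam=0.7):
    """Greedy MMR-style selection: each step picks the sentence maximizing
    lam * informativeness - (1 - lam) * similarity to sentences already
    chosen, trading information content against redundancy."""
    chosen = []
    candidates = list(sentences)
    while candidates and len(chosen) < k:
        def mmr(s):
            redundancy = max((similarity(s, c) for c in chosen), default=0.0)
            return lam * score(s) - (1 - lam) * redundancy
        best = max(candidates, key=mmr)
        chosen.append(best)
        candidates.remove(best)
    return chosen

# toy demo: word-overlap (Jaccard) as similarity, length as "information"
def overlap(a, b):
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / max(len(wa | wb), 1)

sents = [
    "the model improves accuracy on the benchmark",
    "the model improves accuracy on the dataset",
    "training cost is reduced by caching gradients",
]
picked = select_diverse(sents, score=lambda s: len(s.split()) / 10,
                        similarity=overlap, k=2)
```

With these toy callbacks, the two near-duplicate sentences are not both selected: after the first is chosen, the second's high overlap drags its MMR score below that of the unrelated third sentence, which is exactly the coverage behavior described above.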
That's kind of how we are trying to work with the summarizer, and the key insights engine is pretty similar, but there is an additional layer where we also try to classify the arguments and then figure out which are the key statements within those arguments, again with that attention mechanism. The attention is pretty much inspired by a certain kind of vision learning model. So that's the sort of thing going on behind the scenes: different techniques for different pieces, from Q-learning, which is reinforcement learning, to a lot of deep learning techniques, shallow learning techniques, argumentation mining techniques, vision-inspired techniques, different things. Any other questions? Hello, am I audible? Hello. Yes, you are. There is one last question, about the pricing. If an institution gets access to it by paying the fee, does it mean that it could tell all their reviewers to use it? In other words, would it be free for reviewers who belong to a certain institution? Yeah, I mean, if there is enough interest at an institutional level, or even at a lab level, then it becomes free for everyone there. All right. Thank you so much, Professor Dasgupta. Our attendees have certainly gained a lot of critical information from this session. We would once again like to thank all the attendees for joining the webinar. Please find the exclusive code to access Review Assistant in the chat box. We would also request you all to please fill in the feedback survey displayed after you leave the webinar. Your participation will allow us to evaluate the effectiveness of our webinar. Have a good day. Thank you so much. Be safe, all of you, and thank you for coming. Bye. Thank you.