Thank you for joining us. I want to tell you a little bit about our speaker, Melissa Rethlefsen. I know Melissa through the Medical Library Association; we work together on the Public Health and Health Administration section. She has had a long career. Right now she is the Associate Dean of the Smathers Libraries and the Fackler Director of the Health Science Center Libraries at the University of Florida in Gainesville, and she's been doing that since 2018. In her role she also partners with the Clinical and Translational Science Institute at UF to provide high-quality systematic review services to those researchers. Previously she's been at several really highly ranked institutions. One is the University of Utah. There she was the section director of the Systematic Review Core in their CTSA program. She was director of the National Network of Libraries of Medicine in the MidContinental Region, which at the time was based in Utah as well, at the Spencer S. Eccles Health Sciences Library at the University of Utah. She began her career, which I really love, at the Minnesota Department of Health. Then, if I'm right, she became a librarian, and that career started at the University of Minnesota Biomedical Library. She stayed in Minnesota but transferred to the Mayo Clinic Libraries in Rochester. The thing to know about Melissa is that she is all about the reproducibility of the systematic review search strategy, which truly, as we were just talking about, is the essence of systematic reviews. It is at its essence about reproducibility and transparency. We have to see what is known in order for us to know what the evidence is, and therefore advance it through the kind of work we do as librarians and you do as researchers and graduate students. Melissa is currently leading the development of an extension to the Prisma statement devoted to reporting the literature search strategy, and that's what she's going to talk about today. 
I want to put in a public service announcement for her. She is hosting the Research Reproducibility 2020 webinar on December 2nd and 3rd. In the chat, I have her link to register and a link to the agenda. Melissa, I think it's interesting that on the agenda is a topic geared towards social workers, called Towards Reproducible Social Work Research: a graduate course on reproducibility, rigor, and metascience. Also in the chat, I'm going to put a link to her scholarship in case you'd like to follow her, too. You can follow her on LinkedIn, Facebook, Twitter; I don't know which one she prefers the most. But with that, I will let you start, Melissa. All right. Well, thank you for that introduction, Elaine. I will also plug the Research Reproducibility conference. It was originally scheduled for March 17th in person, and in some ways it's so fortunate that we had to cancel it, because that prompted us to move it online. It's really designed to be an interdisciplinary conference focusing on how we can educate our students and ourselves on how to do research more reproducibly. So I would encourage you to take a look if that's something that's of interest to you. The reproducibility of search strategies, as Elaine said, is something that I'm really interested in, and it is what led me to hosting these conferences. This is the third conference on research reproducibility that I've hosted, so I'm hoping it will be the best yet. And I would encourage you to register if you have any interest in that. All right. So again, thank you, Elaine, for inviting me to do this talk. This is something I'm really interested in, and I think Elaine wrote in the email that she sent out to all of you that I'm hoping you'll have as much fun as I did putting this together in our conversation here today. What I'm going to do is talk about literature search reporting, and specifically about using the Prisma S tool. 
We'll go through some examples of how Prisma S looks when you're looking at real-life strategies, to give you a clue as to why currently systematic reviews, which are really designed to be reproducible and transparent and reduce bias, have been falling short, like greatly short, particularly in the area of literature search strategies. Then we'll get a chance to go through some, and hopefully get your ideas on what you think are the good, the bad, and the ugly components of reporting some of these search strategies. So thinking about this whole topic: why is it important to even consider transparency and reproducibility of the search strategy component of a systematic review? Elaine already said it: systematic review searches are really the core methodology of systematic reviews. Without information about the search, you actually can't assess whether or not it was done well. You can't reproduce it to see if you can get the same results. And you really just have no clue how a systematic review was done. Systematic reviews, as I'm sure you've learned in some of the previous sessions, are a methodology developed to have strict rules and regulations around how the methods are conducted. It's a methods-driven methodology, not an expert-driven methodology. And the search is one of the things that needs to be reproducible and transparent in order to reduce that bias and allow people to assess it. It also helps in terms of reducing duplicative research. A lot of times a researcher might be reading an article, maybe a systematic review that was published in 2008, let's say, and the researcher wants to update it. Well, you wouldn't be able to update a systematic review unless you were able to actually reproduce the search. And so these updates are often really good reasons to have a search that's reproducible. But it's not just the updates. Let's say that you are a researcher. 
You see a systematic review that's published. They don't have the details of the search. You can't assess if it's any good or not. Then let's say that you decide, okay, based on the fact that you can't tell if this is introducing bias into the literature, you're going to redo it. And you're going to do it yourself, because you would be able to trust the methodology, and you're going to do it better. Well, that means there might be a lot of effort that was duplicated. If the search is not reported well, even if it was conducted well and designed in a way to reduce bias, it's simply not accessible to the readers and to future researchers who might actually want to utilize that information. So why Prisma S? As Elaine mentioned, it's an extension to the Prisma statement. It was developed for a couple of different reasons, but the first and, I think, most important was to give guidance to people who are writing systematic reviews and want to make sure that the information they're presenting is transparent. It was largely inspired by a project I was doing about reproducibility of search strategies. My colleague Jonathan and I, who were working on this article together, would get into these long debates about whether or not a specific thing was reproducible or not. If it said databases were searched from inception, is that the same as saying what the date range was? If it said that they were searched up until a certain date, are we supposed to ascertain that that's the date the search occurred? Jonathan and I would go back and forth, and we would have all of these arguments about these small details. But they really were important to trying to figure out what was truly reproducible and what needed to be available in a manuscript in order to reproduce a systematic review. So Prisma S builds on our research, but also research that others have done. 
If you came in a little early, you heard me talking to Elaine about some of the researchers who have worked in this area. Essentially, estimates of true search reproducibility range from about 0.004 percent to maybe 14 percent of systematic reviews that can actually be reproduced. The smaller number was from back in about 2008. You can see it could increase up to 14 percent, but that's still very, very few systematic reviews, and this is all in the medicine and health fields. In other fields, it's actually much worse. More importantly, too, Prisma S builds on the Prisma reporting guidelines. Prisma is a guideline that tells you how to report all the different aspects of what you're doing in your systematic review, from how you write the title, to registering a protocol, to all the different kinds of things that you've talked about over the past couple of weeks. One of the things it has done is list a few items related to the search strategy components. But it didn't give enough guidance for people who were new to systematic review methodology, for example, to accurately figure out what they needed to do. So we developed Prisma S. What is Prisma S? It's a checklist of 16 different items that you would want to make sure are accurately reported in your manuscript, and then it has an explanation and elaboration document. So I am going to quickly jump to what that looks like. This is the checklist of 16 different items. It focuses on several different areas: information sources and methods, search strategies, peer review, and managing records. All of these items are designed to address either reproducibility or transparency, and generally both. I'm not going to go through every single one of these items; that would take us way too long. But I'm going to talk about four of them momentarily. 
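As a rough illustration of how a review team might track that checklist, here is a minimal Python sketch. The item numbers match Prisma S, but the short labels and the data layout are my own paraphrases, not the official wording:

```python
# Hypothetical sketch: tracking Prisma S checklist completion for a manuscript.
# Only the four items discussed in this talk are spelled out; labels are paraphrased.

big_four = {
    1: "Name each database searched, with its platform",
    8: "Copy and paste each full search strategy exactly as run",
    9: "State that no limits were used, or describe and justify each limit",
    13: "Report the date each search was last run",
}

# Example status for a draft manuscript (illustrative values).
completed = {1: True, 8: True, 9: False, 13: True}

def outstanding(items, done):
    """List checklist items not yet satisfied in the manuscript."""
    return [f"Item {n}: {desc}" for n, desc in sorted(items.items()) if not done.get(n)]
```

A team could run `outstanding(big_four, completed)` before submission to see which items still need attention.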
So the checklist is something you can work with and submit along with your manuscript, to make sure people know that you have done the work of making your search reproducible. I'm going to click on this and show you what the explanation and elaboration looks like. This is a very long document. What it does is give you the item, which is this top component right here; then an example from the published literature of a really well done systematic review that has reported something well; then an explanation of why that particular thing is important to report; as well as a suggested location for reporting, which is down at the bottom. The suggested location for reporting may say that something should be reported in the abstract, in the manuscript, and in the supplementary files, or it may be only in the supplementary materials, and so on. So it's just to give guidance on how you would actually go about doing that, with a lot more detail than can happen in the checklist. I'm going to switch back to my presentation. Okay, so as I mentioned, I'm going to talk about just four items from that list. I'm going to call them the big four, but really what they are is the bare minimum that is necessary if you're going to be reporting a database search as a component of your systematic review. This doesn't get into things like grey literature reporting; all of that is in the checklist, but I'm not going to cover it today. We're just going to go over these four. The first one, item one, is to name each individual database searched, stating the platform for each. And I'll get into that in a minute. Item eight: include search strategies for each database and information source, copied and pasted exactly as run. And item nine: specify that no limits were used, or describe any limits or restrictions applied to the search. 
For example, date or time period, language, study design, et cetera, and provide justification for their use. And item 13: for each search strategy, provide the date when the last search occurred. These are the four that I think are the most essential, again, for standard database-only components of a systematic review. So what does that look like? For item number one, name each individual database searched, stating the platform for each. When we talk about a database, we're talking about a database like Medline, for example. Medline is available to be searched on multiple different platforms, and depending on what platform you're using, your search strategy is going to be significantly different. So, just for Medline alone: you can search Medline through the Web of Science interface. You can search Medline through PubMed. You can search PubMed itself, which includes Medline but is not synonymous with Medline. You can search Ovid, you can search EBSCO, you can search all sorts of different platforms, every single one of which you have to search differently in order to achieve the results you are looking for. And each one of these platforms will search the material differently, so you'll get different results in each of those platforms as well. It doesn't seem like you should, but that is in fact the truth: you won't be able to get the same number of results in each platform. So knowing what the platform is is critical to being able to figure out if you can actually reproduce a search. In this good example here, this particular author did an excellent job of reporting all of the databases and the platforms. They searched Ovid Medline with all of the different components, so you know exactly what slice of Medline, via Ovid, which is the platform. They searched Embase via Elsevier's Embase.com, and there are other versions of Embase that you could theoretically use. 
They searched CINAHL Plus, which is a specific database from EBSCO; Cochrane Central through the Cochrane Library, which is part of Wiley; and Scopus via Elsevier. This one was done extremely well, and there are very few actual examples where you can find things done as well as this particular one. For the bad: unfortunately, with this one, if you were a casual reader, you wouldn't necessarily know that there might be a problem here, but there is, and it's namely around the listing for Embase. Again, you don't know which platform that is. PubMed and Scopus are both only available on one platform. The Cochrane Library is actually available on multiple platforms as well, and they don't list that here. So that's the problem with this one. And it looks like Elaine would like me to pause. Elaine, do you want to just take over and show where the databases are? Yeah, thank you. I'm just going to share my screen here. This is the Matas Library homepage, and this is where you begin your search to find those databases, but I think it needs a little bit of instruction to know where they are. So you would select Resources here, and we've got several tabs that you'll find them on. Here's Medline, which is searchable through PubMed, and it looks like that. Here is Medline, which is searchable through another platform named Ovid, which, you can see, looks entirely different. Also, I want to point out in Ovid that — Melissa, are you going to talk about registering protocols? Okay. Well, Ovid being a platform, we license several different databases in Ovid, and it defaults to Medline, but you can change them, and then you'll see all these other databases. One that's used where people register protocols, which you've heard about on our previous webinars, is right here: the JBI Evidence-Based Practice Database. And then you'll see these others; Health Star is a good one for health policy, for those of you in the School of Public Health. 
Here is — I'm sorry. Go on, just a second. If I could also throw in there: you'll see in the databases that Elaine pointed out, these Ovid Health Star ones in particular, that each of those has a different date range after it. For Ovid databases, it's critical that you list what those dates are, because that tells you which slice of a particular database you're searching, and within the Ovid platform, there are many, many different options for each of these databases. Thank you, Elaine. Yeah, and I just want to point out that in the example of the good search that Melissa used, you see, it's not what this defaulted to. It's one of these with the e-pub ahead of print, so you do have to be thoughtful when you're selecting your databases, and that's why it's always a good idea to ask for help from the librarian, so that you can become aware of these. Here are other databases, and don't ask me why they're different — we are redesigning this — but these are other databases that are typically used in public health research. So I just want to point out how to get to them. Specifically, from our homepage, I have this public health cross search here, and that is a search across about eight different databases. Through a survey I do, which asks the faculty members what journals are most important for your area of research, I track those journals back down to the databases they are found in. So this is another way to find databases. So that's all I wanted to say, Melissa. Sorry. Oh, no worries. Thank you for doing that. That actually was a very good demonstration of how complicated it is sometimes to even figure out the name of the database, because each one is done a little bit differently, and, like with Ovid, you have to worry about the date ranges as part of the name of the database. So that was a perfect segue there. 
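To make the database-plus-platform point concrete, here is a hedged Python sketch of recording each information source as structured data and catching the common mistake of naming a platform as if it were a database. The field names and sample entries are illustrative assumptions, not part of Prisma S:

```python
# Hypothetical sketch: each information source recorded with database, platform,
# and date coverage, so Prisma S item 1 is never left ambiguous.

REQUIRED_FIELDS = {"database", "platform", "date_coverage"}

sources = [
    {"database": "MEDLINE ALL", "platform": "Ovid", "date_coverage": "1946 to present"},
    {"database": "Embase", "platform": "Embase.com (Elsevier)", "date_coverage": "1947 to present"},
    {"database": "CINAHL Plus", "platform": "EBSCOhost", "date_coverage": "1937 to present"},
]

def check_sources(entries):
    """Return a list of problems: missing fields, or a platform named as a database."""
    # Platforms that host many databases; naming one alone is not reproducible.
    platforms_not_databases = {"EBSCOhost", "ProQuest", "Ovid", "Web of Science"}
    problems = []
    for e in entries:
        missing = REQUIRED_FIELDS - e.keys()
        if missing:
            problems.append(f"{e.get('database', '?')}: missing {sorted(missing)}")
        if e.get("database") in platforms_not_databases:
            problems.append(f"{e['database']} is a platform, not a database")
    return problems
```

Running `check_sources` on a methods section recorded this way would flag, for instance, an entry that lists only "ProQuest" with no underlying database.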
I'm going to now go back to sharing my screen; be a little patient. Here we go. Okay. So again, as Elaine mentioned, this particular search used Ovid Medline 1946 to present, et cetera, et cetera. That is an important component to list, because it tells you, in this particular one, that they're searching for things that are published ahead of print, not just things that are indexed in Medline as of yet, and that's a big distinction to make. So that's why, again, it's very important to be very, very specific about your database. Okay. So now we get to the ugly. This is unfortunately quite common. And I'm just going to see if anyone would like to put in chat why they think that I think this is awful. Any guesses? Or feel free to unmute yourself as well. Okay. Someone says no detail. Yes, there's definitely no detail. Searching preprints is okay as long as you are saying what the preprint server is, so that part I have less of an issue with, but we can discuss that later if you'd like to come back to it. Any other thoughts? Okay. So the big one is that they've listed several things that are not databases. Yes, they are search engines and not databases — they're platforms and not databases, essentially. EBSCOhost and ProQuest are both platforms. On EBSCOhost alone, you can search something like 800 different databases; on ProQuest, dozens and dozens of different databases. And so if you say that you searched EBSCOhost or ProQuest, we have no idea what you actually searched. So again, this is something that unfortunately we see a lot, and it means instantaneously that this search is not reproducible, and it's certainly not trustworthy. Okay. So the second one I'm going to talk about is item nine: specify that no limits were used, or describe any limits or restrictions applied to a search, such as date or time period, language, study design, et cetera, and provide justification for their use. 
Limits are something that are actually built into databases. This isn't saying that as part of your search you would put in your exclusion criteria; this is saying, what did you actually do within a database to change what the results are? Really common ones are the ones that are listed, like date. Let's say that you had a specific intervention: you wanted to test whether or not this medical device, for example, was better or worse than another device that was invented on another date. You could say that you are limiting to a certain date range within your database search for a reason such as that the device was not invented until that specific date, for example. Other ones that you'll see quite often are language and study design. From my perspective, and the perspective of many systematic reviewers, you should never actually use any kind of limit at all within a database search. But unfortunately, it does happen. And if it does happen, then that's when you need to say that it happened and provide justification for the use. And if you were a systematic reviewer who was trying to have as unbiased a search as possible, then you would say that you applied no limits to your search. So in this particular example here, what I'm terming a good example, they're saying that they're excluding these different study designs by excluding publication types, which is a limit that you can put into many different databases. They also say that they included all languages, and that they removed non-English results during the review process. And that's a really important distinction to make, because some people will say that they limit to English, and some people will say that their inclusion criteria is English. 
And sometimes it's not entirely clear whether the limit was applied in the database, which makes a big difference to the results that you get, or whether they applied it afterwards, by hand, looking at all of the different results they got. So this one is very clearly written. Then they also say that, to improve specificity, the updated search — that was the search they did several months down the road to catch anything that had come up in the interim — was limited to human participants. Again, you should never limit to human participants. But at least they said that they did it, and they said why. The "to improve specificity" here is the only component where they actually wrote a justification, but at least they had some sort of justification. It would have been very helpful if they had written a justification for the publication types as well; they didn't need one for the language, because they didn't limit by language. So the bad here: the bad is when it's not mentioned at all, especially when it's not mentioned in the manuscript. Sometimes when you're looking at a search strategy that gets published with the manuscript, you'll go in and you'll say, well, they didn't mention that they limited to English and limited to 1995 to present. Or they didn't mention that they limited to randomized controlled trials, which is a terrible practice. And so you have to really look at some of those details in order to figure that out. So it is always good to do the explanation of your limits in the manuscript, as well as showing what you did in the full search strategy. Really ugly is when it's really confusingly written. This particular one says a literature search was conducted in relevant databases, including these, published between January 1, 2009, and August 1, 2019. So this one is sort of mind-boggling as to what they actually did here. Did they actually use a limit for those different time periods? 
What's the difference between when something was published and when it entered the database? There are a lot of confusing things that just the wording of this leaves open. You don't know why they chose this time period. You don't know for sure how they chose the time period. Did they apply it by looking through the results by hand? Or did they do it automatically, using limits within the database? And the justification is definitely missing. Especially with a time limit like that, you really need to find a justification. Sometimes, frankly, people's limits can in no way be justified, and they know it; it's really just that they don't want to look through very many results. And so the justifications are conspicuously missing, which is probably the case in this particular article. All right, so the next one I wanted to look over is item 13: for each search strategy, provide the date when the last search occurred. It is surprisingly difficult to actually find examples where this is done well. You wouldn't think that this would be a challenge, but it can be. The reason why you want to have the date when the last search occurred is that someone who comes along in 10 years and wants to update your search can search back to the date that something was entered into a database and then go and search from there. Or sometimes people will go back a year or two years just to ensure that they are getting all of the results, but they're not redoing all of your work. And that's just really nice to people in the future. But it also enables people who want to reproduce your results — to check that you actually reported things accurately — to have the time frame they need to look at to see whether or not your results match. And so it's really helpful to be super specific about this. It's very easy: all databases were searched September 13th, 2019. We did the journal table of contents searches on October 10th. 
And then we did a final updated search on March 23rd. Very simple. Now, when we look at examples where it's not as well done, we have things like this one. This top one is the exact same example from the previous slide. We don't know, for example: did they search on August 1st? Or was it limited to August 1st? It's very unclear. So again, having that search date is helpful in making sure that it's incredibly transparent, so that people can reproduce what you're doing. This other one below, "the literature search was performed between November 1st and January 15th" — well, that's a pretty big range and would actually make a considerable difference. Also, which database was searched on which date? It would be okay to have this in a manuscript if you then had all of the dates in your supplementary file, so that people could say, you know, for Ovid Medline, I searched it on November 1st; for Embase via Embase.com, I searched it on December 15th; et cetera. That would be okay. But as is, it's currently not. One thing that I find incredibly annoying, and so I put it in the ugly category, is one that I mentioned a little bit earlier in my talk, which is when people say things like they searched these databases from the earliest record to a certain date. Well, does that mean that they searched on that date? Does it mean that they searched up until that date? In this case, I'm guessing that they searched on that date. But really what they're trying to say is that they searched everything published up to that date — and things don't necessarily get entered into the database at that point. So it's very, very finicky to sometimes try to figure out what the authors are actually trying to say. Clarity is good, and this one is like the simplest one to actually be clear about. So be clear about it, is what I recommend. Okay, and then this next one is the one that tends to be the most well reported these days. 
But it used to be one of the worst reported, which is to have search strategies published with articles. Back in the day of the original Prisma statement, which came out in 2009, and prior to that another reporting guideline called QUOROM that it was based upon, journals had these things called print versions, and it cost them money to publish in print, so they had things like page restrictions. But when things started moving online more and more, journals started introducing supplementary files. They started being able to accommodate more information in a manuscript. And so now with Prisma S, as well as with the latest version of Prisma, Prisma 2020, which was released as a preprint a couple of months ago, we are now recommending that all search strategies be included. So not just one, which Prisma and QUOROM recommended, but every single search strategy that you run — whether it's in a database, an online journal, Google, whatever — include every single strategy that you have. And the important part, too, is to have it be copied and pasted exactly as run. So I'm going to first show you the bad one here. This is probably a little tiny on your screens. This one is bad because it doesn't actually have the search strategy copied and pasted as run. What it has is a couple of terms thrown into a table, and it has no information about how they were combined — like, what kind of Boolean logic combined them. It doesn't give any indication of whether they were searched in different kinds of fields. We have no idea, for example, what this plus means right before "community engagement." There's a comma — I'm pretty sure there's no database that uses commas, except for Embase using them as part of field codes, et cetera. It's not clear what they did. They just threw in a bunch of search terms. It's totally useless information. Now, a good one, on the other hand, might look like this. Let me see if I can quickly pull one up. 
This is an example of a supplementary file that includes everything you would need to include in a supplementary file for a database search. This particular one starts with the Ovid Medline search. They have the date range of the particular Ovid file that they're using, and they have the search strategy copied and pasted exactly. So you know exactly what they did, including the lines where they got zero results, like in this particular case. If we keep scrolling down, you see it's a lovely, beautiful search, fully reported. You can see exactly at the end how many results this particular search got. And then they go on to their next database. Then you keep scrolling, they go on to their next database. Then you keep scrolling, and you get their next one, and so on. And so they have reported beautifully every single search that they did. It makes it incredibly easy for us as readers to reproduce it and to examine what they did, so that we can assess whether or not we think it was any good. This supplement is particularly lovely, I'm not going to lie. So then the last search is the PubMed supplemental search, and you can see in the search that they even have the filters listed that they used when they were searching PubMed. So it's a particularly great one, and beautifully laid out as well. Okay. So that was good; this was bad. Now for ugly. This is just really, really ugly. They have the databases that they "scanned" for the words emergency and abdominal and surgery and geriatrics or elderly and mortality and comorbidity. And I'm not quite sure what they mean. What does that mean? Because you really can't scan Medline — Medline has millions of records in it. You definitely can't scan Google Scholar, because you can only see up to a thousand records. So it's really not entirely clear what they did here. Are those just some random keywords that they threw in? 
Did they actually do a search? It's really just not clear. So again, that transparency component is just super important. And I'm not going to share any of the sources, to protect the innocent and to not condemn the guilty either. So, FYI, I purposely don't have any citations in here. So now let's play a little game of "your turn: the good, the bad, and the ugly." I hope that you all know that this is Clint Eastwood — he's the man with no name in the movie The Good, the Bad and the Ugly, and this is the end scene in the cemetery. And yes, I did get every single example directly from the published literature. So they're all out there in their glory, and it's really not that hard to find them. For most of these items, it's harder to find good examples than bad. Okay, so are we all set? Okay, so one question: if we also search Google Scholar in addition to specific databases, what's the best practice for documenting that? Again, you copy and paste that search strategy, because you will have one, right? Or, if you don't just use a search strategy but do other methods in Google Scholar, you would need to document what you did. And so I would look at the Prisma S explanation and elaboration, at some of the things around other sources in there. There are quite a few good examples of Google Scholar searching that could be helpful for you in trying to figure out how you want to report it. Most of the time people use Google Scholar as a database, and so it just needs to be reported in the same way. What's the date that you did it? What did you do for your search? Just copy and paste that search in. And what is the database name — which is Google Scholar — and, if you want, you can provide a link to it as the platform. And then, if you used any limits — and Google Scholar does actually have limits available — you would say that. If you didn't, then you would say that you used no limits. 
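Putting the big four together for a source like Google Scholar, a per-source search log entry might be sketched like this in Python. The field names, the example strategy string, and the dates are hypothetical, not drawn from any real review:

```python
# Hypothetical sketch: one log entry per information source, covering the
# "big four" reporting items (database/platform, full strategy, limits, date).
from datetime import date

search_log = [
    {
        "database": "Google Scholar",
        "platform": "scholar.google.com",
        "strategy": '"community engagement" AND ("public health" OR "health promotion")',
        "limits": "none",
        "justification": None,          # only required when limits are applied
        "date_searched": date(2019, 9, 13),
    },
]

def reportable(entry):
    """An entry is reportable when strategy and date are recorded, and any
    applied limit comes with a justification."""
    if not entry.get("strategy") or not entry.get("date_searched"):
        return False
    if entry["limits"] != "none" and not entry.get("justification"):
        return False
    return True
```

An entry that limited to English without saying why would fail this check, mirroring the item 9 requirement that every limit carry a justification.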
Elaine asked: if you can only see a thousand records in Google Scholar, is it a good database? Yes, it is. One of my colleagues and I did research looking at unique records found in different databases, and Google Scholar is one that does produce unique references. So yes, I would recommend searching it; just make sure you document it properly. Okay. Any other last questions before I go on? All right, so this is your turn. Feel free to type in the chat if you feel more comfortable, or unmute yourself, and we'll go forward. So here we go. Is this search reporting the good, the bad, or the ugly? I'll read it out loud as well: "We searched PubMed, Embase, Cochrane, Web of Science Plus, Global Health, CINAHL, and PsycINFO for studies regarding et cetera, et cetera. The search strategy included key terms and medical subject headings for these subjects as defined by the World Bank. A sample of the search strategy is provided. A research librarian supported the development of the search strategy to maximize results. The search was conducted February through May 2019, and the resulting articles were imported into RefWorks and de-duplicated." Okay, so what do we think? Good, bad, ugly, and why? Thank you, someone, for saying good. Any thoughts on why you think it's good, or what you think is good about it? Okay, someone else says bad: it's very ambiguous, not reproducible. So we have some conflicting opinions here. I'm going to show you the things I picked out, and then hopefully that will get you inspired for some of the other ones going forward. These are the things that I looked at. Earlier we said the first item is the database with a platform. So we look here. We see PubMed. PubMed is a singular database; you can't really search it anywhere else, so that one's okay. Then Embase: there's no platform listed here. Not good. Cochrane: no clue what that means. Is it Cochrane Central?
Is it the Cochrane Database of Systematic Reviews? Is it the Cochrane Library? Is it one of the Cochrane methods databases? You just don't know. So very unclear, very ambiguous. Web of Science Plus: Web of Science is not a database, Web of Science is a platform. Bad. Global Health: what database is this? Is it through a specific platform? I think I can guess which one this is, but I don't know, because they don't give me enough information. CINAHL and PsycINFO: both of those, again, can be and have been in the past searched on different platforms. Right now, CINAHL is only available on one platform, but it still should be listed here. The other thing I noticed was that it said "sample of the search strategy." That is an immediate red flag for me. It should be all of the search strategies, copied and pasted exactly as run. If it's a sample of the search strategy, first of all, you're not going to get the one for all of the different databases; but probably most importantly, you don't know if it's actually the one that they ran, or if it's just some random one that they made up. So it's really not clear what you're getting when you see a sample of a search strategy. And lastly, one of the things that we had talked about was the date. This one says the search was conducted February through May of 2019. It could be worse, honestly; at least they have a date range listed for when they conducted the search, so it's slightly less ambiguous. It just would be nice to actually have the exact dates. So I would say that there's a little bit of good, a little bit of bad, but largely this one is pretty ambiguous. The man with no name says bad. Any questions about that? Okay, not seeing any questions. Let's go on to the good, the bad, the ugly, number two. All right.
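As an aside on the quiz passage above, which mentioned importing the resulting articles into a reference manager and de-duplicating: reference managers do this for you, but the idea can be sketched in a few lines of Python. The title-normalization rule and the record shape here are hypothetical; real tools also match on DOI, authors, and year:

```python
import re

def normalize(title: str) -> str:
    """Lowercase and collapse punctuation/whitespace so trivial formatting
    differences between databases don't hide duplicates."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    """Keep the first record seen for each normalized title."""
    seen, unique = set(), []
    for rec in records:
        key = normalize(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"title": "Emergency abdominal surgery in the elderly.", "source": "MEDLINE"},
    {"title": "Emergency Abdominal Surgery in the Elderly", "source": "Embase"},
    {"title": "Comorbidity and mortality after surgery", "source": "CINAHL"},
]
print(len(deduplicate(records)))  # → 2: the first two titles are duplicates
```

Whatever tool actually does the de-duplication, the reporting point stands: name it, just as you name the databases and platforms.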
"The databases utilized were chosen for their coverage and included the following databases: ERIC, Academic Search Premier, SPORTDiscus, Psychology and Behavioral Sciences Collection, PsycINFO, Education Source, Social Sciences Citation Index, Cochrane Database of Systematic Reviews, and Education ProQuest. The systematic search (Table 1" — spoiler alert, it's the table I showed you earlier — "was conducted in 2017 and included x, y, z terms." Okay, what do we think about this one? I popped in quite late, so I'm not sure if this is even on point, but I wonder, with how they wrote "school health search terms" and "community engagement specific search terms" instead of just listing the terms, should you be including the actual terms themselves? That's a great question, and this is something that comes up all the time. When we were creating PRISMA-S, we asked for every single possible piece of information that people thought might be necessary for reproducibility, and one thing that did not come up was anything about the narrative description. In fact, we as authors strongly feel that putting example search terms in is actually detrimental to reproducibility. So I'm really glad that you mentioned that, because that part is not as descriptive as you sometimes might see, but surprisingly, that doesn't really have an effect on the reproducibility. If the systematic search that was in Table 1 was actually their search, and not just a bunch of random terms, then this would be much better. So thank you for volunteering something; I appreciate that. Thanks. Okay, so we have a couple of other comments: no date for the search beyond the year. Yes, thank you, Elaine, that is correct. "It was conducted in 2017" is super unhelpful. And there's no specification of how things were combined, which we had seen in the table earlier. And then someone asked, is Education ProQuest a platform? I'm actually not sure what Education ProQuest is.
It's one I would have to go look up. ProQuest is definitely a platform; whether or not they have a database that's just called Education, or Education ProQuest, I'm actually not sure. I'm not familiar with it. I did actually look up the one in the middle, Psychology and Behavioral Sciences Collection, and interestingly enough, what I think it is, is a partial subset of Academic Search Premier. So they really didn't need to search both of them, because they're already searching Academic Search Premier. But they didn't list the platforms for any of these. ERIC you can search in multiple places. Academic Search Premier is only in one place, so that one you could sort of guess. SPORTDiscus you can search in multiple places; likewise PsycINFO. Education Source: no platform listed. Social Sciences Citation Index you can really only search on one platform these days, but it still would be helpful if they listed it. The Cochrane Database of Systematic Reviews you can search on at least two different platforms. So again, they didn't really get that part well done. These are the different areas that I noticed: the lack of listing of the platforms, no specific date, and the fact that their search was not copied and pasted. And in this one, and actually the previous one too, neither of them described anything having to do with limits. In the limits area, you really need to specify that you aren't using limits if you don't use them. So the man with no name on this one says ugly; it's even more ambiguous than the first one by a long shot. Okay, how about this one? The good, the bad, or the ugly: "We searched Medline, Medline In-Process, Embase, and PsycINFO, all via Ovid; Web of Science (Thomson Reuters); CDSR and CENTRAL via the Cochrane Library; and CINAHL via EBSCO, from 2004 to January 2017. The additional file has the Medline search strategy."
"Literature prior to 2004 was identified via the Health Technology Assessment by the Aberdeen Health Technology Assessment Group, which this research project was commissioned to update." Any thoughts on this one? Okay, I'm not seeing any typing. So this one: they did a really good job of listing the platforms along with the databases, with one exception, which is that Web of Science is not actually a database, it's a platform. They should have listed which databases they searched within Web of Science, not just the platform. They did this "searching up to January 2017" thing; again, not clear. You don't know if that was a date limit that they put in, or if that was the date that they searched. It's very unclear based on how they wrote it. They do have a Medline search strategy that they included, and it doesn't say it's a sample; as a spoiler, I will say that one was actually fine. And then they also say how they found literature prior to 2004, so that's the justification for why they had the limit of 2004 to begin with. So this one actually used limits, and it had a justification for why they used a limit, which was that they used another source to find the earlier literature. They did a relatively okay job, with a few exceptions. It's still bad, because you still can't tell when they searched, but it's definitely approaching much better. All right, so here we go; here's another one. "The following databases were searched systematically. The search strategy took the following form. The searches were not limited by language. They were run from database inception. The approach to study identification is transparently reported in supplementary material." Any thoughts on this one? No one willing to commit themselves, I see. Okay, so it's kind of the same thing. This one actually gave a time frame for when things were searched, but it didn't give an exact date. But it's closer. It's closer.
They did a good job of listing all of the different platforms, with the exception that, again, they said they searched Web of Science, which is not a database, it's a platform. That one gets a lot of people, so it's one to watch for very carefully when you're working on your own systematic reviews. They specifically say that they did not limit by language or date, and they do say that they transparently reported things in their appendix; and I believe in this case they actually did. So this one, I would say it's... well, I wouldn't say it; the man with no name would say it's good. All right. So the question is, what databases should we cite when Web of Science is used? You should cite whichever databases you have access to. And Elaine, perhaps you in the background could pull up your version of Web of Science, and we can get back to that in just a second. It's a good question, thank you. All right, so one more: the good, the bad, or the ugly. This one is a supplement I just wanted to show you. What do we think about this one? I can scroll down a little bit; this one goes on and on. This is actually the supplement that the last document was referencing. Melissa, I'm going to break in here just to say that I've got to go to a physical therapy appointment. I've left you all the links in the chat. All of these links can be found on our systematic review guide. And, Melissa, maybe you'll look at it sometime and make sure that the things you referenced are in that guide. I just want to thank you all for participating, and if you want to offer a testimonial for our next series, I'd love it if you would write me a sentence and send it to me. And, Melissa, I'm going to hop off; I'm going to leave the Zoom on. Have a great day. Thanks very much. I'll just close by saying one thing, which is that that was one of the best reported searches that you'll ever see.
And so though they had said in their paragraph that they searched in December of 2015, in the supplement they actually reported the exact date for each of the databases. They listed the host for each of the databases as well. Their one error was the Web of Science component, which I'm going to quickly show you right now. Give me a second. All right. So every single institution that has Web of Science has a different version of Web of Science, and they also have a different version of what's called the Web of Science Core Collection; it depends on what your library purchases. So I'm going to show you what my library has, and then you can go and try to figure out what your library has. I'm going to go to "find all databases." I'm clicking on Web of Science, which is a gateway to different databases. Right now, the database it says it's searching is the Web of Science Core Collection. If I go under settings while searching the Web of Science Core Collection, it will tell me the different databases that are in my version of the Core Collection. So I've got four different databases that I'm searching, and these are the date ranges that my library has paid for as part of our license. Go to some place like Harvard, maybe even Tulane, and they might have twenty-five different databases listed here; another place might have one. Yeah, please. Let me just quickly show you ours. Can you see my screen? Nope. Nope. Okay. I thought you were gone. I'm sorry, Elaine. No, no, that's okay. So I'm going to go back and show you how I found this. Here is our home page, and here you select Resources, and it goes to Find Articles, then Web of Science. And now to see what Melissa showed us: first of all, you see "selected database."
And this is a key thing: one thing I had to learn in library school was to read these web pages, because what you are looking for is not always what's on display. So here are the databases, and you see for each one it explains what they are. This is comparable to what Melissa was showing us; these are the core collections. But you'll see that the dates were actually different between our two libraries. Okay, well, now I'm going to go to my appointment. Okay, well, thank you for showing us that. The differences were not quite so dramatic between our two libraries, but they can be very dramatic. So you always want to make sure that you go into the settings and that you copy and paste all of those different databases; they're pretty key components. I am going to quickly share my screen again, because I want to make sure that you have my information, and then I'm going to thank you all and ask if there are any additional questions. Okay, seeing none. All right, well, thanks all very much. I appreciate your time, and good luck reporting your searches for reproducibility and transparency.