All right. Thanks, Kate. And our final presentation today will be by Sarah Young from Carnegie Mellon University Libraries and Eliza Grames from the University of Nevada, Reno. Their presentation is titled, Increasing Access to Evidence Synthesis Methods Through Tool Development and Capacity Building. Great. Thanks so much, Trevor. So as Trevor mentioned, I'm Sarah Young. I'm a Social Sciences Librarian based at Carnegie Mellon University in Pittsburgh. And my co-presenter is Eliza. Introduce yourself if you'd like. Yeah, I'm Eliza Grames. I'm a postdoc at the University of Nevada, Reno, where I'm working on insect and bird conservation and climate change. Great. Thanks, Eliza. So as a librarian, I support quite a few evidence synthesis projects across a lot of disciplines. And today we're going to be talking about increasing access to evidence synthesis methods through tool development and capacity building. So next slide. So tools and methods, of course, can be built and developed for any step of the evidence synthesis process, but this presentation is really going to focus on the information retrieval phase of this kind of work. And information retrieval is really a foundational step that is essential to pretty much any type of evidence synthesis, from systematic reviews to evidence and gap maps, scoping reviews, et cetera. So as a librarian and an information specialist myself, this is also the step that I'm typically most involved in when I support or conduct evidence synthesis projects. Next slide. So effective information retrieval really lays the foundation for all subsequent steps in evidence synthesis. It's the search that forms the initial set of studies from which the included studies in a review are ultimately drawn, so it's really critical to the conclusions and recommendations that result from a review. Next slide.
So there are a lot of challenges involved in the information retrieval phase that any searcher encounters in pretty much any evidence synthesis project, and the decisions made around these challenges can really impact review findings and potentially bias the eventual included studies in a review. We know there's a general lack of standardized terminology in most fields, so you can't simply rely on searching for one or two commonly used terms; you really need to include many possible terms to account for the variation in the ways researchers in different disciplinary, geographic, or linguistic contexts refer to a concept. Determining where to run searches, such as which databases, websites, or search engines to use, is also important and could, for example, impact the geographic comprehensiveness of a search. And searching for these terms not just in English but in other languages can help mitigate the language bias in the published literature, which can be really challenging if the person designing a search does not have fluency in a language that might be important to a particular review. So how do we potentially address these challenges with innovations in methods and technology? I'll let Eliza take it from here. So I think this whole session is really about tools and methods and what the challenges are. And so we can put together this conceptual diagram to ask, you know, whether these challenges require a new method or a tool. In this case the challenge is choosing search terms. And it's not clear whether there are methods out there for this; like Elka described, a lot of the work that they did was with text mining, but that's not a specific method for choosing search terms, which is part of the challenge. And so when we're thinking about how to best choose search terms for a review, we really need a conceptual solution before we can move on to making methods or code.
And that's where I kind of started from at one point during my dissertation, trying to figure out: what is the solution to this problem? How do we choose search terms in a way that's quick, objective, and reproducible? So I'm going to just briefly present an example here, and to give you a little bit of context so that the words make sense, I'm going to talk about birds in fragmented forests. So on the left here, this is a larger forest, and this is a smaller forest. It's fragmented because they're separated by what is probably a power line. And this is a bird. They would prefer to be in the large forest, not the small forest. And so if I want to find a bunch of studies related to this topic, there's a whole bunch of different terms that I could possibly search for. Thinking about this conceptually, what you could do is go out and search for some words related to it. So I could say, okay, this is a bird, it's sensitive to the area of a patch, it doesn't like habitat fragmentation, so I could search for those words and get back a bunch of articles that use those terms. But I know that's not all of the terms people could use to describe this topic. And so what I can do then is go through all of the titles, abstracts, and keywords for those articles and pull out potentially good search terms. And so that's what's highlighted in yellow here: I could go through and say, okay, patch openness, that might be related; apparent area sensitivity; here we have larger habitat patches; patch size. Those aren't terms that I searched for initially, but they're related to the topic. And so we kind of need a way to go through all of these articles and pull back what those related terms are. So that's the conceptual framework. So it's like, okay, this will work, but it's not really a way to implement it. And so with that conceptual solution, we still need, you know, methods to do this, and then code to automate it.
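The workflow Eliza describes here, running a naive search and then mining the retrieved titles and abstracts for additional candidate terms, can be sketched in a few lines of Python. This is an illustrative simplification, not litsearchr's actual implementation; the function names, the toy abstracts, and the "appears in at least two documents" threshold are all invented for the example:

```python
from collections import Counter
import re

def ngrams(text, n):
    """All runs of n consecutive words in a lowercased text."""
    words = re.findall(r"[a-z]+", text.lower())
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def candidate_terms(abstracts, min_docs=2, max_n=2):
    """Suggest phrases (1- and 2-grams here) that occur in at least
    `min_docs` different abstracts as candidate search terms."""
    doc_freq = Counter()
    for abstract in abstracts:
        seen = set()  # count each term once per document
        for n in range(1, max_n + 1):
            seen.update(ngrams(abstract, n))
        doc_freq.update(seen)
    return {t for t, c in doc_freq.items() if c >= min_docs}

# Toy abstracts standing in for the naive search results.
abstracts = [
    "Patch size limits bird occupancy in fragmented forest",
    "Habitat fragmentation reduces occupancy as patch size shrinks",
    "Forest birds avoid small habitat patches after fragmentation",
]
terms = candidate_terms(abstracts)
# "patch size", "occupancy", and "fragmentation" recur across
# documents and surface as candidates, even though a naive search
# might not have included all of them.
```

A real pipeline would also strip stopwords and filter by relevance, which is exactly the problem the network method in the next step addresses.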
And so the method that I decided to use is a keyword co-occurrence network. Each keyword, which are those things that I highlighted in yellow in the abstracts, is one of these circles, and each line between them is a co-occurrence. If two words appear in the same abstract, they get a line between them, and that means that they co-occurred. And then we can use various ways to find a cutoff and take just these terms in the center of the network that are the most important. Okay, now we have a conceptual solution, a method, and a whole bunch of really, really messy code that I wrote that is probably somewhere in a GitHub archive and isn't really easy for anyone but me to use, because you kind of have to read my mind: I didn't put comments in it, and none of the names of the functions make sense. And so it's not really that easy for other people to access this, even though, like, okay guys, I have a solution. And so that's kind of the next step in making evidence synthesis methods easier to access: turning them into an R package, or a different type of tool in a different language. Hence the R package litsearchr. I'm not going to go into all the details, but essentially it's an R package that implements that solution for finding search terms. And the way it works is it does the whole conceptual process that I just described: you can import search results, and then remove the duplicate articles using the R package synthesisr. And then it pulls out all those terms and suggests them. So, based on the methods that it uses, it says, you know, here are some possible terms you can consider. You can go through and manually group those, so for the first three I was like, okay, yep, that's a bird, that's a bird, that's a bird; this one, no, I'm not going to use it, it's too broad; and so on. And so you get back all of these terms, and then you can feed them back into litsearchr, and it will write a full Boolean search.
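A minimal sketch of the keyword co-occurrence network idea: each keyword is a node, two keywords share an edge when they appear in the same abstract, and a node's "strength" (the sum of its edge weights) stands in for the more sophisticated importance measures and cutoff-finding methods a real implementation offers. This is an assumption-laden toy, not litsearchr's code; all names and the cutoff value are invented:

```python
from collections import defaultdict
from itertools import combinations

def cooccurrence_strength(keyword_sets):
    """Build a keyword co-occurrence network and return each node's
    strength (sum of the weights of the edges attached to it)."""
    weight = defaultdict(int)  # (term_a, term_b) -> co-occurrence count
    for kws in keyword_sets:
        for a, b in combinations(sorted(set(kws)), 2):
            weight[(a, b)] += 1
    strength = defaultdict(int)
    for (a, b), w in weight.items():
        strength[a] += w
        strength[b] += w
    return dict(strength)

def important_terms(keyword_sets, cutoff):
    """Keep only terms whose strength meets the cutoff -- the terms
    'in the center of the network'."""
    s = cooccurrence_strength(keyword_sets)
    return {t for t, v in s.items() if v >= cutoff}

# Keywords extracted from three toy abstracts.
docs = [
    {"patch size", "fragmentation", "occupancy"},
    {"patch size", "fragmentation", "edge effects"},
    {"patch size", "occupancy"},
]
core = important_terms(docs, cutoff=3)
# "edge effects" appears only once and sits at the network's edge,
# so it falls below the cutoff and is not suggested.
```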
Because deciding where to truncate, when to truncate, and what to put in quotes can take a while, and any typos lead to problems. And so it will do that for you, and I don't expect you to be able to read that. And then, as Sarah was mentioning, it's really hard to choose languages to search in. So it will also suggest languages based on how many articles, or how many journals, are actually published in languages other than English for that topic. And so here it turns out that, you know, Dutch was very relevant, so I had it translate the search into Dutch. I don't know if these are the best terms to use, but at least it helps me pull back some relevant articles. And so if you want to learn all about the actual methods underlying it, not just the overview, you can read about it in Methods in Ecology and Evolution. And so, you know, at that point, as a developer, I'm like, okay, that's done, I made a package, everything's great. But that doesn't necessarily mean that end users can actually access the methods implemented in it. So I'm going to turn it back to Sarah to talk a little bit about that. Great. So one of the barriers to using these tools, and I think we've heard this in our other two presentations as well, is just a lack of specialized skills, largely coding skills. And there really is quite a steep learning curve here when it comes to learning how to code, both in terms of coding languages and also the associated software involved. And to some this can really feel like an insurmountable barrier, especially for those folks with no coding experience, or with minimal bandwidth or resources available to learn, or people working in an environment where there are few people around who have or need this skill. Next slide. So how do we address the skill gap? We know that just learning some basic coding can really empower end users like librarians and researchers to use the many open source tools that are being developed for evidence synthesis, like litsearchr.
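The Boolean search writing step described above, joining synonyms within a concept group with OR, joining the concept groups with AND, quoting multi-word phrases, and truncating single words with a wildcard, can be illustrated with a toy sketch. This is not litsearchr's `write_search`; the strip-a-trailing-s truncation rule in particular is a crude stand-in for real stemming:

```python
def format_term(term):
    """Quote multi-word phrases; truncate single words with a wildcard
    so e.g. 'bird*' matches bird and birds (crude rule for the sketch)."""
    if " " in term:
        return f'"{term}"'
    return term.rstrip("s") + "*"

def write_search(groups):
    """Join synonyms within a concept group with OR, and the concept
    groups themselves with AND, mirroring the structure of a typical
    systematic-review Boolean search."""
    clauses = [
        "(" + " OR ".join(format_term(t) for t in group) + ")"
        for group in groups
    ]
    return " AND ".join(clauses)

# Two concept groups: the animal, and the fragmentation concept.
search = write_search([
    ["birds", "avian"],
    ["habitat fragmentation", "patch size"],
])
# -> '(bird* OR avian*) AND ("habitat fragmentation" OR "patch size")'
```

Automating this is exactly what avoids the typo problem Eliza mentions: the quoting and truncation rules are applied the same way to every term.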
And while there are many free video tutorials and learning platforms out there, this sort of asynchronous, self-motivated learning isn't necessarily right for everyone. Learning with others in a synchronous live or online environment can really be more effective in some cases, especially if that learning environment is supportive, if it's practice oriented, and if it offers immediate applications to one's work. This can really facilitate long-term retention of skills and knowledge. Next slide. That's where the Carpentries comes in. The Carpentries is a nonprofit organization that aims to build global capacity for coding and data science skills. The Carpentries really focuses on reproducibility and open source tools, and all Carpentries curriculum is built and taught by the community and is made freely available online. The Carpentries project is comprised of Data Carpentry, Software Carpentry, and Library Carpentry, and together they provide really hundreds of hours of curriculum to teach R, Python, and other open source data science tools and methods. Next slide. So litsearchr really clearly presents a great opportunity to improve search efficiency and minimize term selection bias, both of which are concerns for very busy information specialists and librarians like myself, who support evidence synthesis work in many different contexts, as well as for researchers doing this kind of work. But probably most librarians don't have coding skills, and so that can present a real barrier to using something like litsearchr. With Library Carpentry in particular, we saw an opportunity to build coding capacity and knowledge amongst this key stakeholder group in evidence synthesis work, by developing a Carpentries lesson introducing R in an evidence synthesis context using litsearchr. And the credit for this idea really goes to Amelia Kallaher. She's a social sciences librarian at Cornell University. That was really her idea, and she brought Eliza and myself together as well.
And she developed a lot of the content as well. Next slide. So, this is probably, I think, the first evidence synthesis-related Carpentries lesson out there, and it was piloted to a group of virtual learners in August 2020. It provides learners with an immediate application of coding to their work conducting and supporting evidence synthesis. Live coding, hands-on exercises, and a well supported learning environment are all hallmarks of a Carpentries workshop. So the lesson is now freely available, it's up in the Carpentries Incubator, and it's open for anyone to contribute to and help build on. There's really a lot of opportunity, I think, with the Carpentries to build other lessons related to evidence synthesis using, you know, many of the packages that come out of, for example, the hackathon. And I really encourage folks to learn more about the organization and consider contributing. Yeah, so, you know, that's one really good way to make R packages easier for people to access: actually provide training in R. But then there's one further step to making things easier to access, which both Elka and Trevor touched on, which is to make a GUI, a graphical user interface, what I just call point and click, so you can just point and click and not have to do any coding. And it's really easy to do this for R packages using Shiny apps. You can just write all the code in R and then output a graphical interface. And so I did this for litsearchr, because I kept, you know, thinking, oh, it's great that it's out there as an R package, but there are so many people who can't actually get into R. And so it does everything that litsearchr does, but you can do it just by pointing and clicking. So where, if you're using it as a package, you would type import results, here you can just browse and upload a file, so you can upload a bunch of bibliographic data, and you can remove duplicates just by clicking.
And then it'll extract keywords using the exact same logic and spit out all the potential terms that you can see here. You can look at the network, which is not the most useful part of doing this, but it shows you kind of what's going on behind the scenes. And then it'll suggest search terms based on how important they are in the network. And there are all sorts of options to fiddle with to decide, you know, how important terms should be before you'll consider them. And then in addition to just selecting search terms, it lets users build full searches, uploading everything, translating it, doing all that, and then, oops, sorry, also checking the comprehensiveness of the search: is it retrieving all the articles that you think it should be retrieving? And so, you know, going from this challenge of how do we actually find search terms, through a conceptual solution, to a method, to a package, to point and click, it's all just making it easier for people to access these tools and making it easier to do really quick synthesis. I'm going to turn it back to Sarah. Great, thanks Eliza. So to basically wrap up, we talked about identifying challenges and limitations, and determining whether a new tool or method is needed, or if an existing tool or method needs to be made more accessible. Communication between end users and developers is critical, and I think we've heard this from other presenters as well, especially in identifying needs and accessibility barriers. We focused on skill accessibility in this particular talk, but there's really a huge range of accessibility considerations, and a GUI isn't necessarily going to address all of those, so certainly keep other types of accessibility in mind. Ultimately we need more communication and collaboration between users and developers, and end user needs should be taken into account really from the early stages of tool development.
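As an aside, the comprehensiveness check Eliza mentions amounts to computing recall against a benchmark set of articles already known to be relevant. A toy sketch with exact, case-insensitive title matching; real tools typically use fuzzier matching, and the titles here are invented:

```python
def check_recall(benchmark_titles, retrieved_titles):
    """Fraction of known-relevant (benchmark) articles that the
    candidate search actually retrieved, matched case-insensitively
    on exact title."""
    retrieved = {t.lower() for t in retrieved_titles}
    hits = [t for t in benchmark_titles if t.lower() in retrieved]
    return len(hits) / len(benchmark_titles)

recall = check_recall(
    ["Patch size and bird occupancy", "Edge effects in fragmented forests"],
    ["patch size and bird occupancy", "Matrix permeability in agricultural landscapes"],
)
# One of the two benchmark articles was retrieved -> recall == 0.5,
# a signal that the search may need broader terms.
```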
So we're really looking forward to hearing some of the discussion on these topics and really thinking about ways that this sort of collaboration can be facilitated. So I'm going to turn it over to Alex for the discussion questions. Thanks everyone.