Thanks very much for coming to watch my tutorial on citationchaser. I'm going to talk you through citationchaser, which is an R package and Shiny app. Today I'm going to focus mainly on the Shiny app, but the functionality in the R package is pretty much the same; you can access that through CRAN or through its GitHub repository, and the links are down there on the bottom left. First of all, just to thank and introduce the team: I've been joined by Matthew Grainger and Charles Gray in building citationchaser, and a special mention goes to Lens.org, who provided free access to their scholarly API, which serves the data underlying citationchaser.

So what is citation chasing? It's a really useful part of the searching process of an evidence review, systematic review or meta-analysis, and it's a supplementary method for finding additional articles alongside searching bibliographic databases. If we imagine we have one article, backward citation chasing is where we assemble the list of references from that article, and forward citation chasing is where we identify all articles that have cited our article. So if we start with a large set of starting articles, we can gather all of the references within those articles, and we can find all of the articles that have cited that set of articles. It's a useful additional way of mopping up articles we might have missed by searching for keywords. It goes by a number of different synonyms, some of which are up here; sometimes these synonyms refer specifically to backward or forward citation chasing independently, but they all roughly refer to some aspect of using citation networks, rather than search terms, to find articles. We can start from various different points: we could start from the relevant studies or articles included in our review, or, if we've identified relevant reviews, we could use those as starting points.
Or perhaps, if we've assembled a benchmark set of articles that we know should be relevant and used it to test our search string, that could be a starting point as well.

Currently, citation chasing is not used very much in gold-standard systematic reviews. In Collaboration for Environmental Evidence reviews, around 63% used backward citation chasing, none reported using forward citation chasing, and for 31% what was done is unclear; that's from an assessment of just 16 systematic reviews. In Campbell reviews it's slightly better: 88% used backward citation chasing, though it's not clear how many used forward. In Cochrane reviews a similar proportion, 87%, had used backward citation chasing, only 9% had used forward citation chasing, and for just 1.5% the list of articles used was unclear. So it's a bit of a mixed bag: very few people use forward citation chasing, and it could be done better. That's why we built citationchaser.

Some of the challenges are that it's unclear what the starting points should be and what best practice is for carrying it out. It's often done by hand, with people working through printed or digital PDFs to search for individual records. Even done digitally it's quite time-consuming, because it's very difficult to do in bulk. Some tools allow it already: in Scopus, for example, you can put in a list of DOIs and see what that list cites or what has cited it. But those tools are based on individual databases, so the coverage is not comprehensive; even then, you might not get all of the references in the reference lists of your articles.

So we've used Lens.org, which is a free-to-use bibliographic meta-database. As of the start of this year, 2022, it held 245 million records, and you can see from the schematic where those records mainly come from: mostly Microsoft Academic Graph, a lot from Crossref and PubMed, and some from CORE as well.
And they're adding more and more records; I believe they have about six million from OpenAlex right now as well. So it's a very large database that draws on multiple different bibliographic sources, which makes it a great resource. We make use of the Lens.org scholarly API, which allows us to interrogate its core database.

This is what citationchaser looks like when you arrive on the landing page, and there are two ways to use it. The first is to enter your article identifiers directly: in the six boxes at the top you can enter DOIs, PubMed IDs, PubMed Central IDs, Microsoft Academic identifiers, CORE identifiers, or Lens.org identifiers. Alternatively, you can upload a list of identifiers and their types in a CSV file (a template CSV is available by clicking on 'Help'), or you can upload an RIS file. If you have an RIS file whose records contain DOIs, you can upload it there; bear in mind that when you upload an RIS file, the app simply strips out the DOIs. Then you click 'Load my input articles' and you're ready to start.

The other use case is a direct referral. This is useful if you're coming into citationchaser from another review-management tool, and it has mainly been developed for developers. They can append the DOIs, or any other identifiers, after a question mark in the URL; the question mark introduces the query string, which the server would otherwise ignore, but citationchaser extracts it. You can pass a comma-separated list of DOIs, and you can pass multiple identifier types by separating them with an ampersand, for example DOIs followed by PubMed IDs. Those are then automatically used to populate the list of your starting articles, and you can get going on forward and backward citation chasing straight away. You can also see there the number of references and citations for your starting articles.
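As a sketch, a direct-referral URL built the way just described might look like the following. The parameter names (doi, pmid) are assumed from the description above, and the identifiers are placeholders, not real records:

```
https://estech.shinyapps.io/citationchaser/?doi=10.1000/example1,10.1000/example2&pmid=12345678,23456789
```

Everything after the question mark is the query string: each identifier type is one key, its values are comma-separated, and the ampersand separates the two keys.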
So what I've done is prepare a set of 28 articles based on this search, which comes from a review of reviews on greenhouse gas emissions and farming, just a practice example. You can find those 28 records in the file directs.ris. If you go to 'Article input', you can upload them as an RIS file, and when you click to load your articles, it shows you that 27 of the 28 articles had identifiers. At the moment we can only use identifiers, so it's found 27 DOIs, and it shows you which records were matched and which weren't. It then populates a table of your articles at the bottom. You can download that as an RIS file if you want to, although since we uploaded an RIS file it would be identical.

If you then click on the 'References' tab, you come to the backward citation chasing option. Click 'Search all referenced articles in Lens.org' and it will automatically bring in all of the references from all 27 articles. Here you can see there were 2,300 references in total, corresponding to 1,859 unique IDs, so there was quite a bit of overlap. You can download those as an RIS file by clicking the white button here.

If you then click on 'Citations', it will go away and find which articles have cited those 27 articles. It found 12,065 citations in total, corresponding to 10,419 unique articles, so there's an overlap of roughly 1,600: that's the number of times the same citing article cited more than one of the starting articles. You can download that RIS file of 10,419 articles by clicking the white button here, but bear in mind it will take a while: the older your starting articles are, the more they will have been cited, so it can take quite a lot of time to bring back that many records, even though it is still working in the background. You can also click on 'Analysis' to see which articles have been cited most frequently, and you can get a network diagram if you're interested.
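The same backward and forward chasing can be run from R with the citationchaser package. This is a minimal sketch, assuming the package's get_refs() interface (a vector of identifiers, their type, whether to fetch references or citations, and a Lens.org API token); the token and DOIs below are placeholders for illustration:

```r
# Sketch of citation chasing with the citationchaser R package.
# The token and DOIs are placeholders - request your own token
# from Lens.org and substitute your real starting articles.
library(citationchaser)

token <- "YOUR_LENS_TOKEN"
dois  <- c("10.1000/example1", "10.1000/example2")

# Backward citation chasing: references cited BY the starting articles
refs <- get_refs(article_list = dois,
                 type         = "doi",
                 get_records  = "references",
                 token        = token)

# Forward citation chasing: articles that CITE the starting articles
cits <- get_refs(article_list = dois,
                 type         = "doi",
                 get_records  = "citations",
                 token        = token)
```

Unlike the RIS downloads from the app, the objects returned here retain the richer record structure, which is what the later section on the R package's data frame output refers to.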
At the moment the functionality there is relatively limited.

You can integrate citationchaser into your workflows. For example, starting with your included articles, relevant reviews, or a benchmark list, you could add the results of your citation chasing to your search results and then remove duplicates. What you're left with is anything that was missed by your search but found by citation chasing, so you don't have to screen all of your citation-chasing results: you can remove the duplicates you've already screened as part of your bibliographic search and just look at the unique records that bibliographic searching missed.

Another point is the extra functionality in the R package. The Shiny app only gives you RIS files, but if you do the same thing in R you get access to a much richer data frame, and you can see a snapshot of it here. It's a nested data frame, because it comes from a JSON response, so the external IDs and the author information, such as affiliations, are much richer in this output. If you want all of that information, use the R package.

As for future developments, we want to build on the analysis tab to allow co-citation analysis, and weighting or filtering of the results based on how frequently articles occur in your full network. We'd also like to build in deduplication against search results, so that once you have your RIS files you can automatically remove all of the articles you've already found; there is some overlap here with existing projects such as ASySD, a deduplication R package, so you can already do this elsewhere. And we'd like to allow searching on titles; at the moment the app relies on exact matching of identifiers.
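Before moving on, here is a minimal sketch of the deduplication step described above, removing already-screened search results from the citation-chasing output. It assumes both RIS files (the names are placeholders) parse into data frames with a doi column; synthesisr is used for RIS parsing and dplyr for the set difference:

```r
# Sketch: keep only citation-chasing results that the database
# search missed. File names are placeholders. Assumes the parsed
# data frames expose a 'doi' column, which may vary by RIS source.
library(synthesisr)
library(dplyr)

search_results <- read_refs("database_search.ris")
chased         <- read_refs("citation_chasing.ris")

# Records found by citation chasing but absent from the search,
# matched on DOI (a simple key; fuzzier matching may be needed
# for records without DOIs)
new_records <- chased %>%
  filter(!is.na(doi)) %>%
  anti_join(search_results, by = "doi")
```

Only the rows in new_records would then need screening, rather than the full citation-chasing output.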
Searching on titles is quite messy and labour-intensive, so we haven't built that in yet, but we're considering how best to do it. So thanks very much; I hope you found this useful. Try it out with the saved rex.ris file and have a go yourself. Check out the app on shinyapps.io, and the GitHub repository if you want to see what's actively going on at the moment and to make comments, suggestions, or raise an issue. You can find the package on CRAN as well. Thanks very much.