All right, our next presentation is entitled "The Value of Accessible Packages for Stakeholders in Government," and it will be presented by Trevor Riley of the US National Oceanic and Atmospheric Administration Central Library and Kate Schofield of the Center for Public Health and Environmental Assessment at the US Environmental Protection Agency.

All right, thanks, Sarah. Hi everyone, thanks for joining us today. As this presentation is pretty brief, I'd like to say upfront that I'm really looking forward to the conversations that come out of today's talks. I'm continuously left with a feeling of awe after speaking with members of the evidence synthesis community, especially those who are part of the evidence synthesis hackathon community. I hope that the insight folks gain across the various sessions of the conference will contribute to the idea we're talking about today of building a more lively evidence synthesis ecosystem. I'm also hopeful that folks gain a better understanding of the challenges that different stakeholder groups face. This session was developed to provide some insight into our particular federal research groups and to provide some examples of how we're integrating tools into the research process.

So at NOAA we have a number of line offices that have the same level of name recognition as the agency itself, such as the National Weather Service, the National Ocean Service, and the National Marine Fisheries Service, to name a few. If you go to NOAA's website, you'll see a statement that NOAA science stretches from the surface of the sun to the depths of the ocean floor, and this is really represented in the variety of requests that our research team receives. We've done literature reviews on everything from space weather to benthic invertebrates. And beyond that range of research topics, we're also working with clients with a range of backgrounds and responsibilities. On any given day you could be working with a fisheries biologist, an economist, a recovery coordinator, a meteorologist, even the director of a national program. So while scientists in the field may be working on topics such as developing best practices for microplastic extraction from bivalves, a program manager might be using the literature to help determine funding priorities and how best to allocate time on research vessels. Those are two examples of projects that we've worked on.

Beyond that, we're trying to ensure that our clients have the best available science to make decisions; this is a mandate in the Endangered Species Act. For projects such as this, we'll work to provide as thorough a review as possible, with full-text access to the literature. In case you're wondering, these two species are the common angel shark and the hawksbill sea turtle. Another couple of projects we've done: as mentioned here on the slide, our work also goes into building other resources, such as the Great Lakes Aquatic Nonindigenous Species Information System for invasive species, which is a really great resource for researchers in the Great Lakes region. But our work doesn't stop at the hard sciences. We also work with stakeholders who are legal experts; these folks serve as US representatives for intergovernmental working groups such as the Arctic Council. There are also economists and social scientists who use the literature to gather a better understanding of the value of NOAA products and services.
They also examine things such as co-governance and co-management models with Indigenous peoples. So it really is a lot. And due to our small size, roughly three to three and a half FTEs, we're really looking to develop and integrate tools into our process that can help save time. Because of our size, and the work we're still doing to educate our community, we haven't been part of a formal evidence synthesis process, such as a systematic review, at this point. However, our group has worked to develop processes and methods that take into consideration the best practices outlined in guidance such as PRISMA-S and ROSES, and we see it as our responsibility to ensure that our process is transparent and repeatable.

This diagram is something our team is currently working to finalize, so it's still a little in draft form. It's based on the idea of service design; these types of diagrams go by the name of service blueprints. It has a number of uses, including helping our clients better understand our processes. In this case, it helps to highlight how we've begun to integrate the use of these packages into our work. You can see here at the bottom that packages currently stretch across many of our processes, including scoping, search strategy development, search string development, the search itself, and metadata cleaning.

So again, our team is on the smaller side, and we don't have anybody with a strong programming background. While we can play around with R and experiment a bit, for tools to have any real chance of being adopted into our process, a GUI is a must. So you may be asking yourself: if our team is reliant on packages with a GUI, what really is out there that's available? And that's really the point I'd like to get to: as these packages develop, they need to be accessible to groups that don't have that programming capability or background. This is important because, first of all, procurement and IT review in the federal government are very time-intensive. It can take up to a year, sometimes longer, to get things through security review, for good reason, but that's also very costly because it slows processes down. So I can't stress enough how important that aspect is. The ability to install and run a package the same day you learn about it, or better yet to open the GUI and start playing around with it, speeds up our ability to adopt tools and to educate our community about evidence synthesis practices. And second, a lot of these tools have been developed to solve the same issues that we're running into: specifically ASySD, the deduplication package by Kaitlyn Hair that was developed and presented at last year's conference, and also litsearchr, developed by Eliza Grames, who's going to be speaking here soon. Once Eliza had put out the GUI for litsearchr, we were able to integrate it into our standard processes and procedures very quickly. So again, that GUI aspect is very important to us; a sketch of what the scripted side of that workflow looks like follows below.

Okay, so I said this was going to be very brief. Before I hand you off to Kate, I want to share some of the things my team is looking ahead towards. We really see increased uptake of and demand for evidence synthesis at NOAA, and, going hand in hand with that, an increased need to educate the NOAA community on evidence synthesis methods, the value of the products, and the different use cases for different stakeholders.
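To give a concrete sense of the scripted workflow behind the GUIs mentioned above, here is a minimal sketch in R of the import, deduplication, and keyword extraction steps, assuming RIS exports from multiple databases. The directory path and column names are illustrative assumptions, and the ASySD and litsearchr calls reflect those packages' documented entry points; check the current package documentation before relying on them.

```r
# Minimal sketch of a scripted screening-prep workflow using the packages
# named in the talk. Paths and column names are illustrative assumptions;
# verify function signatures against current ASySD / litsearchr docs.

library(litsearchr)  # keyword extraction and search string development
library(ASySD)       # automated deduplication of citation records

# 1. Import raw database exports (e.g., RIS files saved from each database)
records <- litsearchr::import_results(directory = "exports/")  # assumed path

# 2. Deduplicate across databases. dedup_citations() is assumed here to
#    return a list whose 'unique' element holds the deduplicated records,
#    and to expect standard bibliographic columns (title, year, doi, etc.).
deduped <- ASySD::dedup_citations(records)
unique_records <- deduped$unique

# 3. Mine titles and abstracts for candidate terms to refine the search
#    string; "fakerake" is litsearchr's fast keyword-extraction method.
terms <- litsearchr::extract_terms(
  text     = paste(unique_records$title, unique_records$abstract),
  method   = "fakerake",
  min_freq = 2,   # keep terms appearing in at least two records
  min_n    = 2    # keep phrases of at least two words
)
head(terms)
```

The appeal of GUI front ends like litsearchr's is that a team without programming capacity can run essentially this same pipeline without writing any of the code above.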
We're also working to speed up how we organize literature; think back to that Endangered Species Act corpus of literature. We're working on automated literature classification through topic modeling, again hoping to use some of the tools that are in development. We're also building an internal research citation library, a repository for all the gray literature, because when you're doing this type of research, especially in the environmental sciences, there's a lot of gray literature involved. So we're really looking for tools out there that can help clean metadata and make sure we're not spending hours and hours just doing data entry, so we can put our time to real use and tackle more projects. And then finally, there's our package that is being developed as part of the hackathon this week. It's been dubbed CiteSource, and that package will hopefully be able to track citation origins, so it's something that might be put to use in analyzing database efficacy and coverage. As someone who's relatively new to the evidence synthesis world, the past sessions and the tools presented as part of the evidence synthesis hackathon have made a huge impact on my team's processes and methods, and they've shown me that there are a lot of folks out there who want to work on and answer the same questions that I have. So I hope that this presentation provides fodder for further discussion, and I look forward to seeing what the community does moving forward. And with that, I'm going to hand you off to Kate. Thank you.

Great. Thanks, Trevor. So I'll speak a bit about my experiences with evidence synthesis at another federal agency in the United States, the US Environmental Protection Agency, or EPA. The US EPA's mission is to protect human health and the environment, and it accomplishes this by many means, which are listed here. The ones most relevant for what I'll be talking about today are developing and enforcing regulations, studying environmental issues, and publishing information. Different parts of EPA are focused on these different actions. For example, EPA's program offices, like the Office of Water, are focused on the policy decisions involved in developing and enforcing regulations, while EPA's Office of Research and Development, where I'm located, is focused on conducting primary research and evidence synthesis. Ultimately, these three pieces, primary research, evidence synthesis, and policy, and thus the different parts of EPA working on these pieces, need to work in an integrated fashion to support and put forth science-based management decisions. Next slide, please.

My position is located in EPA's Center for Public Health and Environmental Assessment, or CPHEA, within the Office of Research and Development. CPHEA develops human health and environmental assessments that support EPA's program offices and regions, states, and tribes. CPHEA is tasked with providing the science needed to understand how people and nature interact, to support assessments and policies that protect human health and ecological integrity. CPHEA's work centers on the evidence synthesis part of those three components that I highlighted on the previous slide. A large part of CPHEA is focused on the human health side, particularly in terms of assessing the health hazards of different chemicals through the Integrated Risk Information System, or IRIS, program.
The IRIS program has a well-established process for identifying and integrating evidence for health outcomes. CPHEA also includes a much smaller group focused on ecology rather than human health, and our group is often tasked with synthesizing evidence across a diverse set of topics focused on the physical, chemical, and biological integrity of ecosystems. Some example assessments that our group has been involved with are shown here. All of these were initiated because a program office or region requested state-of-the-science information about a topic of particular concern to them. The scope of these assessments can vary widely in terms of both the complexity of the topic and the approach taken. Given this, today I'd just like to quickly highlight a couple of broad challenges that our group faces in our evidence synthesis work. Next slide, please.

The first is navigating issues of assessment scope and scale. Sometimes we're asked to synthesize evidence related to a very specific question. I think of these assessments as taking a deep dive into a small pool. This might involve examining the relationships between one or a very small number of specific stressors and endpoints, such as the effect of total nitrogen and total phosphorus concentrations on stream algae. These types of questions may be well suited to systematic review, but we're often limited in the time and other resources we have available to conduct even these fairly finely scoped reviews. At other times, we're asked to evaluate evidence related to a much broader question, and our approach tends to shift to more of a shallow dive in a very big pool. Instead of looking at just a limited number of relationships, we need to simultaneously evaluate many, many relationships. The conceptual model shown here is one of 13 diagrams that we developed for an assessment of potential mining impacts in Bristol Bay, Alaska. This might translate into what would be more than 50 systematic reviews, or maybe one really big systematic map, but usually what we're after is an end product somewhere in the middle: one that's rigorous and feasible, given the often really tight timelines we're working under, and that provides us with some quantitative information about at least some of the relationships we're most interested in. Accessible, easy-to-use tools and methods that help us plan, conduct, and communicate assessments at both ends of this spectrum, and really everywhere in between, are incredibly helpful. A well-developed, standard portfolio of evidence synthesis options, with clearly identified tools and methods that can be used to save time and improve efficiency under each option, would allow us to work with our partners from the outset and throughout the evidence synthesis process to identify what's feasible given different decisions about assessment scope and scale. Next slide, please.

And that leads me to the second broad challenge, which is navigating issues of process and, perhaps most important, communication. In our synthesis work, we're working with a diverse set of partners and stakeholders that often includes folks from a variety of organizations, including state and federal governments, nonprofit organizations, and academia, and folks with diverse backgrounds, concerns, experiences, and areas of expertise.
Communicating across this diversity of players, about both the science issues relevant to the focal topics and the process issues relevant to how and why to conduct an evidence synthesis, is incredibly important but can be challenging given the varied starting points of everyone involved. As evidence synthesizers, we need to know about the available tools and methods that can help at each of these process steps, as well as the strengths and weaknesses of those methods and tools. We also need to be able to communicate this information to everyone involved, especially the people requesting the assessment and those who may be using it to inform policy decisions. Increasing awareness and knowledge about evidence synthesis methods and tools among evidence users, that is, the people or groups either explicitly or even implicitly requesting evidence synthesis, would help create some shared understanding of what's feasible, improve evidence users' ability to structure their synthesis requests in ways that better address the key questions they're looking to answer, and improve evidence synthesizers' ability to help facilitate those discussions.

And finally, I'd like to end by stressing the importance of evidence accessibility, that is, making sure that the evidence incorporated into a synthesis is available to other users for other assessments. Trevor touched on a different aspect of accessibility, access to the tools themselves, which is also an issue. But given the rate at which new information becomes available, and the fact that many of us are looking at the effects of similar stressors, similar interventions, or other similar areas of inquiry, it's really valuable if the evidence accumulated in a synthesis is available to others rather than just sitting in files on our desktops. For many of the folks we work with, this is especially important because they often don't have easy access to published papers. And this need for accessibility applies to both the synthetic results of an analysis and the individual bits of evidence that contributed to those synthetic results. This has been one area of focus for us over the past few years: really trying to build capacity to give other users, both within and outside of EPA, access to the evidence we gather so that they can then apply it as needed to their own questions. So thank you for your time today, and Trevor and I look forward to answering any questions you might have in the discussion session.