Welcome to the webinar on Building Tools at the Boundary of Climate Science, Security, and Action. I'm Denise Ross with the Phase Zero project at New America. First, a few housekeeping things. All attendees are on mute, but throughout this webinar you can submit questions through the Q&A button located at the bottom of your screen. We will answer questions during the last one-third of the webinar, so there'll be plenty of time for Q&A. After the webinar, we will email you the recording and a write-up from this session so you can share it with colleagues who were not able to make it. This is the first of three webinars on this field. The next will be on the role of open data and open source development in building digital tools that span the boundary of data, science, and action. The third will focus on how to go beyond just delivering bad news about risk and offer users solutions about actions they can take.

As I mentioned, I'm Denise Ross. I'm the data strategy lead with the Phase Zero project at New America, where we're looking at root causes of war, specifically the role that climate change plays as a threat multiplier. I've spent nearly 20 years of my career building organizations and digital tools that bridge data and action. For 10 years, I was with a local data intermediary in New Orleans. That time included Hurricane Katrina, and the actionable data we published was crucial for aligning efforts for the recovery. I then spent several years with Mayor Landrieu's administration in the City of New Orleans, where I launched their open data initiative. After that, I joined the Obama White House, where I led crowdsourcing efforts on data about energy for disaster-affected communities and co-founded the White House Police Data Initiative in the wake of the Ferguson protests. I mention all of this background primarily to illustrate that I waited far too long to connect with the academic researchers who study this boundary of science and action. So I'm honored to have with us today Dave White from Arizona State University, because he is an applied expert in this field.

Until this last year, I'd always thought that the most important thing about democratizing data was that your data needs to be credible. But I've seen so many websites and digital tools with credible data that aren't getting used; there must be something more going on. Once I dove into the literature about boundary objects, I understood why so many attempts to span the boundary between data, science, and practice have failed. Dave will go into these concepts more thoroughly, but for a quick overview: credibility covers the validity, reliability, and technical accuracy of knowledge and analysis. That's important, but it's really a small piece of what's needed to connect with end users. Salience refers to how relevant the information is to decision makers' needs, and legitimacy is users' perception that the information is respectful of stakeholders' diverse values and beliefs. If a tool fails in any of these categories, the data are unlikely to make it into decisions.
And in fact, this framework from the literature turns out to be extremely relevant in solving some of the key barriers we found at the Phase Zero project in creating tools that deliver actionable data to people on the ground making decisions in the field of climate security. Before we started to build any tools at the Phase Zero project, we conducted extensive research on what tools already exist in the climate change, natural resources, and security space, casting a very wide net. We assessed more than 60 publicly available tools and interviewed two dozen tool makers about what works and what doesn't. Then, speaking of what wasn't working, we asked 30 climate security experts about barriers they've noticed in tool uptake by security professionals. We teamed up with the Center for Climate and Security and asked experts at the Planetary Security Conference in The Hague a question: what are the barriers to turning data about climate security risks into concrete actions to prepare? The top 10 barriers identified fall nicely into these categories of credibility, salience, and legitimacy. Just to cover the top item in each category: one huge challenge to the credibility of the science connecting climate change and conflict is that there's a lot of uncertainty in the science around causality, and it's easy to get muddled in that scientific debate. On salience, one key thing we hear from national security professionals is that even if climate change is clearly going to be a threat multiplier, addressing that challenge is, quote, "not my job." On legitimacy, there's often not the political will to trust the science or the data, and there's a large cultural gulf between experts in climate security and the people who make decisions that could head off the worst outcomes.

From the 60 tools we studied and the two dozen tool builders we interviewed, we have two broad findings. First, most tools are built for raising awareness and not for informing action. Very few tools address solutions, but we know that a key part of spurring action is making it clear what those actions might be. Second, most tools are focused on tech and data rather than on users and the decisions they need to make. This is totally understandable, as the science and modeling are complex, and sometimes you just need a room full of engineers to figure it out. However, that pivot from technology-centered design to human-centered design is a tough one, and Dave will make the case that users should be working alongside those engineers from the get-go. One example of how technology-centered design manifests is that global data sets often only get to the country level and are published with lags of one to five, sometimes ten, years. But users on the ground consistently say that they need greater geographic and temporal granularity in order to use the data to make decisions. This is an area where the technical limitations are driving the design of the tool, and the resulting mismatch with user needs means that users can't use the tool to turn data into action. For the remainder of my presentation, I'll share three examples of tools that are bucking these two trends and doing a great job of informing action and focusing on user needs. First off, I'll cover tools that have made the leap from data to action. These include Global Forest Watch, ASU's Decision Center for a Desert City, and FAO's Early Warning Early Action. Global Forest Watch is an online platform that provides data and tools for monitoring forests.
The website has alerts that can be customized so they are highly relevant to the geographic area a user is interested in. The alerts are also explicit about whether an alert is confirmed or unconfirmed. When you can clearly communicate the limitations of the data, it's more credible for users, as long as you don't overwhelm them with the uncertainty. Global Forest Watch has an action network already in place on the ground that uses these notifications to notify local authorities, who know how to take it from there. They don't need to connect with all of the local authorities individually if they're tied to intermediary users like the Amazon Conservation Association.

ASU's WaterSim is a model-based decision support tool for water resources. Dave's team at ASU works on this tool. They have a bank of 27 policy choices behind the scenes around water, and they configure the tool with the five choices most relevant for each region or municipality they partner with. For example, given the parameters for the five policy choices on the left (reclaimed wastewater, farm water used by cities, population growth, etc.), the graph shows what the sources of water will need to be for the Phoenix region to meet demand. Users can choose to adjust the inputs, for example by choosing to reclaim a higher percentage of water. The tool designers built in just-in-time learning, so users who are not water management experts can understand each policy with a real-world example and some information on how to implement it.

The UN Food and Agriculture Organization also has a tool. They call it Early Warning Early Action. Their forecasts of new emergencies or deterioration of existing conditions come out quarterly. What makes their tool stand apart from most risk forecasting is that they also recommend early actions to address that risk. For example, for the quarter ending this March, they forecasted that Yemen was at high risk for famine and made specific recommendations, such as enabling livestock vaccinations to protect the assets of pastoralists. What's especially powerful about this EWEA tool is that it is also tied to a funding source to help implement the solutions they're recommending. In 2015, FAO tested their Early Warning Early Action system for the first time, driving funds to the Shabelle River region in anticipation of catastrophic flooding. The early interventions they recommended yielded nearly a 400% return on investment and demonstrated the value of this type of information for decision-making and resource allocation.

Moving on to the final three case studies. As I mentioned earlier, given the complexity of the data, science, and technology, many tools have been built in a technology-centered rather than a human-centered process. As the data and technology mature, we're starting to see a shift toward users, and toward designing tools for the tasks they need to accomplish and the decisions they need to make. This often means a shift in audience from something generic, like policymakers and the general public, to something more specific, like sustainability directors in city government or even the owner of a bottling plant in Georgia. I'm going to dive into three tools that use an approach of building these tools with, not just for, the end users. A side benefit of this user-centered process is that working with clients or partners means they often bring smaller-area data or more recent data to the table, making the resulting tool more salient and legitimate.
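To make the WaterSim pattern described above more concrete, here is a minimal sketch, in Python, of how a model-based decision support tool can expose a small set of policy levers over a supply-and-demand projection. Every name, number, and the simple water-balance logic below are hypothetical illustrations of the pattern, not WaterSim's actual model or code.

```python
# Minimal sketch of a WaterSim-style decision loop. All policy names,
# coefficients, and the water-balance logic are hypothetical illustrations,
# not WaterSim's actual model or code.

BASELINE = {
    "reclaimed_fraction": 0.10,  # share of municipal demand met by reclaimed water
    "farm_transfer_kaf": 50.0,   # farm water transferred to cities (kaf/year)
    "population_growth": 0.015,  # annual population growth rate
}

def project_demand(years, pop_growth, base_demand_kaf=1000.0):
    """Project municipal demand (thousand acre-feet/year) under growth."""
    return [base_demand_kaf * (1 + pop_growth) ** t for t in range(years)]

def project_supply(demand, reclaimed_fraction, farm_transfer_kaf,
                   surface_kaf=700.0, groundwater_kaf=200.0):
    """Project supply by source; reclaimed water scales with demand."""
    return [{"surface": surface_kaf,
             "groundwater": groundwater_kaf,
             "farm_transfer": farm_transfer_kaf,
             "reclaimed": reclaimed_fraction * d} for d in demand]

def run_scenario(policy_choices, years=30):
    """Merge user-adjusted levers into the baseline and report supply gaps."""
    p = {**BASELINE, **policy_choices}
    demand = project_demand(years, p["population_growth"])
    supply = project_supply(demand, p["reclaimed_fraction"], p["farm_transfer_kaf"])
    return [{"year": t,
             "demand": round(demand[t], 1),
             "supply": round(sum(supply[t].values()), 1),
             "gap": round(demand[t] - sum(supply[t].values()), 1)}
            for t in range(years)]

# A user raising the reclaimed-water lever, as in the Phoenix example:
for row in run_scenario({"reclaimed_fraction": 0.30})[::10]:
    print(row)
```

The design point is the separation between the full bank of levers (here, the BASELINE dictionary) and the handful a given audience actually adjusts.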
Earth Genome is a nonprofit organization that worked with seven companies that are members of the World Business Council for Sustainable Development in order to reveal options for corporate decisions on water use. Their pilot project was with Dow Chemical, which has a facility in the Brazos River basin that was suffering from a years-long drought, and they were considering expensive gray infrastructure solutions. They asked the Earth Genome team if they could build a tool to help them find green solutions across the entire watershed, because what was happening upstream was affecting them downstream. Essentially, they wanted to use data to find the needle in the haystack: the places across the entire watershed where it makes sense for them to invest. Earth Genome worked alongside the client to build into the tool the data that would help Dow make a decision about where to invest in green infrastructure. For example, if they invested in restoring the wetlands just north of the Harris Reservoir, here's the projected data on what the costs and benefits would be. For a business like Dow, a financial analysis is crucial, including what the internal rate of return will be and how many years it will take to recoup their investment. Having the financials allows them to make decisions from this data.

Building a sophisticated platform like this for one-time use with a specific client would typically be cost- and time-prohibitive. This is Earth Genome's slide describing their process, and I'd like to call your attention to the bottom part here. What they're doing is creating a reusable platform for the data, science, and visualization, and then customizing that platform for their engagement with the end users. You might think of this as tool-enabled consulting, and it's a promising way to create a tool that's both relevant and legitimate in the eyes of your end users, though it does shrink your end users to a much smaller universe for each lightweight tool. But then you can more easily spin up custom tools for different audiences; the audience for this one was only Dow executives, for example. In this and other examples, tool builders create a platform that does the heavy lift of collecting data sets, cleaning and normalizing the data, building models, and creating reusable interface modules. Then they can build thin, lightweight apps on top of that platform in collaboration with each of their clients or groups of end users.

Another tool, INFORM, is first and foremost a country-level index of risk. However, they have a formal process for creating subnational data products. Here's an example of work they're doing to map risk at the municipal level in Honduras. The darker areas represent municipalities at higher risk from disasters. Note that key partners in this work are the government of the Republic of Honduras and the organization responsible for coordinating public and private disaster relief efforts. We're seeing the theme that we see in all successful downscaling, and that is the engagement of national or subnational stakeholders. In fact, the INFORM team is super clear about how this works: the work is locally owned and managed, with the technical team at INFORM headquarters playing a supportive role. They're cost-effective because they're building on top of the existing INFORM platform, so they can focus on what's unique about the area they're working in rather than wrangling the technology and design into shape for each subnational engagement.
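Returning briefly to the financial analysis in the Dow example: since decisions like that turn on internal rate of return and payback period, here is a small, self-contained sketch of how those two numbers are computed for an investment cash flow. The cash-flow figures are invented for illustration; they are not Earth Genome's or Dow's data.

```python
# Hypothetical cash flow for a wetland-restoration investment (illustrative only):
# year 0 is the upfront cost; later years are avoided-cost savings.
cash_flows = [-2_000_000, 350_000, 400_000, 450_000, 500_000, 500_000, 500_000, 500_000]

def npv(rate, flows):
    """Net present value of annual cash flows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(flows))

def irr(flows, lo=-0.99, hi=1.0, tol=1e-7):
    """Internal rate of return via bisection: the rate where NPV crosses zero."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if npv(lo, flows) * npv(mid, flows) <= 0:
            hi = mid  # sign change in the lower half: root is in [lo, mid]
        else:
            lo = mid  # otherwise the root is in [mid, hi]
        if hi - lo < tol:
            break
    return (lo + hi) / 2

def payback_years(flows):
    """First year in which cumulative cash flow turns non-negative."""
    total = 0.0
    for t, cf in enumerate(flows):
        total += cf
        if total >= 0:
            return t
    return None  # investment not recouped within the horizon

print(f"IRR: {irr(cash_flows):.1%}, payback in year {payback_years(cash_flows)}")
```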
MapX, similarly, is a platform that can be customized with local data to meet the specific needs of different partners, ranging from a drought impact needs assessment in Somalia to mapping environment and energy in Haiti. As with INFORM, the MapX team has a formalized process for engaging with their end users to create a tool that is salient and legitimate. In their process documentation, they write that helping stakeholders access and use the best available data for decision-making requires both a technical approach, which we all get is important, as well as a political process and capacity building to ensure ownership, uptake, and action. Many digital tools are not venturing into this political process, and thus fall short of producing real outcomes. Here's MapX's formal engagement process. They start with users and the problem they need to solve. Then, together with stakeholders, they co-design the data acquisition process and what a fit-for-purpose platform would look like. Then, and only then, in the third step, does the tech team go back and develop the tool. Lastly, they deploy the platform into a stakeholder process and train users on using it. When you're tackling a big data science challenge, like what we're doing with our colleagues at WRI in creating a model that will forecast the role of water stressors in future conflict, it's tempting to get all of your engineers together in a room to solve the problem first. That might get you scientific credibility, but it's this larger process of defining the problem with your end users and co-designing the tool with them that gives a tool salience and legitimacy and results in the uptake of the tool by decision-makers. To wrap up the key findings from our research: if you want to build a tool that turns data into knowledge into action, first, design beyond the problem to include the solutions. Second, build with, not for, end users. And lastly, launch tools into existing networks and decision-making and political processes. Next, Dave White will bring these empirical findings to the next level with a dive into the science of building effective tools and organizations that span science and action. Dave, it's all yours.

Thank you, Denise. I'd like to thank Denise and New America for organizing this webinar. I think it's an excellent contribution to the discussion, and it effectively links the work going on at New America and your partners with the academic community. I'd like to encourage the participants to add questions in the Q&A box so that we can address those at the end of the webinar. Again, my name is Dave White. I'm a professor in the School of Community Resources and Development at Arizona State University. I also serve as the director of the Decision Center for a Desert City. In addition, I am an affiliate of the Global Security Initiative, which is an effort at Arizona State University to link security, sustainability, and science to help solve wicked problems throughout the communities of the defense and intelligence sectors. My talk today will focus on the work that we've done at DCDC, as we call it, to co-develop boundary tools for sustainability and security, and I'll offer some lessons learned from the efforts we've conducted. So first, I'm going to introduce the notion of boundary work.
Boundary work was first developed in the sociology of science and has been applied within both the social sciences and political science to understand efforts to enhance and improve the linkages between science and knowledge and action across a variety of domains, including climate security, but also, as I'll discuss, efforts focused on sustainability. Boundary work includes several elements: first, the cooperative production of knowledge between multiple different stakeholders; second, efforts to build boundary organizations that span the policy and science communities and the different social groups, and to organizationally or institutionally link those communities; and third, the co-development of boundary objects, and here we're talking about tools at the interface of climate science and security as one type of boundary object.

To address each of these points independently: first, we need to focus on the cooperative production of knowledge by multiple different stakeholders. My colleagues Maria Carmen Lemos and Barbara Morehouse have noted that this co-production process requires substantial commitment to three components: interdisciplinary science production within the university or knowledge community, stakeholder participation, and the production of knowledge that is demonstrably usable. So getting back to these criteria of credibility, salience, and legitimacy, these are important matters when you're seeking to co-produce the science. Second, we want to focus on the design of organizations, and so we want to build our institutions and organizations in such a way that they include participation from multiple different stakeholder groups and, importantly, professional intermediaries. For example, at the Decision Center for a Desert City, we have a full-time stakeholder engagement liaison, a professional who has a PhD and is trained in science, but who also spent 20 years working in practitioner communities in environmental planning, water resource management, and land use planning. That person serves as a key intermediary who understands the social dynamics of both communities. These boundary organizations provide opportunities and incentives for the creation of boundary objects, and they are accountable to both their political and their scientific and other stakeholder communities. They help to frame and define problems, including, as Denise mentioned, issues related to temporal and spatial scale. They capitalize on the advantages of scale by bringing together multiple organizations to, for example, create data sharing agreements, and they help to manage the productive tension that can exist between these multiple social worlds. Third, we focus on the creation of boundary objects, such as models, decision support systems, and advanced scenarios, that provide opportunities for different stakeholder groups to discuss, negotiate, construct, deconstruct, and reconstruct the tools they're building. Model-based decision support systems are becoming a very popular tool for linking different facets of science with decision making.
The next part of my presentation is going to ask and answer a series of social science questions that have been the focus of our interdisciplinary work over more than a decade on this topic, within the context of water sustainability and security in the Colorado River Basin of the western United States. So first we ask: what are the most effective ways to initiate and run these kinds of organizations? And I understand many of you on this webinar are involved in this kind of work. So how do we engage in these? One example that comes from our work at DCDC is to focus on the tasks at the intersection of science and policy. Among the tasks we need to accomplish is reconciling the priorities of the scientific communities and the policy communities. There are a variety of ways to do that. We do it through a series of monthly events we call water/climate briefings, which are typically panel discussions that include 50 to 60 stakeholders, where we discuss the challenges being faced by the end user communities, including, for example, municipal water management agencies or federal government agencies that deal with security and sustainability. And we reconcile their priorities with the interesting and challenging science questions being posed by our university researchers. We then try to maximize scale-dependent data advantages through data sharing agreements. We create visualization and simulation efforts using socio-ecological modeling. And then we work on collaboration and deliberation to provide these visualizations and simulations in a way that supports specific decisions. It's an important point to note that when you're trying to create these boundary tools, you need to understand the decision processes and mechanisms of the actual targeted communities you would like your work to affect.

Next, we ask ourselves how social networks affect the use of knowledge and the uptake of the tools we're producing. What we've concluded, based on conducting social network analysis, is what may seem like an intuitive result: policymakers with greater numbers of contacts with the academics producing the tools are more likely to utilize the information produced within the tools to govern, in this case, water resources, but in general to make decisions. So, for example, we have the director of the Department of Water Resources and the director of the City of Phoenix Water Services Department as policy advisors and integral members of the model development community. At each stage, when we develop our work, it's deconstructed and critiqued by these end users, and that leads into an iterative process of redesign.

We also ask ourselves how different social groups differ in terms of their environmental concerns, their risk perceptions, and, importantly, their policy preferences. You need to understand how these different groups understand these different challenges. For example, one of the pieces of work we've conducted included surveys of different groups, including members of the general public, policy experts, and academic scientists and scholars. What we did was assess their preferences; in this particular case, the graphic is showing their preferences for different policy interventions.
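Before turning to the survey results, a brief aside on the social network analysis Dave mentions: the sketch below builds a toy contact network between policymakers and academics and computes degree centrality, the simplest proxy for "number of contacts." All nodes and edges are invented; DCDC's actual analysis is in their published work.

```python
import networkx as nx  # standard third-party library for social network analysis

# Toy contact network: edges are reported working contacts between
# policymakers (P_*) and academics (A_*). All nodes and edges are invented.
G = nx.Graph()
G.add_edges_from([
    ("P_water_director", "A_hydrologist"),
    ("P_water_director", "A_modeler"),
    ("P_water_director", "A_social_scientist"),
    ("P_city_planner", "A_modeler"),
    ("P_budget_office", "A_hydrologist"),
])

centrality = nx.degree_centrality(G)  # normalized count of contacts per node
policymakers = {n: c for n, c in centrality.items() if n.startswith("P_")}

# The published finding: policymakers with more academic contacts were more
# likely to use the tools' information. Here we simply rank them by contacts.
for name, score in sorted(policymakers.items(), key=lambda kv: -kv[1]):
    print(f"{name}: degree centrality {score:.2f}")
```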
To point out one specific example from the survey graphic: increasing the price of water to affect water demand was strongly supported by academic scientists, and to some degree by policy professionals, but much less so by the general public. This helps us to understand which policy options should be included in our tools and how those options are going to be received by different communities, and also to build, as Denise mentioned, customizable interfaces that might allow certain audiences to interact with one array of policy options and other audiences to interact with a different array of policy options. In a similar recent survey, we asked a random sample of residents in three western US cities, Las Vegas, Phoenix, and Denver, their perceptions of different strategies for transitioning their water management approaches from the current state to a more sustainable state. Understanding these different preferences among different audiences helps us to customize our modeling and tool development to be more salient to the decision makers in each of those regions, because the support they can expect to receive from their participating public is critical for understanding how well those policies might be implemented.

We also ask ourselves which techniques are most conducive to eliciting feedback from stakeholders. If we're making the argument that you need to engage these different stakeholders, how do we solicit that information? We've done it through a variety of techniques, including semi-structured interviews, survey questionnaires, and focus groups. In one study comparing focus groups with individual interviews, we found that each of these techniques had advantages and disadvantages when soliciting input from end users, especially on politically sensitive topics, and climate risk can be included among those. We found, for example, that focus groups allowed for more collaborative deliberation on very sensitive topics than open-ended, self-administered questionnaires and interviews did, so long as the focus group participants perceived that there was a clear problem-solving focus to the moderated session.

We also ask ourselves how stakeholders' mental maps and frames, the positions they bring to the tool construction process, affect the design of these kinds of tools; this includes not only the stakeholders in the external community but also the scientists, and how their perceptions of the problems and potential solutions shape the design. One of the things we found is that the way stakeholders understand these issues has implications for the relevance, or salience, of the tools to their decision making, just as the scientists' understanding matters for how the tools are designed. In this case, the slide is showing a code co-occurrence model. What this means is that, based on dozens of interviews with stakeholders involved in food, energy, and water decision making within central Arizona, this is a mental map of how stakeholders understand the nexus between issues of energy and water, and what the critical components are: understanding the physical infrastructure system of groundwater pumping, conveyance, recharge, and so on, but also the social and economic dimensions of cost and the allocation of water to land through the legal system.
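For readers unfamiliar with code co-occurrence models, here is a minimal sketch of the computation that underlies a map like this one: counting how often pairs of qualitative codes are applied to the same interview segment. The coded segments below are invented stand-ins, not DCDC's interview data.

```python
from collections import Counter
from itertools import combinations

# Each interview segment carries the qualitative codes an analyst applied to it.
# These example segments and code names are invented for illustration.
coded_segments = [
    {"groundwater_pumping", "infrastructure", "cost"},
    {"groundwater_pumping", "recharge", "infrastructure"},
    {"water_rights", "cost", "allocation"},
    {"cost", "infrastructure"},
    {"recharge", "infrastructure", "allocation"},
]

# Count co-occurrences: every unordered pair of codes in the same segment.
cooccurrence = Counter()
for segment in coded_segments:
    for pair in combinations(sorted(segment), 2):
        cooccurrence[pair] += 1

# The strongest pairs suggest which concepts stakeholders link together,
# i.e., the edges of a mental map like the one shown on the slide.
for (a, b), n in cooccurrence.most_common(5):
    print(f"{a} <-> {b}: {n}")
```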
What we've done is take these mental maps from stakeholders and give them back to the tool designers, to have them reconcile the stakeholders' mental maps with the engineering understanding. Similarly, in another study, scientists framed the issue of sustainability in a fairly simplistic, scientific, and technical way, looking at a balance of water supply and demand; that was what they interpreted as sustainability. This was a restrictive view when compared with other stakeholders', and it didn't necessarily allow our tool to open up discourse to novel solutions, because the solution set was constrained from the outset of the tool's design. The tool was redesigned after we reached this conclusion.

A couple of final points. We want to know how stakeholders and scientists can cooperatively develop these model-based decision support systems, based on the evidence of the research we've conducted. First, we need to recognize that this boundary work requires integrating your scientific concerns with other concerns, and, as I've said, constructing and deconstructing, framing and reframing these models. It takes a double-loop learning process through the multiple different communities to ultimately end up with a tool that meets the criteria we're seeking. In our particular case, we've used the Decision Theater at Arizona State University, which is a visualization and engagement environment, to present a generation of the tool, have it critiqued and deconstructed by the end users and stakeholders, and then move back to the designers for new generations of the tool. That helps us to refine the design of not only interfaces, like the interface Denise introduced for WaterSim, but it also allows us to do simulation modeling with our tool offline, using ensemble scenario analysis to create and analyze multiple different futures.

A couple of final slides. This one shows the results of a multiple scenario analysis where we're looking at hundreds of different simulations using our model and examining the mean response of the regional aquifer over time under the mega-drought scenario, which is a drought that we anticipate could last 30 years or longer. If we project out this mega-drought scenario, what would be the mean response of the aquifer in year 20 of a long-term drought in the western United States? We can also look at different policies, like growth management, demand management, water banking, and increased reclaimed water use, and examine the sensitivity of the outcome to the different policy choices. And we make sure that our assessment criteria are relevant to the decision makers. In a different scenario analysis study, we looked at the effects of different policy interventions, across a range of future environmental conditions, on population growth within different cities in our region. This shows the different spatial impacts of the same set of environmental conditions on a series of different cities within the same region, and it is a result of us learning from our stakeholders that they needed this kind of spatial resolution for the decision tool to be relevant to their decision-making processes. And finally, how do we assess the outcomes? We try to serve as an honest broker, we try to enhance the institutional capacity of our stakeholder partners, and we try to detect where our research has influenced plans, policies, and infrastructure decisions.
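As an aside on the ensemble scenario analysis just described, here is a compact, hypothetical sketch of the pattern: run many stochastic simulations of aquifer storage under a long drought, then compare the mean year-20 response across policy levers. All dynamics and numbers are invented; the real WaterSim model is far richer.

```python
import random

def simulate_aquifer(years=30, demand_cut=0.0, banking_kaf=0.0, seed=None):
    """One stochastic run of aquifer storage (kaf) under a long drought.
    demand_cut and banking_kaf are hypothetical policy levers."""
    rng = random.Random(seed)
    storage, trajectory = 10_000.0, []
    for _ in range(years):
        inflow = rng.gauss(300, 80)       # drought-reduced natural recharge
        pumping = 600 * (1 - demand_cut)  # demand management lever
        storage += inflow + banking_kaf - pumping
        trajectory.append(storage)
    return trajectory

def ensemble_mean(runs=500, **policy):
    """Mean storage trajectory over many stochastic simulations."""
    trajectories = [simulate_aquifer(seed=i, **policy) for i in range(runs)]
    years = len(trajectories[0])
    return [sum(t[y] for t in trajectories) / runs for y in range(years)]

# Compare year-20 mean storage across policy choices, as on the slide.
for label, policy in [("no action", {}),
                      ("demand management", {"demand_cut": 0.2}),
                      ("water banking", {"banking_kaf": 100.0})]:
    print(f"{label}: mean storage in year 20 = {ensemble_mean(**policy)[19]:,.0f} kaf")
```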
One example of this kind of outcome assessment comes from one of our partners, who told us in an interview study that the research and workshops associated with the center serve two critical roles for those involved in decision making. One is to operate as an honest broker and clearinghouse for data and concepts. Another is to help people understand, through the scenario analysis, that although predicting the future with precision is difficult, these organizations can benefit greatly by preparing for a variety of likely outcomes and looking at the effect of different decisions across that range of outcomes. And with that, I'll close my part of the webinar and turn it back over to Denise to moderate our questions.

Thank you, Dave. That was fantastic. You know, every time I hear you talk about the research you're doing, I'm reminded of some situation in the real world that I didn't have a name for at the time. It really clicked when you were talking about reconciling stakeholder views and the views of the engineers. I'm remembering a time when I was working with a police department that wanted to open data so that it would be useful for the domestic violence and sexual assault survivor advocacy community. We were convening an event to bring together the experts in the domestic violence and sexual assault field alongside the public safety folks and technologists. As we were preparing for the event, the data person in the police department was very proud because he had filtered out all of the calls that had been deemed to be not viable; normally, they just filter those out. But from the survivor perspective, having your call dismissed as not being true is actually an indicator that the department could use some additional training. So they went back to the drawing board and refiltered the data to include all of the calls that the engineers originally thought were not relevant. So I appreciate that perspective that you bring. And we have a few questions. Let's see, the first one is: in the first part of the talk, you provided many examples of successful pipelines, data engagement, development, and output. Could you please talk to the timelines these efforts take? How many weeks, months, or years do these types of successful engagement projects take? What is the quickest step, and what step can take the longest? Can you also speak to any difficulties in getting engagement in the early stages? Dave, do you want to take that one?

Certainly, yeah, I can start, and I can speak based on the experiences of our project and our collaborators in a couple of different examples. One of my goals in publishing the results of our social science assessment of our own tool building process is to cut the learning curve for others. Our goal is to cut the time for development and for the integration of these different perspectives into your tool building process, and we certainly hope that our experience is instructive for others, so that they can develop tools more rapidly by following the principles derived from the literature; we certainly hope to make things faster than they have been in our experience. Speaking specifically about WaterSim: this is a process we've been engaged in for a little under 10 years.
And I would say that the first generation of the tool was critiqued on all three of the major criteria. You have to have a thick skin, and you have to understand that you need to present the first version of your tool essentially as a straw man for the first round of engagement. That was our situation: the first version of the tool was basically just a way for us to elicit the concerns of the stakeholders. It gave us something to demonstrate to them, and a way to find out when they said, okay, here are the mismatches in terms of salience for decision making, and here are our assessments of the data and analysis you've included. That helps us to understand, for example, which data sets are institutionally accepted and endorsed. In some cases, we have included multiple different data sets in the tool, so that users could say, well, I want to run the model with this data set, which may be the most advanced science product, but I also want to run the tool and, instead of forcing the model with that data set, force it with this other data set, which may be more limited but is the one that everyone uses. Those kinds of learning occur early in the process. So I would say that our early experiences took on the order of years; our more recent experiences are taking on the order of months. In our current work, we're creating some new decision support tools for understanding the water-food-energy nexus in Arizona, and because we've been able to employ the principles developed through our research, we're developing tools within a six-month period that would have taken us two years a decade ago.

I'll add to that. When I was working for the data intermediary in New Orleans, we relied a lot on paper prototyping, which is a very low-fidelity way to get feedback from your audience quickly, and also card sorting, to understand users' mental models of what constitutes a category of information and what belongs together. Then, when I was working with the police departments, we would host data dives, and that was a really easy way for the police departments to get early feedback on the data they were producing and to build trust and accountability with their communities. There, it was a six-week planning process to bring together diverse stakeholders with complicated relationships to each other, but the boundary object was a draft data set. I think some of our police departments even asked participants to sign an agreement that they wouldn't share the data, because it was such an early draft. So those are some of the ways we've tried to shorten that early engagement. Currently, in the work we're doing with the World Resources Institute, the awesome tech team is building a machine learning model, but we're talking about a way to build a utilitarian front end, something quick and dirty and sort of ugly, that would allow us to have a more informed conversation with stakeholders very early on and give them a glimpse into what the model can do. I think the key there, if you're building early prototypes, is not to let the design outpace the science. If your science is still coming together, then your interface should not look smooth and finished.
Having it look like it's also coming together sends the right signal to your stakeholders: now is the time to give feedback, because the tool is still being made.

Yeah, and I'll just address the other parts of the question here. Our questioner also asks about difficulties in getting engagement in the early stages, and how easy or difficult it is to get people invested before you have something really tangible to show them. That is an interesting question, and it's an interesting challenge: when is the right time to engage the stakeholders in this process? We have concluded that it's never too early. Even if you don't have a prototype, as Denise mentioned, you can use analog approaches to soliciting input. We've done it, as I mentioned, through interview studies where we ask people open-ended questions that help us understand how their mental models of these issues work. So even if you don't have an early-stage prototype, beginning to elicit from stakeholders their decision priorities and decision-making processes can be critical. When you're discussing an issue like climate risk adaptation, ask the targeted end users: tell me the three most important decision documents, decision processes, and decision points in time that your organization or agency faces in the next year to 18 months. That way you can reconcile the timeline of your tool development and analysis to hit those decision time points and documents. I've seen many examples where the tool was not ready in time for when the decisions were being made, which caused a mismatch and left the tool unused, where just a simple set of questions could have shaved enough time off the process to make the tool much more salient to the decision makers.

It looks like we have another question here: how do we improve engagement with the general public? We meet with key stakeholders in these decision-making processes and tool-building exercises; however, the general public is often very much disconnected. That's a great question, thank you. What we have done is develop a suite of tools that are all based on the same underlying logic and process, and then develop different interfaces and different delivery mechanisms for the different audiences. Like our questioner, we focused much of our effort on key decision makers. What we've done in more recent years, for instance, is a new partnership with the Smithsonian. The Smithsonian has something called Museum on Main Street, which takes Smithsonian-quality exhibitions to rural communities around the United States. They asked us to participate in their most recent exhibition, called Water/Ways, which is currently touring the United States; they had seen our presentation of WaterSim and asked us to develop a model geared toward the general public. We engaged a different community of designers. We went to user experience, museum studies, and science communication scholars, and we said: we want to maintain the credibility of the model, so we want the same data and analysis that we use with our professional decision makers, but we want to create more of a gaming environment and an interface that is cleaner, more visually appealing, and reduces the complexity of the model on the front end. So we basically created a new front end that still runs the same back end.
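That "same back end, different front ends" pattern is easy to see in code. Below is a tiny hypothetical sketch: one shared model function, an expert interface exposing every lever, and a museum-style interface exposing a single simplified choice. The function names and toy logic are invented for illustration, not ASU's code.

```python
# One shared model core ("back end"); two presentation layers ("front ends").
# Everything here is a hypothetical illustration of the pattern, not ASU's code.

def water_model(reclaimed_fraction, demand_growth, drought_severity):
    """Shared simulation core: returns a projected supply shortfall (toy logic)."""
    demand = 100 * (1 + demand_growth) ** 20
    supply = 80 * (1 - drought_severity) + reclaimed_fraction * demand
    return max(0.0, demand - supply)

def expert_interface(**levers):
    """Professional front end: exposes every lever, returns raw numbers."""
    return water_model(**levers)

def museum_interface(choice):
    """Public front end: one simple choice, friendlier output, same back end."""
    presets = {"save more water": 0.3, "business as usual": 0.1}
    gap = water_model(presets[choice], demand_growth=0.02, drought_severity=0.2)
    return f"If we '{choice}', the projected shortfall is about {gap:.0f} units."

print(expert_interface(reclaimed_fraction=0.25, demand_growth=0.02, drought_severity=0.2))
print(museum_interface("save more water"))
```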
So the users get the exact same scientific credibility, but they get a model that's more salient to them as museum visitors. It's all about adaptability, and Denise talked about this in one of her case studies of successful examples: designing the core elements that can be maintained, and then delivering the fit-for-purpose product, in that sort of consulting model, in the last phase.

And I'll add that one of my favorite approaches domestically is out of Chicago, called the Civic User Testing Group, or CUT Group. They have a methodology for matching members of the public with civic technologists in local government, or sometimes boundary organizations, so that regular folks with different demographics are testing your tools and giving you feedback on them. That's a lightweight, structured way to do engagement early and often with diverse members of the public. And we have another question now, from Bob Gradek: how do you learn whether the tools you built have been used to influence policy or support decision making? Well, this is a tough one that I know all organizations are struggling with. Dave, I heard you have a list of the outcomes of your tools; do you want to talk about how you created that list?

Yeah, absolutely. Another area of academic literature that would be very useful for our community to engage with comes from studies of science policy. This is a common question: how do you understand what the impacts and outcomes are? Because we're interested, of course, not only in impacts on decision making, but in impacts on outcomes like reducing climate risk and enhancing water sustainability. There are a variety of process tracing techniques, developed within the policy sciences, that help us determine the influence of our work, first on decisions and second on outcomes. So I'll start with decisions. Some of this is as simple as bibliometric analysis. We look at plans that have been developed, we look at reports being written by the organizations we're partnering with, and we look for citations to our academic literature. We look for references to the engagement processes we've coordinated in our honest broker capacity. Actually, most decisions are documented, particularly in the public sector, which we deal with quite a bit, and in that documentation they provide a rationale and an evidence base for their decisions. We simply look at those decisions and search for evidence of our work having influenced those plans and policies. Secondly, we ask our decision makers directly. Through questionnaires and interviews, we elicit from those policymakers when and where the work we've been engaged with them on has influenced their policymaking, and how it has done so. For example, in one of the papers we published, we did extensive interviews with our decision makers, and they listed a series of ways in which the engagement processes and the decision tools have affected their decisions. That is really critical, because we learned through that process that we were affecting their decisions in ways that we had not initially anticipated or designed our work to do.
We actually redesigned our work away from thinking of the tool as an operational decision support tool and toward thinking of the tool as being more effectively used for exploration and decision exercises, allowing the decision makers to engage in what-if scenarios and test policy interventions, especially policy interventions that don't lend themselves easily to pilot programs or experimentation within the policy process. As a policymaker, you don't want to experiment with, say, quadrupling water rates just to see what the effect is; that's not something policymakers want to test out in a pilot project. There are some policies that they don't see as easily testable, and we can test those in a simulation environment. So there's mutual learning about what policy options they would like in the model; they may not even be the policy options most presently being discussed, but they might be unusual policy options that they would like to test. With these process tracing techniques, we ask the decision makers directly, and then we look for effects on outcomes within the system, within sustainability, and we sometimes have to wait through significant time lags to be able to document those effects.

One thing I've seen organizations do is host an annual or biannual award for the best use of the tool or data model, and in the process of collecting all those submissions, you end up with a treasure trove of use cases showing how your data were used. So that's another savvy approach. We have another question, and it is: how do you continue to obtain adequate funding to maintain and enhance these tools? That's been a struggle, I think, for boundary organizations since the beginning of time. I'll answer first, and then I'll turn it over to you, Dave. I think there's some promise in this idea of tool-enabled consulting, where you have a ready platform in which you've made the core investment, perhaps with philanthropic funds, and you can customize your engagement with different parties to provide a higher touch and more actionable information. That might be a paying contract that you put together, but the core of the development has already been done with your data visualization platform and development modules. Dave, do you have thoughts on that?

Yes, this is a challenge, absolutely, and it's a great question. From our experience, what we've attempted to do is link the concerns of the stakeholders with compelling science questions. We have then solicited funding from the federal science agencies, in our case particularly the National Science Foundation, but the EPA and other federal funding agencies are also interested in supporting this work, as are philanthropic organizations. So for us, it's looking for the compelling science questions that might be interesting to an agency like NSF or NOAA. Those might be around the computer science; they might be around the data visualization and visual analytics; they might be around the social science of stakeholder engagement and user experience studies. Wherever we can submit grant proposals to support a component of the work, we get a stream of funding that way. Secondly, we get support through the stakeholders we're engaging with over the tool development. Oftentimes that is in-kind support.
That would be staff support, data sharing, or the ability to work within the specific problem context, which often makes our grant proposals more compelling. And thirdly, we look to philanthropic organizations, especially for the earlier question regarding outreach to the general public through our K-12 education and museum education programs. Those are often excellent opportunities to solicit funds from nonprofit and philanthropic organizations, and we've had success with those as well. So we tend to look at it as a diversified portfolio of funding, with each funding stream supporting the component of the work that is most relevant to that particular funder.

That's great. And then we have another question, and Dave, I think you'll be a good person to answer this as well: what happens, or could happen, if a new stakeholder is found or comes forward and wants to be involved after a lot of initial work has already been done? Will this derail a project? I just briefly want to comment on the concept of linear versus circular development. Often we think about a product being done at some point, but these boundary tools very rarely actually reach a point of being done; it's a constant, iterative process. In the projects I've worked on, when you get feedback from a new stakeholder, that's an opportunity to either put that feedback into the existing tool or perhaps even fork the tool and create a custom interface for the needs of that new stakeholder. That's what we've done in the past. And Dave, I'd love to hear your thoughts on that.

Yeah, we had exactly this experience in developing our tool, in that in our initial design and stakeholder engagement process, we had failed to really reach out to a full and diverse set of potential stakeholders. In our particular case, it was the Native American tribal communities within our region. The Native American tribes are extremely important, not only socially, culturally, and economically, but also as a major policy player within the water resource management domain in the western United States in general and Arizona in particular. In recent decades, through negotiations, court settlements, and Supreme Court cases, the Native American tribes have firmed up their historical rights to water supplies, and they have thus become a very important part of the political process of allocating resources across our region. Our initial efforts didn't adequately address the concerns of this particular community, and that was one of the major critiques of the legitimacy of the initial tool. So in subsequent revisions, we engaged in targeted outreach to this community and incorporated different policy options into the tool based on the concerns of that stakeholder. But Charlotte asks an excellent question, because there are a wide range of perspectives, and it is difficult to incorporate all of those perspectives effectively. So what we'll do in the early stages of a process, for example in our food-energy-water nexus tool, is a stakeholder analysis: we identify the groups that are interested in the topic, that have power and influence, and that are affected by the decisions made by the policymakers in those different sectors.
Then we attempt to categorize those groups into different social groups and to address concerns that cut across many of the different social groups and stakeholders we're engaging. We try to look at what the concerns are and how we can effectively develop a tool that meets as many of the broad concerns as we can, rather than focusing on a single group and asking only whether we have achieved legitimacy for that group.

Thank you. That makes a lot of sense. And that's the end of our questions; those were awesome questions, thank you, audience. Please note that today's webinar is the first of a series of three. The next will be on open development: the role of open data, open APIs, and open source in advancing the field. The slides from today's webinar and a recording of the webinar will be made available in the Resource Security section of newamerica.org, and we'll also send the recording and a write-up by email to everyone who registered for the event. Special thanks to my collaborator Dave White at Arizona State University and to the team behind the scenes; on the Phase Zero project, we have Elise Campbell and Rachel Zimmerman. So have a great rest of your day, everyone, and we look forward to continuing the conversation. Thanks. Thanks, everyone.