Thank you. So I'd like to start by reflecting on where we are in the climate challenge right now. 2015 and 2016 saw record-breaking temperatures at the global scale again. Those were the first years to exceed a one degree Celsius temperature increase at the global scale, with the first months of 2016 scraping 1.5. We're in a world where the impacts of climate change are widespread. We see them in almost every aspect of the human endeavor. But we're increasingly seeing progress in the realm of climate solutions as well. The last few years have seen a slowing of the global increases in emissions. We've seen acceleration in the deployment of clean energy technologies. And we now have the first universal climate treaty, the Paris Agreement, which entered into force last November. As the world slowly tips into an era of climate solutions, that tipping will be one with lots of fits and starts and muddling through. But we know that knowledge will continue to be an important foundation for action: understanding what the risks are, what our options are, how we can learn and adjust through time, and how climate intersects with other priorities. So today I'll reflect on the process of assessment, where assessment is an organized effort to take stock of what we know and what we don't and make it useful for decision-making. I'll draw examples from the Intergovernmental Panel on Climate Change and tie that to projects we have ongoing here at Stanford. This is a visualization of the process of assessment. A key feature of climate change assessment is that no one paper is going to solve the climate challenge. If it were, it would be beyond Nobel Prize territory. So what's happening in assessment is that the experts who come together are taking stock of really diverse evidence on any topic, pulling from different disciplines, different modes of analysis, different ways of viewing the world. 
And most topics related to climate change don't just span different disciplines. They step from science into society. So for assessment to be effective and to have influence, the participation of decision-makers in almost every aspect of this process is essential. A final lesson that we've seen in assessment activities happening over many decades now on climate and far beyond is that if the products are to have influence, make a difference out there in the world, this whole process is as important as the final products themselves. Through the course of the presentation, I'll dive into a few different aspects of this assessment process. First, highlighted in red here, how to think about the integration of evidence that is inherent in assessment and how expert judgment is applied in that process. The basic idea is that no numerical model-based result is a prognostication of the future by definition. And to figure out how it's relevant to decision-making, we need to pair those quantitative results with more holistic backdrops of understanding. A second key aspect of assessment is that it can unfold possible futures, worlds people might want or worlds they might get and need to deal with, and how those futures intersect with ongoing decisions and actions. And finally, there are the interactions that happen between experts and decision-makers in the assessment process at the very start, figuring out what questions knowledge might hope to be able to answer through to the finalization of the assessment products themselves. Through the course of the presentation, I'll draw examples from the Intergovernmental Panel on Climate Change in particular, so I'll just take a few moments to introduce it. You can think of the IPCC as a grand partnership between the governments of the world and the scientists of the world. 
The governments essentially say: you scientists, if you follow our rules, we will take your evaluation to be a definitive characterization of what we know right now and what we don't. In terms of those rules, you can think of them as falling into four broad categories. Sorry, I just realized these slides are exceptionally stretched out. Sorry about that. There are four broad ways that the IPCC rules unfold. First of all, there's the mandate to be comprehensive. Experts aren't just representing their own knowledge in the process, their own results. They're taking stock of all of the knowledge that exists. For the Working Group 2 report we led out of Stanford, that meant there were nearly 15,000 publications in the assessment. Second, the major way that this comprehensiveness is achieved is through multiple rounds of monitored scientific review, where anyone from around the world can submit comments on the reports. That meant that there were 50,000 review comments on the report we developed here. Third, the most unique part of the IPCC process is the line-by-line government approval, a UN-style session where scientists and governments work together on each sentence in the summary documents, determining if it is the most accurate, clearest articulation of what we know on that subject right now. Finally, all IPCC authors have tattooed on their eyeballs the mandate to be policy relevant but policy neutral. In terms of how things get really exciting in these government approvals, I would argue that policy neutrality is an impossibility when it comes to assessing climate responses, but nonetheless, that's the goal. The IPCC has been unfolding for decades now. The first report was released in 1990, stepping through to the present. The fifth assessment report came out in 2013 and 2014. 
The work is organized in three working groups: the first one looking at the physical science basis, the second one at impacts, adaptation, and vulnerability, and the third one at mitigation. You can think of this IPCC assessment process as representing a treasure trove of experiences in assessment. In terms of some of the things we learned in the fifth assessment report, one is the way you can quite easily create a media snowball around a so-called global warming slowdown that might not have actually happened. Pushing science into the limelight very rapidly carries risks as well as opportunities. But we also saw ways that the IPCC assessment in this last report underpinned key aspects of the Paris Agreement, for example, the structured expert dialogue that unfolded over years, considering the long-term temperature goal and evidence towards it. So I'll organize the presentation in three broad chapters. First, considering the integration of evidence that happens in assessment. Second, considering the ways that expert judgment can be applied through defined frameworks. And finally, considering that contested territory of the line-by-line government approval process that happens in the IPCC. In each of these chapters, I'll consider what has happened in the IPCC, its strengths and limits, and what we can learn from it moving forward. And then I'll jump to Stanford and reveal some of the ways we've been pushing these different dimensions of assessment further in research here on campus. So, starting with the integration of evidence. I'll particularly describe the way we used key risks and reasons for concern as two integrators in the IPCC fifth assessment report. These integrators were really important for shaping understanding, across sectors and across regions, of risk in a changing climate at the global scale. These two approaches underpin the high-level statements about how risks evolve with continued high emissions of heat-trapping gases. 
These were statements like: increasing magnitudes of warming increase the likelihood of impacts that are severe, pervasive, and in some cases irreversible. This key risks and reasons for concern approach could extend across the hazard spectrum, starting with things like extreme events, the sharp end of the climate system where we often see some of the most profound damages, economically and also in terms of human life. But the key risks and reasons for concern approach could also extend into more abstract dimensions of climate change risk, for example, the potential for large amounts of sea-level rise unfolding over centuries. So why is integrating evidence hard in assessment? To emphasize why it's hard, I'm going to draw on one example, in particular sea-level rise. What this plot shows is the IPCC assessment of sea-level rise, starting with the 1990 report and extending to the fifth assessment report published in 2013 and 2014. In each of these bars, we see an expert assessment of the amount of sea-level rise that could occur in year 2100 compared to a variety of late 20th century baselines. Something interesting happened in the fourth assessment report, the 2007 report. If you had asked experts at this time what they thought was happening with the upper estimate of sea-level rise that could occur, they probably would have put the number above the 2001 assessment. But instead, this is the number that was reported in the assessment process. This is a confusing moment. Risk was actually higher, but they reported this lower estimate. So what happened? At this time, there was an increase in our understanding of ice sheet loss, the dynamics of the ice sheets, Greenland and Antarctica. And what became clear all of a sudden, through remote sensing, on-the-ground observations, and the dramatic collapse of Larsen B in the Antarctic Peninsula, was that sea-level rise was actually happening faster than the scientists could explain on the basis of their process-based models. 
They knew they weren't able to determine the processes that were leading to ice sheet loss at that moment. So what did they do in the assessment? They decided to report the numbers they understood really well and leave out the dynamics of ice sheets. As one ethnographic study of the author teams put it, this was essentially the authors deciding that if you don't know what the number is, you have no business giving it, a classic Type 1 error aversion that you see in these types of assessment processes. So this is a classic challenge in assessment. What do you do when, by definition, numerical model-based results aren't a prognostication of the future? In parallel to the sea-level rise assessment, we've seen through time something very different: reasons for concern. And what I'll build in this section is how this type of integrative framework provides a complement to quantitative model-based results. In this framework, the authors could discuss and evaluate the high risk of ice sheet loss, the potential for large magnitudes of sea-level rise that would be associated with extreme societal vulnerability and exposure and potential irreversibility. These reasons for concern built on the key risks, and both of them were underpinned by a focus on risk. Increasingly there is recognition that responding to the climate challenge is very much a challenge in understanding, managing, and reducing risks. The basic idea here is that we're never going to be in the position where we can say exactly what the impacts of climate change are at any one time in any one place, but we know that the odds are shifting. We defined risk in the AR5 really broadly, to encompass different definitions of risk. Risk was defined as the potential for consequences where something of value is at stake and the outcome is uncertain. You can think of risk of climate change impacts as emerging from the overlap of three different factors. 
First you need the hazards, the physical trigger from the climate system, whether that's a heat wave, a drought, or a flood. But the climate system alone is not enough to give you impacts that matter. It's how those hazards intersect with the vulnerability and exposure of people and ecosystems. You can think of this as being equivalent to probability times consequences. This focus on risk is helpful for a variety of different reasons. First of all, it emphasizes that much of what's at stake in a changing climate, both in terms of the risks of impacts but also the risks of responses, comes down to complex interactions. These are interactions where some of the things that are hardest to deal with will remain at the periphery of our analytical capabilities. Second, this focus on risk helps tie our current experiences with the climate system to how those outcomes may change into the future. Here in California, for example, we've thought a lot about our ability to deal with drought, or how to deal with heavy rain events following an intense period of drought. Third, a focus on risk helps emphasize that low probability outcomes matter. When we get in a car, we feel comfortable buckling our seatbelt not because we expect to get in an accident, but because that is a low probability outcome that, if it were to occur, would carry high consequences. And finally, this focus on risk is helpful because it really emphasizes that the entirety of the human experience is about decision making under uncertainty, not just our climate responses, and there are many different tools that we can borrow from security and insurance and business and law and apply to the climate challenge as well. So this is the focus on risk. The key risk layer was then a series of criteria that could be applied in the assessment to integrate evidence. These were criteria really aiming to identify the outcomes most deserving of society's attention in a changing climate. 
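As an editorial aside, the hazard, vulnerability, and exposure framing described above, and its rough equivalence to probability times consequences, can be sketched as a toy calculation. This is purely illustrative, not an IPCC method; all the hazard probabilities, intensities, and damage numbers below are made up.

```python
# Toy sketch of risk as the overlap of hazard, exposure, and vulnerability.
# Expected risk ~ sum over hazards of P(hazard) * consequence, where the
# consequence scales with what is exposed and how vulnerable it is.
# All numbers below are purely illustrative.

def expected_risk(hazards, exposure, vulnerability):
    """hazards: list of (probability, intensity) pairs for climate triggers.
    exposure: value of people/assets in harm's way (arbitrary units).
    vulnerability: fraction of exposed value lost per unit of intensity."""
    return sum(p * intensity * exposure * vulnerability
               for p, intensity in hazards)

# A heat wave, a drought, and a flood, with annual probabilities and intensities.
hazards = [(0.10, 2.0), (0.05, 3.0), (0.02, 5.0)]

baseline = expected_risk(hazards, exposure=100.0, vulnerability=0.10)

# "The odds are shifting": same society, but each hazard becomes twice as likely.
shifted = expected_risk([(2 * p, i) for p, i in hazards], 100.0, 0.10)

print(baseline, shifted)  # in this linear toy model, doubled odds double the risk
```

The point of the sketch is only the structure: the climate hazard term alone gives no impact; risk appears when the probability term is multiplied by an exposure and vulnerability term, which is why adaptation (reducing vulnerability or exposure) enters the same equation as mitigation (reducing hazard probabilities).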
These were particularly targeting severe impacts relevant to dangerous climate change, where the ultimate objective of all of the international climate negotiations is to avoid dangerous anthropogenic interference. These key risk criteria covered things like the importance of the affected systems, the potential for irreversibility, and limits to responses. And the key risks were then evaluated through time. What this shows here in the black squiggly line is the warming that has occurred to date, that one degree Celsius of warming, and projected warming for a scenario of continued high emissions that could take us to over four degrees of temperature increase in this century, as compared to a scenario of very ambitious mitigation which could keep us under two degrees Celsius with a good chance. And in terms of time frames for responses, what we see is that over the next few decades, even if we're incredibly ambitious about mitigation, reducing our emissions of heat-trapping gases, there's warming that is baked into the climate system to which we all need to adapt, or suffer, as John Holdren likes to say. At the same time, the choices we make now about mitigation are pivotal for shaping the amount of warming that happens in the long term, a longer-term era of climate options. The key risk assessment considered how risks evolve through these different time frames, stepping from the present to the near term with that baked-in warming, to the long term with those climate options, considering risk levels with a continuation of current levels of adaptation and also risk levels with substantial investments in adaptation. Recognizing that when we think about how much we could prepare for impacts into the future, we have to grapple with many different kinds of constraints. What will the access to financial resources be? What will the limits to thermal tolerance be, especially when humidity is changing simultaneously? 
We also have to consider the ways that the response portfolio may expand, as people become better at moving assets or economic enterprises. The key risk assessment looked at 142 key risks across sectors, water, ecosystems, urban and rural areas, livelihoods and poverty, conflict, and across all regions of the world, from Africa to small islands, the Arctic, and North America. And what this really helped emphasize was that there are a lot of commonalities in terms of what's at stake in a changing climate. These are things like risks of food and water insecurity, risks coming from extreme events, risks of biodiversity and ecosystem loss. But the particularities that emerge in different contexts provide openings for responses. These key risks were then assembled together to support the global scale assessment of five different reasons for concern. This version of the reasons for concern figure is not the one we published in the IPCC report. Instead, in this version we've lifted up the hood and shown how those 142 key risks supported a global scale assessment of risk in a changing climate, where these icons on the embers are individual key risks and these bubble gum drops here represent eight different aggregations of the remaining key risks. So now, what are the reasons for concern? There are five global scale reasons for concern because values matter. There's no one way to measure what's at stake in a changing climate. Different people put different emphasis on outcomes for the present versus the future, the rich versus the poor, things that you can put in monetary terms versus things that you can't. 
Across the reasons for concern, whether they have to do with geographically confined systems like coral reefs, or the unfairness factor, the unevenness of impacts, or the potential for game changers, we see that risks go up substantially, to those levels of severe, pervasive, and potentially irreversible outcomes at the global scale, with continued high emissions in this century. But in terms of some of the lessons for key risks, what we see also is that there are relatively few key risks that appear on the embers. There are relatively few key risks where we can say: here's where we go from noticing impacts to severe impacts, or here's where we go from severe impacts to impacts to which we really cannot adapt. This has a lot of implications, in turn, for how we think about risk management. Do we adopt a wait-and-see approach? Maybe things will be fine with the West Antarctic. Or are we precautionary? Are we cognizant of the fact that we don't know exactly where different thresholds lie? So this is a global scale integration framework developed on the basis of multiple criteria. And what I'd like to do now is make a first jump from global scale assessment to Stanford. And what I'll emphasize is the way that multi-criteria frameworks for integration can be helpful not just at this global scale, but also in more specific contexts. So, in work led here at Stanford by Miyuki Hino, a second year PhD student in EIPER, she's taken stock of managed retreat, where managed retreat is the strategic relocation of people or assets, or the abandonment of land, to manage natural hazard risk. Managed retreat is a controversial adaptation option, mostly because people, for example, don't want to leave their homes until they think their lives are in danger, and then they really want out of there, but the government resources may not be available. There are 27 well-documented cases of managed retreat to date. 
These range from communities that have been trying to relocate for decades with extremely little success, to buyouts from FEMA post-disaster. And what Miyuki did was a multi-criteria evaluation of these 27 ongoing or completed efforts to move, which have relocated 1.3 million people to date. And on that basis she developed a conceptual model, a conceptual model like the reasons for concern that, almost by definition in being a conceptual model, is a gross simplification, but that nonetheless is really helpful for understanding the full landscape, for understanding some of the aspects that fall between the cracks of individual lines of evidence. So this conceptual model is oriented towards managed retreat as a negotiation, where you have residents who either are initiating the move or are not, who view the risks as tolerable or intolerable at the start, and who are in interaction with an implementing party, typically a government. And what we see here is that for something like managed retreat, where there's this contested understanding of whether people want to move or not, where there's a high upfront cost and then it's very cheap after that, our evaluations of physical risk or the economic costs are not necessarily a very good predictor of whether managed retreat ever makes it onto the table, or, if it does, its likelihood of success. But viewing managed retreat as a negotiation through this conceptual model, you have a better understanding of the likelihood of implementation. For example, these self-reliant Alaskan villages or small islands in the Pacific that have had a very hard time implementing retreat, as compared to circumstances of mutual agreement, like those FEMA buyouts, where alignment of interests increases the likelihood of an outcome occurring. And then, over here, in terms of mandatory resettlement, where we've seen the most people move to date, but where there's also potential for debated outcomes if the incentivization doesn't happen well in implementation. 
Okay, that's chapter one. In chapter two, which is much shorter, I'll consider the process of expert judgment in assessment. Why is this important? As scientists, we're rarely in a circumstance where we can say we know the outcome with a hundred percent certainty. We use lots of fudge words: something may happen, it is likely, it is possible, it's not inconceivable, etc. But in assessment, what you really want to do is come up with conclusions that can be compared across very different disciplines and very different topics. So what the IPCC has done over the decades is use different categorizations to communicate the level of expert judgment around findings. The third assessment report was the first with guidance that was applied consistently across all of the working groups. That guidance was developed by Steve Schneider and Richard Moss. Stepping forward to the fourth assessment report, that guidance was revised. So what did this 2007 report look like? Here we see the three working groups, for the physical science basis; impacts, adaptation, and vulnerability; and mitigation, and then the synthesis of them, and the uses of different scales for characterizing expert judgment. Working group one used a lot of likelihood, describing the probabilities of different outcomes. Working group two used a little bit of likelihood but a whole lot of confidence, which was a lighter probabilistic scale that drew back to that Schneider and Moss guidance. And that Schneider and Moss guidance had described expert judgment as a process of Bayesian updating, where these are subjective probability judgments. Working group one wanted nothing to do with subjective probability judgments, so they stuck with likelihood, whereas working group two used this confusing mixture of both of them. 
And then, finally, working group three, studying mitigation, said behavior is fundamentally different from thermodynamics, and there's no way we're going to assign probabilities to technology development through time, the likelihood of different policies unfolding, governments collapsing, etc. So what you had at this time in the IPCC was an academically interesting mixture of different approaches lining up with different types of evidence, but it was super confusing for readers of these reports. So at the start of the fifth assessment report cycle, in an effort led by Michael Mastrandrea, now at Carnegie Science, we revised the guidance. In particular, we clarified the relationships between these three different scales, emphasizing that evidence and agreement was always the base, on which confidence on a qualitative scale could be used to characterize findings, and finally, where possible, there was likelihood, the likelihood of different outcomes occurring. And this worked. All working groups used all three scales, and finally there wasn't confusion about what's confidence and what's likelihood, and there was a much better match between the nature of evidence in each working group and what scale was applied. But I think what's interesting also, in reflecting on how this worked, is that it was a big step forward for the IPCC, but it also was a complex compromise framework. We started into this process with every working group saying, we will not abandon our favorite scale, but we're willing to work from there. And you can actually imagine assessment moving into the future, where IPBES or the National Climate Assessment has picked pieces of the IPCC scales, such that you could push this towards a simpler, more rigorous framework by collapsing those three scales potentially into two, potentially even into one. 
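For readers unfamiliar with the calibrated likelihood language, the terms map onto fixed probability ranges in the IPCC guidance. A minimal sketch of that mapping follows; note that the official scale uses overlapping ranges (for example, "likely" is 66 to 100 percent), and collapsing them into disjoint bands, as done here, is a simplification for illustration.

```python
# Calibrated likelihood language from the IPCC uncertainty guidance.
# The official scale uses overlapping ranges (e.g. "likely" = 66-100%);
# here they are collapsed into disjoint bands for illustration.

def likelihood_term(p):
    """Map a probability p in [0, 1] to an IPCC-style likelihood term."""
    if p > 0.99:
        return "virtually certain"
    if p > 0.90:
        return "very likely"
    if p > 0.66:
        return "likely"
    if p >= 0.33:
        return "about as likely as not"
    if p > 0.10:
        return "unlikely"
    if p > 0.01:
        return "very unlikely"
    return "exceptionally unlikely"

print(likelihood_term(0.95))  # very likely
print(likelihood_term(0.50))  # about as likely as not
```

The value of a fixed mapping like this is exactly the comparability discussed above: a "very likely" finding in one working group means the same probability range as a "very likely" finding in another.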
So in this realm of expert judgment moving forward, I think we see ways that frameworks for evaluation will continue to be really important, and the sweet spot is aiming for something that is both simple and rigorous: easy for experts to understand, easy for readers to interpret. A realm of assessment that's received relatively little attention, in the IPCC context in particular but even elsewhere, is that individual expert judgment is distinct from collective expert judgment in group deliberations. What the IPCC process has represented is essentially collective expert judgment, and we've had very good evidence for decades, not just for experts broadly but for experts in the IPCC, that when you put experts around the table you tend to get both overconfident and conservative judgments. You're really sure it's not going to be as bad as you might think it is if you were to elicit experts individually. This dimension is actually really important, because you get a lot from those collective group deliberations, but you don't want them to overly prescribe the range of possible futures, and I think there are a lot of opportunities, in the IPCC and outside, for combining in the best of individual expert judgment. And finally, communication is the ultimate goal here, and it's really easy for very complex frameworks of expert judgment to push towards very opaque, overly decorated statements, and that's in some ways the key tension to grapple with. Okay, in the second chapter I'm again going to make a jump to Stanford, and I'm going to make that jump in a part of assessment where the IPCC process in some ways punted on expert judgment in the last round. This is a schematic visualization of results from integrated assessment models reported in the working group 3 report. And what they show in the black, well, first of all, here you've got emissions to date. 
And then what they show in this black line is the median estimate across scenarios, for scenarios that are likely to keep warming below two degrees Celsius. Given the sheer carbon budget math, in some ways the most breathtaking aspect is that they reach net zero emissions at the global scale in the second half of the century. But in the IPCC reporting, which tended to be this black line, what you also missed was the degree to which there were direct emissions that remained, and the gross negative emissions were substantially greater than these net negative emissions. What happened in these stylized economic models is that there was a huge reliance on negative emissions technologies, in particular biomass energy paired with carbon capture and storage. This huge reliance essentially doubled the carbon budget that remains, and made it cheaper and easier, within the model environment, to keep warming below two degrees Celsius. There's been a lot of attention to the expert judgment that happened here, where the authors have said it was a lot easier just to report these numbers than to really make the judgments in terms of what they entail, essentially the can-kicking ethics of planning now on that amount of negative emissions. So what we've been doing in a series of projects is right-sizing our expectations around carbon dioxide removal: the way that there are stewardship-based approaches, whether it's improving our management of forests or restoring our coastal environments, that tie tightly to co-benefits and that can be deployed in the near term, but where we probably don't want to bet on there being 12 gigatons of carbon dioxide removal at the end of the century. 
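The way a large negative-emissions assumption loosens the arithmetic can be seen in a toy carbon-budget calculation. The budget and removal quantities below are hypothetical round numbers chosen for illustration, not the values from the AR5 scenarios.

```python
# Toy carbon-budget arithmetic, illustrative numbers only.
# If cumulative NET emissions must stay within a budget B, then assuming
# cumulative removals R lets cumulative GROSS positive emissions reach B + R.

remaining_budget = 500.0   # GtCO2 of net emissions compatible with a target (hypothetical)
assumed_removals = 500.0   # GtCO2 of assumed negative emissions this century (hypothetical)

allowed_gross = remaining_budget + assumed_removals

print(allowed_gross / remaining_budget)  # with removals equal to the budget, gross headroom doubles
```

This is the sense in which betting on end-of-century removals effectively enlarges the budget available for near-term gross emissions, which is also why the ethics of that bet matter.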
We've been looking at the potential for biomass energy paired with carbon capture and storage, and again the way that there are easy near-term next steps that are cheap in many options, but again where we don't necessarily want to bet on there being 12 gigatons of carbon dioxide removal at the end of the century. And finally, we've been thinking about things like direct air capture, where that RD&D agenda is incredibly important in the near term, where it may play a big role in the long term, but again where we don't necessarily know when we might be able to plan on 12 gigatons of carbon dioxide removal, as we saw in those IPCC results. In this last part of the talk, I'll focus in on the science policy interactions that happen in the IPCC, in particular the interactions that happen in that government approval of the IPCC summaries for policymakers. These approval sessions are UN-style sessions where the scientists are on the podium, the co-chairs of the working groups are leading the session, and the governments are out there in a sea of government flags. There are 195 governments in total, and usually about 130 or more of them show up. You put sentence number one from the summary for policymakers up onto the board and it goes open for comment. Any government can raise a question or a concern and pressure-test the statement, and if things are really getting hairy, you discuss that sentence until there's agreement in the room that it is the clearest, most accurate articulation of what we know on that topic right now. It's a highly formalized interaction. There are rules of the game, and there are processes like contact groups if you can't get to consensus in the full auditorium. 
There are things like informal huddles, here for example, where we see at the very center of this circle Brazil, South Africa, Switzerland, the US, and Ireland, and peeking in from the shoulders we see some of the scientists, like Rob Stavins, a very prominent, distinguished economist from the Harvard Kennedy School who's going to reappear in a moment. These have been very challenging processes through time, but they also have been essential for the ownership of the science and the influence of the IPCC process. Jumping back to 2014, the working group three approval, looking at mitigation of climate change right in the run-up to the Paris agreement, met with a lot of challenge in this approval process, and in fact there were some substantial failures of the process. Consensus was not achieved for large parts of that document. In the media just after, there was a lot of to-do about what happened in that approval plenary. For example, a series in Science asked: did the summary for policymakers become a summary by policymakers? Rob Stavins, in a blog and in a series of letters back and forth, asked: is the IPCC government approval process broken? Other prominent economists also said in the media that they were still shaking and depressed personally from this episode. Just two weeks earlier, we'd had our working group two approval, and it seemed like some of the working group two authors, by contrast, had drunk the Kool-Aid. John Barnett, a human geographer, said: I'm awestruck, I've never seen anything like it, and I doubt I ever will. And I know the full context isn't here, but his connotation was positive in the statement. The working group two co-chairs pushed back and said: this is not about scientific prowess, it's about navigating a social process, and if you want to guarantee failure, you should say things like, I'm a scientist and therefore I'm right; governments will really show you who's in charge of this process. 
Stepping back in time, Steve Schneider's 2009 memoir was entitled Science as a Contact Sport, where the contact sport was this approval process. Stepping back to 2001, similarly, there were questions about whether these processes were consensus science or consensus politics. So despite all of the attention to these invigorating and grueling and unique government approvals, no one had actually ever evaluated how documents change as they go through government review and approval. So that's what we did, and we found, not surprisingly, that the summaries for policymakers always get longer as they go through government approval, but they don't always get longer in every single part. What this figure shows is the summaries for policymakers for the 2007 report and the 2013-2014 report, across the three working groups: what happened to paragraphs, how many words were gained in paragraphs that expanded or were added, in yellow here paragraphs that were rearranged, and in red paragraphs that contracted or were deleted entirely. There are two approval plenaries here that really stand out for having lost material. This is the working group two 2007 approval, and that working group three, oops, that working group three 2014 approval. In the working group three approval here, there were ten figures showing emissions of countries, categorizing countries by income groups or regions, ten figure panels that were dropped. Rob Stavins's section on international cooperation met with a total failure of international cooperation and was reduced to 33% of its initial length; it looked like a balloon had just been dropped. But we also see ways that you can make comparisons. We brought Rob Stavins's box back in the synthesis report, and it made it through in its entirety. In the synthesis report approval, we lost the box on dangerous climate change, the ultimate objective of the international climate negotiations, which we had been able to get through in the working group two approval in a much different form. 
Granted, we made the slightly challenging choice of getting that box through by driving it to 4 AM on night number five of the five-day process. So what we see is that this is a process where there will always be political sensitivity, and there can be failures of consensus around that, but in many ways it's individuals navigating a very complex science-policy domain, where the outcome is not really a one-to-one correlation with the topic. I won't show the database versions of the subsequent conclusions, but here, just to recap that last slide, we saw that the documents almost always get longer unless you meet with a consensus failure. We also looked at the ways that revisions differ when scientists work with other scientists, as compared to scientists working with decision makers. One thing that was not that surprising was that there aren't actually all that many changes that happen in these in-person approval plenaries when scientists are working with decision makers, probably because you can spend hours on any given sentence. But what was interesting was that scientists working with scientists tend to focus on accuracy and whether the wording is crystal clear. When scientists were working with decision makers, the mode of revision shifted heavily towards putting in examples that explain the high-level, abstract conclusions. In a third suite of analyses, we looked at the readability of the documents. There was a recent study that announced with great fanfare that IPCC summaries for policy makers are really hard to read compared to tabloid newspapers.
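One metric commonly used in this kind of readability comparison is the Flesch Reading Ease score, which can be sketched as follows. The vowel-group syllable counter is a rough heuristic assumption; published analyses use more careful tokenization and syllable dictionaries.

```python
import re

# Minimal sketch of the Flesch Reading Ease score. Higher scores mean
# easier text: tabloids score high, technical summaries much lower.
# The syllable heuristic (runs of vowels) is a crude assumption.

def count_syllables(word):
    # Approximate syllables as contiguous vowel groups, minimum one.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    # Standard Flesch formula: penalizes long sentences and long words.
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

print(flesch_reading_ease("The cat sat on the mat."))
print(flesch_reading_ease(
    "Anthropogenic interference necessitates comprehensive "
    "intergovernmental deliberation."))
```

Running this on a simple sentence versus a jargon-heavy one shows the gap the study was measuring: short common words score near the top of the scale, while dense technical prose scores far lower.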
No duh, but the Washington Post picked this up. So what we did was look at a variety of readability metrics, metrics that span from what is commonly used for children's textbooks through to metrics that get at harder-level reading, and we compared the summary documents to more appropriate reference texts, in particular climate change analysis documents that have been heavily worked over by science writers. What we found is that these documents are definitely for grownups. They don't pass that super-easy reading metric, but they're probably readable for policy makers, and they are about as readable as the best of what you can get by embedding science writers in the process, with the exception that they had more jargon than they ideally should have. In terms of how readability improved through the approval process, there were some interesting dimensions in terms of what you would expect to make documents more understandable to non-scientists. Finally, I think we have a clear understanding that this will remain very contested and challenging science-policy territory, but that it is territory that adds value to the scientific landscape. So I will close by saying this IPCC process is a very formalized science-policy interaction, and in my final jump back to Stanford, I'll describe a series of efforts we have underway that are more about the informal interactions that can happen in science-policy spaces, here in California in particular, looking at the role of land in the state's actions towards its 2030 goals. The projects under this natural lands effort are diverse, spanning from how drought is affecting redwoods in the state, as it has historically and may into the future, to things like different management interventions that can be applied to increase carbon stored in soil, and by how much. And then there's the one I'll mention in a bit more detail, the first one that's come through to conclusion: evaluating the state's carbon offset program for forests.
In 2013, California put into place an offset protocol that participates as part of the carbon market, with projects running back to 2001, 39 projects in total. Offsets present a lot of really important questions. For example, to what degree should we enable them to substitute for direct decarbonization of our energy system? To what degree should we allow them to substitute when they provide benefits elsewhere, as compared to the air quality benefits that can come from reducing our emissions here in the state? Offsets also raise questions in terms of how we guarantee that when you put money into the program, you're getting emissions reduction equivalents that wouldn't have happened otherwise, and that those are there for the long haul. So what Christa Anderson, a third-year PhD student in EIPER, has done is evaluate this program in real time as it's getting underway. And what she's found, as a first point, is that the program is relatively small. It's not taking the eye off the ball in terms of the overall carbon market. She's evaluated a suite of different metrics for considering additionality. Who would you expect to be the forest owners? How would you expect their behavior to be changing through the implementation of this project? Really understanding the ways that we see a diverse suite of indicators for additionality. And perhaps most interestingly, she's looked at the way that the conservation paradigm has been inverted in these forest offset programs. Land that would not be conservation oriented under many circumstances is taking part in sustainable forest management, for example, for the climate benefits of forest offset program participation, but that's yielding a whole suite of co-benefits related to conservation, for example in terms of water quality protection, biodiversity, or recreation.
So in terms of the territory I've covered, I've described assessment: these organized efforts to make sense of what we know and what we don't know as relevant to decision making. Many of the aspects here, in terms of integrating evidence, applying expert judgment, exploring possible futures, and interacting with decision makers, are the challenges of assessment, but they're also the opportunities. And what we're working on in the suite of different projects moving forward is how to try some different approaches and to evaluate them in real time. And with that, I would be happy to take any questions.

Katharine, thank you for giving us an insight into the IPCC and how the process works, and the outcomes that you just presented. So as tradition goes, we'll open it up for questions, to students first and then others. Any questions from students? Yes.

Hi, my name is Emma Hutchinson; I'm an undergrad here. I was just wondering about your opinion on how we're kind of in this post-truth era, especially in this country, where a lot of the citizens and, unfortunately, a lot of the policy makers don't really take this science as fact, and knowledge doesn't have the same kind of role. So what's your opinion on, based on this cultural phenomenon that's happening, how might that influence the IPCC's science-policy process, as well as the communication coming out of it? Might that influence at all how they're trying to navigate that?

Really good question. In terms of how the current US moment could affect the IPCC, funding is not an irrelevant question, even beyond the cultural dimensions. I think one thing that was really interesting, participating in the IPCC, was that growing up in the US, I was surrounded by climate change skepticism in almost every aspect of the climate challenge and what I was learning about. Stepping to the global stage, what's fascinating is that skepticism is exceedingly Anglo. It's the US, Canada, the UK, Australia.
Those tend to be the dominant global media, and that's actually a really big challenge. But aside from that, we see very strong understanding that climate change is happening and that there are opportunities in the responses, and that ranges from China to all of Africa, very, very prominent on the global stage. I think that would be the hopeful thing to turn to. Here in the US, we have seen profound ways that knowledge has been pivotal to our economies, to our jobs, to our national security, and I think, as individuals and as scientists, our current moment raises a lot of questions about how scientists should engage when that role of knowledge in society is threatened. I think assessment still plays a really important role in that. When I have skeptics in the audience, or when you're really trying to say what we know right now and what we don't, having a process where experts have come together, 800 in total, and really said, here's what we know has happened: that type of thing ends up being really important, as compared to individual papers that are cherry-picked here and there. Great question. Any other students? Questions? Students? Alright, going, going, gone; beyond the students now.

So one of the criticisms of the IPCC I've seen for quite a while is that not only are they very bureaucratic and clinical, but they pick scientific approaches or areas to study and authenticate that aren't necessarily the most important. For instance, CO2 management is definitely important, but one of the consequences of poor management will be methane emissions, which are far more potent. And ocean acidification has only recently been taken up, to my knowledge, by IPCC groups, and that's a 2030 extinction event. Who cares what the temperature of the earth is in 2100 when the oceans are dead by 2050? So how do we get the IPCC to actually work as if it's not just a technical junket around the world every year or two, and actually deal seriously with the problem?
Okay, so the question, to recap it briefly, was that the IPCC is really bureaucratic: how does that limit things, in particular how do you think about the range of topics it can assess, has it overemphasized negative emissions as I discussed, has it addressed ocean acidification and methane, and could it become more relevant to issues around the world? I would say, as a first starting point, that every assessment body has its strengths and has its limits. Essentially, the assessment space has been characterized by three factors: how do you get the science right, making sure it's a credible scientific process; how do you get relevance right, making sure it answers questions people care about; and how do you create a legitimate, fair process, so that people who aren't participating can at least look at the process and say, I believe that process should provide a good outcome. A really classic finding is that you have trade-offs across those three different factors. So the great strength of the IPCC is that you have this governmental ownership; that's also by far its limit. It cannot radically change its procedures through time, because changing procedures requires consensus agreement of all the governments. That said, I think it's actually fairly good at getting hold of what issues are relevant at any one time. For ocean acidification, for example, there has indeed been an exponential increase in the amount of literature available over the last decade, and that's been responded to in the IPCC with an expert meeting on ocean acidification, two chapters on ocean acidification impacts in the last report, and now a special report on oceans and the cryosphere. Things like methane have been treated across working groups. And you can point to a lot of failures in the process, but one of them, perhaps, is the fact that in a very comprehensive assessment, little individual pieces get lost. You can think about emphasizing those in special reports, and you can also think about many different bodies
that end up tag-teaming onto the IPCC, for example carrying that oceans assessment forward into science papers, into European collaborations very broadly, into special report requests coming back from governments. Any other questions? Yep.

So many questions. You mentioned 12 gigatons of carbon dioxide removal by the end of the century; was that a total sum of 12 gigatons? Because we're putting 10 gigatons per year into the atmosphere, so removing 12 by the end of the century ain't going to have much of an effect. But you talked about the fairness of the IPCC process, which is really interesting, and I appreciate how complicated it is to get world governments and all these scientists, who all have different perspectives, together. But fossil fuel companies don't need to get together; they can go and pollute the atmosphere at will, because no one can stop them. So you're using a process of fairness to fight against a process that does what it wants to make the money it needs, and I don't know how to resolve those, but that's a big issue.

Okay, to recap the questions: the first one was how to understand those gigatons numbers I was referring to; they got a bit confusing. Number two, how do you think about something like the IPCC, which is all about a fair process, where scientists are really erring on the side of being conservative, while by contrast industry is not going to have those same constraints? So for the first one, global emissions at present are 22 gigatons of carbon dioxide from land and industrial sources together, and the results from the integrated assessment models are pushing towards 12 gigatons of CO2 removal by the end of the century, so that is indeed a quarter of current emissions heading in the opposite direction. And for this question of how you think about the range of stakeholders relevant in a changing climate, indeed there are a whole lot of them, and I actually think there are substantial opportunities for increasing the collaborations between science and
industry. I think we've seen that here at Stanford through the GCEP program, which has been incredibly important in creating understanding across boundaries, and you could even think of that unfolding in assessment processes more broadly. Risky Business was a first assessment effort in that direction, and I think it could go a lot further.

Can I ask you a question? This is something we were chatting about before the seminar. If you step back from the IPCC, which I think is extremely valuable, and look at this sort of societal feedback in terms of solutions: are there some research goals that we could create out of the risks that you see? For example, could you change agriculture or photosynthesis in some way to reduce carbon emissions or absorb more carbon through photosynthesis? Having been involved in the IPCC and now stepping back, do you think there is room for something else to be done, in addition to the IPCC, to provide some of those R&D goals or other goals?

A huge yes in response to that. I think if you were to ask what the nature of IPCC assessment has been in the mitigation space to date, historically it has been incredibly economics focused, oriented in its approach to surveying the literature, and there's been relatively little in terms of engineers engaging in that process. I think that's part of the reason why we've been spending all this time talking about negative emissions emerging from stylized economic models, where what you really want is that intersection of economists talking with engineers, talking with people who are in the commercial space, people who are at that very early technology readiness level, to figure out what that means for building things from the bottom up, at the same time that we need to put our sights on the long-term horizon, where we might actually get, and what that implies for policy needs in the near term.

What I'm curious about is the incentive structure for scientists to work on this, and what the IPCC is selecting for,
what the scientists themselves are selecting for. Are they self-selecting, and if so, what is the incentive structure? Because all the faculty members I know are super busy, and so I imagine this is a fairly time-consuming process. I understand some of them won a Nobel Prize, so that's a good incentive, but they didn't know that was going to happen, so I guess I'm just curious to hear. The social science of the IPCC.

Yeah, okay, that's a really, really wonderful question. So this is a very time-intensive process, and I always would say that I had the best part of the deal, in that I was one of the few people who was actually paid to work on these projects, as a staff scientist, a science professional in the process. Everyone else was doing it on top of their day jobs. This meant that people were indeed sending emails at midnight or over the weekends, and this is not necessarily something that is easily added into the landscape. We've done a lot of interviews of authors who have done these reports again and again and again, and the ones who come back time and time again say they come back for the people interactions: the knowledge creation that happens when you bring experts together, the bonds that form, the way that this really transforms understanding of science and makes it relevant for societal decision making. That said, in terms of scientists here at Stanford who participated in the process, we've talked a lot about the inefficiencies in the process, and something like the IPCC process is larded with inefficiencies; that government approval, the big strength of the IPCC process, comes with all sorts of trade-offs. So some people participating in the process get a postdoc or a grad student assistant that is paid for by their government, and that really helps with the process, and others don't.
Some of the best IPCC authors were these few people in remote countries, kind of exiled from the main scientific endeavor, who were really bored with their day jobs. Moving forward, I think there are some really big questions about how you make it worth it for scientists and how you provide them with adequate support. In addition, what the IPCC process does is put scientists in charge of the process of assessment, in a way that doesn't actually reflect the best set of lessons that come from the expert elicitation and decision support space, where you also want a lot more experts in the assessment process itself, to help make sure that you don't get big inefficiencies in the group dynamics in particular.

One last question. Yes. On the approval portion of this: could you comment on the role of individual players for a country? Not a country's position, perhaps, but maybe there's a small country with someone who is particularly out there, or another country where that one person decides they want to drag their feet, or things of that sort. How does that work, and how do you assess it?

Yeah, really, really good question. So the question, in case everyone couldn't hear, is: what's the role of individuals versus governments in these government delegations? First of all, every government delegate participating in the process says they are representing the position of their country, and they'll even do things like, as they're walking out of the room, calling home to get approval. What I know from insider knowledge is that they love to play that up, because it makes the individual seem more important, and the degree to which people are there on autopilot varies quite a bit, although they usually arrive with, for example in the US context, 50 to 100 experts from the US having reviewed the report and provided the talking points that the individual there needs to make the call on the fly.
Another big inequality that emerges across the government delegates is that the really big countries, or the countries that are really invested in this process, will send 10 delegates. They can monitor everything simultaneously; when the sessions go through the night, they take shifts, and they actually are well rested. Whereas a lot of the small countries send one person, who's there representing the whole report, with really no ability, once it goes into separate breakout groups, to cover all of that. So those types of interactions are really relevant. Part of what you get is different countries banding together, as really happens through all of the climate negotiations: all of the small island developing states, or all of Africa, will come together and have positions. It's also how you get things called ambushes, where you can have an entire grouping of countries deciding that they've got the same issue at the same time, and they really can pressure-test the process.

Well, thank you so much for doing what you're doing; it's an incredibly important job. Let's thank Katharine for her time. Thank you.