Great. So, in broadest terms, I'm interested in how knowledge can inform or shape decisions. And over the last several years here at Stanford, I've been focused on integrative assessment of risks and options in a changing climate. Assessment means a lot of different things; what I mean in particular is assessment as an organized process for figuring out what we know overall on issues relevant to decision making. The backdrop for assessment is our current moment in the climate challenge: one degree Celsius of warming, with impacts that are widespread, at the same time that solutions are building momentum. Global emissions growth has slowed, clean energy deployments have accelerated, and we have a universal climate treaty, the Paris Agreement. But that said, we also clearly understand that our ambition in terms of climate change responses is at odds with the current state of action. Nothing like Harvey, Irma, Katia, and Jose to remind us that we're not very well prepared for the climate risk we currently experience. So that's where assessment comes in. It can help shape understanding of the risks, the options, and how we can learn and adjust through time. What I'll do over the course of the next half hour is reflect on assessments in a few different contexts. I'll draw examples from the Intergovernmental Panel on Climate Change, the global assessment body that's been so important in the climate science-policy landscape over the last decades. But I'll also talk a lot about work we have going on here: taking different approaches to assessment, innovating new ideas, testing them, and delivering them on a range of topics in the climate and energy landscape. So I'll spend just a bit more time introducing the concept of assessment. As I mentioned at the start, assessment is an organized process for figuring out the state of knowledge overall on issues relevant to decision making. 
There are a few key features illustrated in this schematic. First of all, up there at the top, assessment is different from normal research in that it's not just scientists saying, here's my latest paper, it's very cool, and it adds to knowledge. Instead, it's scientists taking stock of all of the available evidence on any topic considered. Number two, assessment really gets fun when it's on uncomfortable topics. A good topic for assessment should be hard scientifically and have real-time relevance, oftentimes with contested priorities out there in the world. To grapple with that complex, contested nature of topics in assessment, the engagement of decision makers, in a bunch of different ways and in interaction with experts, is a key feature of most assessment processes around the world. The third key feature of assessment is that as scientists, we can get really tied to the products at the end: those beautiful reports. The IPCC's last report weighed 48 pounds in total. However, when assessment actually has influence on decisions out there in the world, the influence usually doesn't come from the products in isolation. It comes from the process through which those products were developed. Some of the aspects of assessment that I'll describe in the slides to come are, first of all, the way it involves integrating very different types of evidence: quantitative, model-based results with qualitative case studies, for example. The way it explores possible futures and how they connect to current decisions. And then in particular, how complications and aspirations can arise in these interactions between experts and decision makers. Because I'll draw so many examples from the IPCC, I'm just going to introduce what it is briefly. You can think of it as a grand partnership between the scientists of the world and the governments of the world. 
The governments basically say: you scientists, if you follow our rules, we'll take your evaluation to be a definitive characterization of the state of knowledge, what we know and what we don't. Those rules have four key features. Number one, a mandate to be comprehensive. Any topic that gets cracked open has to consider the full state of knowledge on that topic. The report we led here out of Stanford had 14,000 references in it. Number two, every report undergoes multiple rounds of monitored scientific review, where anyone from around the world can sign up to provide comments. That report out of Stanford received 50,000 review comments. Number three, the most unique and also most contested part of the IPCC process is the line-by-line consensus approval of the policymaker summaries. It's a UN-style session where you have the scientists up on the podium. If you were the 195 governments, you'd be out there in this vast sea with your country flags. Sentence number one goes up on the board and opens for comments. You can raise any issues you have with the clarity or accuracy of that statement, or you can see that things are getting a little dicey and pressure-test the moment through different strategies. That sentence is discussed until there's consensus that it is the clearest, most accurate articulation of what we know. The sentence is gaveled, and you go on to sentence number two. These are very hard processes, but they're a key reason governments leave the IPCC with unique ownership over a scientific product. Finally, all experts participating in the IPCC have tattooed on their eyeballs a mandate to be policy relevant but policy neutral. Despite the fact that this is literally engraved in stone in the rules and procedures for the IPCC, I'll argue that it's actually impossible to be fully policy neutral, and that's where things like the government approval get really interesting. So doing assessment through the IPCC, you have an incredible ability to learn in real time. 
So for example, in the last assessment report, we learned how to create a media snowball around the so-called global warming slowdown that basically never happened. But we also informed the Paris Agreement, for example through a series of structured expert dialogues that happened over years, in particular informing the long-term temperature goal. And yes, 1.5 degrees Celsius jumped out of nowhere in those last moments in Paris, and we can talk about that at the end. So I'll describe two different aspects of assessment in depth. First, some of the approaches taken to integrate evidence in assessment. And second, a deep dive into some of these science-policy interactions, in particular in those approval plenaries. For each, I'll start with the IPCC and then jump to Stanford. So on the IPCC side for integrating evidence, I'll introduce two features that have been used over the decades, and that were really advanced in the last assessment round, called key risks and reasons for concern. These are two different methods for integrating the state of knowledge overall on the issues most deserving of society's attention in a changing climate. These two approaches, key risks and reasons for concern, were the basis for some of the high-level findings, like: increasing magnitudes of warming increase the likelihood of impacts that are severe, pervasive, and in some cases irreversible. They were approaches that could span the gamut in terms of hazards, from extreme events, whether that's floods, droughts, or wildfire, to the slower-moving game changers, like the potential for large amounts of sea level rise from ice sheet loss unfolding over centuries. Before I dive into exactly how these approaches work, I'll introduce a few examples of why assessment is hard. Example number one will be sea level rise. 
What this figure shows is the IPCC assessed range for the amount of sea level rise that could occur in the year 2100, going back to the 1990 report and extending all the way through to the 2013-2014 fifth assessment report. What's interesting here is what happened in 2007, the fourth assessment report. If you asked an expert participating in that assessment, is the risk bigger or smaller in your assessment now than it was in the last report, they would say it's definitely bigger. However, they reported this tiny, tight little range. So what happened? It was a moment of chaos in the scientific field. There were improved satellite-based records at that point, as well as on-the-ground measurements, indicating that the flow of ice sheets (ice sheets don't just melt, they also flow) was happening faster than could be explained based on our understanding of the processes at hand. So what did they do? They basically reported the total amount of sea level rise excluding the flow of ice sheets. They excluded all the stuff that's really high risk but hard to quantify and reported this narrow range, which is really challenging, at the same time that there wasn't a clear way to grapple with the fact that they knew the numbers they had weren't right. Simultaneously through time, there's been a parallel assessment approach. This is a visualization of the reasons for concern, which I'll introduce in depth; they've provided a complementary framework for talking, in this last ember here, about large-scale tipping points: that potential for ice sheet loss, even though it's hard to understand all aspects of how fast it may unfold. A second example of why assessment is hard has to do with negative emissions, which Arun addressed in some form already. 
So what this shows, in the black line there, is a schematic of the median of cost-effective scenarios, evaluated through integrated assessment models, that have a good chance of limiting warming to two degrees Celsius. You could almost say that the trajectory of that black line is Article 4 of the Paris Agreement: peaking emissions as soon as possible and driving them to zero within the second half of the 21st century. There's been a lot of attention to the light shaded part under the x-axis here: the fact that these cost-effective scenarios for how the world might limit warming to two degrees Celsius assume vast deployments of negative emissions technologies, in particular bioenergy paired with carbon capture and storage. These deployments effectively double the remaining carbon budget for keeping warming to two degrees Celsius. Experts who grappled with these said, yep, that's equivalent to 25 to 80% of the land currently dedicated to agriculture globally. But it was easier just to report the numbers than to really provide those judgments. In terms of real-time relevance, this figure is a nightmare. All I'm going to point out are two things. These blue plumes are those same cost-effective scenarios for limiting warming to two degrees Celsius. And this is the figure used to evaluate the adequacy of pledges under the Paris Agreement. Okay, so how has assessment grappled with some of these types of challenges? Number one, there's been a focus on risk, taking a really inclusive definition of risk as the potential for consequences where something of value is at stake and the outcome is uncertain. You can think of the risks of climate change impacts or damages as arising from the intersection of hazards, the sharp end of the climate system (a heat wave, flood, or drought), with the vulnerability and exposure of people, societies, and nature. The hazard alone isn't enough to give you damages that matter. 
A focus on risk is helpful because it emphasizes that these interactions between climate and society are really important. It helps connect our current experience with the climate system to how those odds may change in the future. Risk inherently emphasizes that low-probability, high-consequence outcomes matter. And finally, it emphasizes that climate responses aren't different from most decisions we make, which are decisions about an uncertain future. This risk approach was run through the key risks, which use a series of criteria, like high probability of impacts, irreversibility, or limited ability to respond, to look across all sectors and all regions of the globe and identify the issues most deserving of attention when thinking about danger in a changing climate. This key risk assessment looked in particular at time frames for climate change. In the next few decades, even if we're incredibly ambitious in reducing our emissions of heat-trapping gases, there is warming baked into the climate system, very likely twice as much warming as we've seen to date, at the same time that our choices about emissions now will be pivotal in shaping how much warming happens in the second half of the 21st century and beyond. So this key risk assessment looked at 142 key risks and how they're evolving from the present into the near term and the long term, as well as our ability to adapt. They were then aggregated to provide a global characterization of five different reasons for concern. This isn't the version of the reasons for concern that appeared in the report; we popped the lid up on the reasons for concern and indicated, through the bubblegum drops and the icons, how different key risks inform our understanding of thresholds, where risks go from something we can detect to something that's severe, something where we have limited ability to adapt. And there are five different reasons for concern because values matter. 
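The idea that damages arise from the intersection of hazard, exposure, and vulnerability can be sketched in a few lines of code. This is an illustrative toy, not an IPCC method: the 0-1 scales, the multiplicative combination, and the two example settings are all assumptions made for the sake of the sketch.

```python
# Toy sketch (not an IPCC method): risk as the interaction of hazard,
# exposure, and vulnerability, each scored on an arbitrary 0-1 scale.
from dataclasses import dataclass

@dataclass
class RiskComponents:
    hazard: float         # probability/intensity of the climate hazard (0-1)
    exposure: float       # people or assets in harm's way (0-1)
    vulnerability: float  # susceptibility and limited capacity to cope (0-1)

def risk_score(c: RiskComponents) -> float:
    """Multiplicative interaction: damages require all three to be nonzero."""
    return c.hazard * c.exposure * c.vulnerability

# A severe flood over a nearly empty landscape carries little risk...
uninhabited = RiskComponents(hazard=0.9, exposure=0.05, vulnerability=0.3)
# ...while a moderate flood over a dense, poorly protected delta carries a lot.
delta_city = RiskComponents(hazard=0.5, exposure=0.9, vulnerability=0.8)

print(risk_score(uninhabited))  # small
print(risk_score(delta_city))   # much larger, despite the weaker hazard
```

The multiplicative form is just one way to encode the point in the talk: a big hazard with no exposure or vulnerability produces little risk, which is why the delta-city case scores far higher than the stronger but unexposed hazard.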
Different people will put different emphasis on outcomes for today versus the future, the rich versus the poor, or nature versus economies. These different reasons for concern were the basis of that structured expert dialogue informing the long-term temperature goal in the Paris Agreement. Some of the considerations were the most affected, unique and threatened systems; the unfairness factor, the third ember; or the potential for these game-changing tipping points, whether it's ice sheet loss or ecosystem shifts. I'll just highlight one aspect of this diagram that's really interesting. This is effectively supported by 14,000 scientific references, aggregated into 142 key risks, informing five different reasons for concern. However, there are relatively few key risks on those embers where we can really say, here's where we see thresholds, where the risk will change fundamentally. You can respond to that and say, well, maybe we should just wait and see, maybe it won't be as bad as we might think, or it can invoke precaution. And that's the type of reaction you get, especially when you put this into a global context. I'll conclude this section by emphasizing that these types of multi-criteria frameworks for integration can apply not just in global assessment, but also in work going on here at Stanford. For example, a grad student, Miyuki Hino, in the E-IPER program, has looked at all experiences around the world to date with managed retreat, strategic relocation to reduce risk. These are cases ranging from buyouts funded by FEMA post-disaster, which have been all over the news in the last few weeks, through to mandatory resettlements, for example in the Philippines after Typhoon Haiyan. 
And what Miyuki did was develop a cross-case comparative analysis, thinking about some of the key factors that drive the sociopolitics of retreat: why retreat is obvious as a solution yet incredibly controversial and hard to implement, even though it's moved around one million people to date. These examples range from circumstances of self-reliance, whether it's indigenous villages in Alaska or the Pacific small islands, where residents are initiating the move but they're the only ones who benefit, governments have been slow to join with them, and there's been very little success to date. On the other end of this axis, you have examples like the Netherlands, where, as part of an incredibly comprehensive flood risk management program, they wanted to move some people upstream to create a big floodplain that would protect everyone downstream. In this Room for the River approach, they went door by door. The residents definitely didn't want to move; they were just fine and safe where they were. But that negotiation process helped benefit society at large. We've also been taking an integrative approach to looking at this large potential for negative emissions and right-sizing it. This research has spanned the gamut from understanding the availability of bioenergy into the future, led by David Lobell and some of his students, to our ability to get carbon dioxide underground, led by Sally Benson and her students. We've taken a top-to-bottom look at right-sizing our expectations around carbon dioxide removal: first of all, considering exactly why the numbers get so big in those cost-effective scenarios that keep warming to two degrees Celsius. The short answer is that those scenarios expand land dedicated to bioenergy and forests incredibly rapidly, much faster than anything we've ever achieved with agriculture to date. 
At the same time, we've been taking a look at near-term options in work led by EJ Baik, working with Sally Benson. She's looked at counties in the US that have a storage basin for CO2 under them; those are the color-coded counties. And this represents about 100 million tons of carbon dioxide per year that could be injected, in most cases with just one injection well per county. Okay, I'll close with some reflections on science-policy interactions around assessment. And I'll start with these approval plenaries in the IPCC. As I mentioned at the start, these are UN-style sessions, really important for the ownership that governments have over the scientific products coming out of the IPCC. That's a view of us on the podium, a view of the scientists speaking out to the governments. There are all sorts of processes that come into play if you don't easily get to consensus. For example, on the informal end there are things called huddles. In this particular huddle, at the center of the circle, we've got Brazil, South Africa, the US, Switzerland, Ireland, and Korea. And around the edges, in the far corner there, you can see Rob Stavins, a prominent economist from the Harvard Kennedy School, who we'll come back to in a moment. These are very challenging and also exhilarating sessions that usually go through the night, as I mentioned, and they've been diversely interpreted. In the run-up to Lima, which was the Conference of the Parties right before Paris, the mitigation assessment approval was really challenging, and there was a ton of coverage in the media. For example, there was a series of perspectives in Science called "IPCC Lessons from Berlin": did the summary for policymakers become a summary by policymakers? Rob Stavins asked, is this government approval process broken? Other senior economists were saying things like, we are still shaking, and, it left me depressed personally. But just two weeks earlier, we had our approval in Yokohama, Japan. 
And Jon Barnett, a human geographer from Australia, said, I'm awestruck. I've never seen anything like this, and I doubt I ever will. I know it's not obvious, but his connotation was positive. The Working Group II co-chairs pushed back and said, yes, these are challenging sessions, but if you're creative and flexible, they're usually not that hard to navigate. Jumping back to 2008 or 2009, Steve Schneider's memoir at that point was called Science as a Contact Sport, where science was climate science and the contact sport was these approval sessions. Jumping back to 2001, there was similar coverage for the third assessment report: consensus science or consensus politics? So despite all this attention to the fact that these are really hard sessions, no one had actually ever analyzed how the documents change as they go through this process of approval. So that's what we did. And one thing that became really clear was that in every single summary for policymakers we looked at, whether in the 2007 fourth assessment report or, in the bottom row here, the 2013-2014 fifth assessment report, across the three working groups (physical science; impacts, adaptation, and vulnerability; mitigation) and in the synthesis, the summary got longer. However, not every part of every one of these summaries became longer. In particular, the two approval plenaries that received the most attention for being very challenging, and for basically having a failure of consensus, were the Working Group II fourth assessment report and the Working Group III fifth assessment report. Those are the two where you see red there: paragraphs in the documents that shrank dramatically or were lost in their entirety. For example, in the Working Group III mitigation approval in 2014, there were 10 figure panels showing emissions that categorized countries by income or region. All 10 figure panels were lost. 
I think that's a record for losing figure panels. Rob Stavins' section on international cooperation met with a total failure of international cooperation and was reduced to 33% of its initial length. It's kind of a sad little skeleton of its former self. At the same time, you can take topics that have gone through multiple approval plenaries and met with different fates. We had a box in Working Group II on dangerous climate change; the ultimate objective of all the climate negotiations is to prevent dangerous anthropogenic interference with the climate system. Material on dangerous climate change is really hard to get through these approval plenaries. So we ran it to 4 AM on day three, which was great. We got it through, except that meant that on day four and day five, as we were pulling all-nighters, we were a little bit rough, worse for the wear. At the same time, we had a similar box on dangerous climate change that we lost in its entirety in the synthesis report. I'll just mention a few other conclusions from this work, although I won't show the database versions. Conclusion number one is that most of the time these policymaker summaries get longer, except when you get a failure of the consensus process, especially around politically sensitive material. Number two, we looked at what happened when scientists work with other scientists as compared to scientists working with decision makers. The types of edits scientists would make when working with other scientists were all about getting the science right, unsurprisingly. They oftentimes were also addressing jargon: if someone said, I don't understand what radiative forcing means here, usually the authors could come up with a synonym. But when scientists worked with decision makers, by contrast, the vast majority of the edits were inserting examples, taking abstract high-level findings and explaining what they mean across regions of the world and for different policy environments. 
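The kind of draft-versus-final comparison described above comes down to simple length ratios per section. Here is a minimal sketch using hypothetical placeholder text (the section names and word counts are made up for illustration; only the "reduced to 33%" figure comes from the talk).

```python
# Sketch of a draft-vs-final length comparison across summary sections.
# The section names and word counts below are hypothetical placeholders.
def length_change(draft: str, final: str) -> float:
    """Return final length as a fraction of draft length, by word count."""
    return len(final.split()) / len(draft.split())

draft_sections = {
    "international_cooperation": "word " * 300,
    "mitigation_pathways": "word " * 200,
}
final_sections = {
    "international_cooperation": "word " * 99,   # shrank to ~33% of draft
    "mitigation_pathways": "word " * 260,        # grew, as most sections did
}

for name in draft_sections:
    ratio = length_change(draft_sections[name], final_sections[name])
    print(f"{name}: {ratio:.0%} of draft length")
```

Tracking this ratio section by section is what separates the headline finding (summaries get longer overall) from the exceptions (politically sensitive sections that shrink or vanish).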
Third, as we were doing this study, another analysis came out saying that these policymaker summaries are incredibly hard to read compared to tabloid newspapers. I might argue that that's a good thing, but we also evaluated how the readability of the documents changes as they go through approval. And we compared them to more appropriate texts: assessment texts that have been very heavily edited by science language editors. What we found is that readability improved mostly in the way you would expect it to improve for non-scientific audiences; in particular, something called cohesion increased. Topics were connected across paragraphs so you could understand the document a lot more easily. But still, there was quite a bit of jargon. I think we're inevitably in a circumstance where an effective summary for policymakers is going to involve a whole suite of products, from animations and videos to fact sheets and illustrations. So this government approval is very complex territory. It will remain complex, I'm sure, but it is also likely to maintain its importance in informing some of these international science-policy dialogues in particular. And I'll end by reflecting on much more informal work we've been doing in the science-policy landscape, in particular working with state agencies in California as the state looks to bring natural and working lands under its climate policy. California, in so many different ways, as you'll probably hear through the course of this week, is really an innovator in the climate action landscape, and it's one of the first major subnational entities around the world to be ambitious about land in its climate policy. So we've been looking at things like how much we could increase carbon storage in the Sierras, or in soils in croplands and rangelands throughout the state. I'll just mention one result from this work, which in particular looked at California's forest offset program. 
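To make the readability comparisons mentioned above concrete: a common starting point for this kind of analysis is the standard Flesch Reading Ease formula. The sketch below is an assumption-laden toy, not the method used in the study: it uses a rough vowel-group heuristic for syllables, and the two example sentences are invented for illustration.

```python
# Minimal Flesch Reading Ease sketch: 206.835 - 1.015*(words/sentences)
#                                             - 84.6*(syllables/words).
# Syllables are approximated by counting vowel groups, a crude heuristic.
import re

def count_syllables(word: str) -> int:
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_reading_ease(text: str) -> float:
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = max(1, len(words))
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)

jargon = "Anthropogenic radiative forcing perturbations exacerbate cryospheric destabilization."
plain = "People are warming the planet. The ice is melting faster."

print(flesch_reading_ease(jargon))  # low or negative: very hard to read
print(flesch_reading_ease(plain))   # much higher: easy to read
```

Higher scores mean easier reading; jargon-dense text scores low mostly because of long, many-syllable words, which is exactly the pattern the tabloid comparison picked up on. Measures of cohesion, by contrast, require tracking how topics carry across sentences and paragraphs, which this simple formula doesn't capture.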
In 2013, California started the world's first legally enforceable forest offset program, basically creating emissions-reduction equivalents through carbon stored in forests around the US. The dots there show the locations of different projects. These offsets are part of the cap-and-trade market. Offset programs are important but also rightly controversial. For example, to what degree should carbon stored in forests in the Southeast count as an emissions-reduction equivalent for our cap-and-trade market here in California? What about additionality: are these emissions reductions that would not have happened in the absence of the program? Are they durable for the long term? What about the different benefits that come into play when emissions reductions are associated with refineries and disadvantaged communities in the state, as compared to benefits yielded outside? What Christa Anderson found in this work was that there are a lot of ways the program is working well in terms of its climate benefits. First of all, it's not very big, only five million tons a year, so it's not really taking the eye off the ball in terms of the cap-and-trade market as a whole. At the same time, there are a bunch of different metrics you can evaluate, both those directly reported and through independent hypotheses, that get at the additionality of these emissions-reduction equivalents. But what was also interesting was that in some ways this program, through the 39 projects to date, has inverted the conservation paradigm. Normally you have conservation nonprofits conserving their land through sustainable forest management and improvement of the watershed, and yielding carbon co-benefits. Here, by contrast, you have things like timber companies or investment owners of land who wouldn't necessarily be motivated by conservation. 
But they're participating in sustainable forest management to get the carbon benefits, and yielding benefits for water quality, recreation, and biodiversity. And in the proliferating series of institutes and initiatives that Arun mentioned at the start, I'll just mention one more, called the Stanford Environment Assessment Facility. It's an initiative we're just now starting here at Stanford to take some of what we've learned from global environmental assessment around the world and use the university context to innovate, test, and deliver assessments on different topics. The first project underway is looking at the risk of violent conflict and how it may change in a changing climate. Our second project, just now getting started, is going to look at relocation: not just managed retreat, but that full spectrum of when people decide to move on their own and how that can intersect with different policy levers. I will just say there are a lot of co-authors on the publications I showed throughout the course of the talk, and I thank all of them. I think I have time for questions. Yeah, who's heard of the concept of a carbon budget? I'd better introduce it. Okay, so the basic idea is that when we put carbon dioxide into the atmosphere, the warming it causes is nearly permanent. As the concentration drops hundreds of years into the future, the oceans slow in terms of how much heat they can take up. So you put CO2 into the atmosphere, and the warming is near permanent. What that means is that for any limit you want to put on warming, you've got a finite budget. For a likely chance of keeping warming to two degrees Celsius, that carbon budget, also accounting for other forcers as part of the mix, is three trillion tons of carbon dioxide. To date, we've emitted about two trillion tons, and we're emitting a little over 40 billion tons per year. 
So if you do the math, it's basically 20 more years at current levels of emissions before we use up the carbon budget that locks in a good chance of two degrees Celsius of warming. This is exactly what you were referring to: there's about one trillion tons left that we can put into the atmosphere before two degrees Celsius of warming, with an incredibly tight time frame. In some ways that's why, just as a back-of-the-envelope calculation, the integrated assessment models rely on negative emissions. If you say we'll also suck carbon dioxide out of the atmosphere, either by planting trees, by growing bioenergy and then injecting the carbon underground, or through direct air capture, we expand the amount of carbon dioxide we can put into the atmosphere. There are some really important questions about this. For example, if we assume that we'll have a thousand gigatons of CO2 removal by the end of this century, and then that doesn't work out, that's kicking the can down the road: the generations at the end of the century then have to deal with the problem. Second, we have very little understanding of the impacts of a peak-and-decline scenario. The idea is that with vast deployments of CO2 removal, you can let temperature increase and then decrease it as you suck carbon dioxide out of the atmosphere. However, we know very well that the more we emit, the warmer it gets, the more emissions you get from permafrost, and the more carbon cycle feedbacks kick in. So we don't necessarily know how easy it will be to just reverse course, and we may actually be grappling with outcomes set as much by where we peak our temperature as by where we decline to. And finally, there's the huge land and water footprint of multi-billion-ton deployments. 
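The back-of-the-envelope arithmetic above can be written out directly. The figures are the round numbers quoted in the talk, not precise values; the talk's "basically 20 more years" reflects emissions a little above 40 billion tons per year and continued growth.

```python
# Carbon budget arithmetic with the round figures quoted in the talk.
TOTAL_BUDGET_GT = 3000   # likely 2 C budget, Gt CO2 (other forcers included)
EMITTED_GT = 2000        # cumulative emissions to date, Gt CO2
ANNUAL_GT = 40           # current emissions, a little over this, Gt CO2/yr

remaining = TOTAL_BUDGET_GT - EMITTED_GT   # about 1000 Gt CO2 left
years_left = remaining / ANNUAL_GT         # roughly two decades at
                                           # current (and rising) rates
print(f"Remaining budget: {remaining} Gt CO2, ~{years_left:.0f} years")

# Cost-effective 2 C scenarios assume on the order of 1000 Gt of CO2 removal
# this century, which effectively doubles the remaining budget:
CDR_GT = 1000
print(f"Effective budget with that much CDR: {remaining + CDR_GT} Gt CO2")
```

This also makes the earlier point about negative emissions explicit: assuming roughly a thousand gigatons of removal doubles what can still be emitted while holding the same temperature limit.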
So, all of that said: we know that carbon removal is incredibly important, it's critical, and we need to be firing on all cylinders in terms of advancing our capabilities. At the same time, assuming that a thousand gigatons of CO2 removal will appear easily is foolhardy, and that's really what Kevin Anderson pushed in that piece and a whole series of things. Glen Peters actually has a new commentary out in Nature just last week. Can you explain a little bit more about forest offsets, how they work, and how they're utilized? Yep, OK. So the question is, how do forest offset programs work? And how might they grow? Is that the second part? Yeah, OK. So there's a lot of experience with thinking about the role of forests. Forests store a huge amount of carbon. In the international dialogues, for example, there's been a lot of attention to REDD+, reducing emissions from deforestation and forest degradation. All of that is basically intended to keep carbon in the land, recognizing that there are a lot of emissions from deforestation every year. California's program was the first one to have a legally enforceable mechanism for standing forests: not just planting new trees, but taking forests that already exist and, for example, reducing the frequency of logging so that more carbon is stored in that land through time. The approach taken through California's program is particularly rigorous, mostly because of the data available on forests across the US through what's called the FIA, the Forest Inventory and Analysis program. So there's a good ability to rigorously set the baseline of where that forest is right now and how that would trend through time, and also to calculate the value of logging. 
So basically, they're doing all sorts of evaluations of where you are in carbon on the land and whether that's accumulating through time, in a way that is quite intensive in the back-end calculations. That said, there's a lot of question as well, because there are other offset programs that are more on the voluntary market. They're not part of a cap-and-trade program, and they don't go through this legally enforceable mechanism that means they are very rigorously vetted. And there are important questions about whether those truly represent additional emission reductions that wouldn't have happened in the absence of the program. Can you mention where we are on the development of technology to remove carbon dioxide from the atmosphere, other than planting trees? OK, I suspect Arun is going to get at this too. You can think of carbon removal technologies as falling into three big buckets. Bucket number one is essentially ecosystem stewardship: thinking about how much carbon we can store in forests or in the land associated with agriculture and other land use patterns. Those types of approaches tend to be cheap, more like $10 a ton; for example, that's the carbon offset price for California's carbon market, which is pretty similar across the board. That said, there are questions about the scope of them. For example, there's a whole lot of carbon in soil, but it's not necessarily something we can change really rapidly at vast scale. So with ecosystem stewardship you get all sorts of positive co-benefits, it's relatively cheap, and we know how to do it, but there are questions about scale. The second bucket is bioenergy with carbon capture and storage. There is one bioenergy with carbon capture and storage project in deployment today, at 1 million tons per year: an ethanol biorefinery plant in Illinois.
And basically with ethanol fermentation, you get a pure stream of CO2 coming off of the ethanol, compared to only 400 parts per million in the atmosphere. So it's relatively cheap to capture that, compress it, and inject it underground. A lot of the work we've been doing is asking, OK, what if you scaled ethanol CCS plants? Or what if you took a county-by-county approach where you don't need pipelines, recognizing that pipelines are hard to build? But all of those types of projects are at the millions-of-tons scale. They might get to a billion tons, but easily assuming that we'll definitely get to 15 billion tons, as in those scenarios, is hard to imagine, at least in the near term. And then the final category that gets a lot of attention is the fully engineered approaches, and what gets the most attention in that category, for good reasons, is direct air capture: basically extracting CO2 from the atmosphere through different enzymatic or other chemical means, and then injecting it underground through carbon capture and storage. It's more at the $500-to-$1,000-per-ton end of the spectrum, so by far the most expensive. It's also very energy intensive, so until we get to a point where we have abundant clean energy, we're unlikely to really scale it, at the same time that it could be very important. There's a plant running in Switzerland now. Really good question. OK, so the mandate for the IPCC is to provide policy-relevant but policy-neutral assessment. The idea behind policy relevance is something in assessment called salience. As scientists, we might think, oh, there are so many cool questions about how the world works, but in an assessment you're usually focusing just on questions that are relevant to ongoing choices and actions. What are the risks that most need to be grappled with?
How do we understand the implementation of options, much less their evaluation, and how do we advance implementation through time, in particular facilitating learning? So policy relevance is basically saying this is addressing questions that matter in the real world, not just questions of scientific importance. Policy neutral, on the other hand, means, for example, that you avoid the word "should." The assessments aren't necessarily saying this is what the US should do or what California should do; it's an evaluation of the effectiveness of different approaches. And the key distinction there is that for almost any action in the real world, values matter, so you can't just say there is one scientifically true answer to how California should aim to meet its 2030 goal of reducing emissions by 40%. There are deep uncertainties, there are winners and losers, and there are a whole bunch of different approaches that could be adopted. I guess you could say that assessment is basically trying to take the state of the science, juxtapose that with all the values that matter, and show that in its richness, without just saying we've decided that this one value frame is most important. In terms of why pure neutrality is impossible, you could even say that how we approach statistics, where we weigh type one versus type two errors, is a value-based choice that actually carries a lot of implications for how you communicate risks. So even the most technical aspects of science carry a lot of baggage that affects how people understand what you're telling them. And assessment hasn't necessarily grappled with that level of complexity very well in a lot of past examples. Do we have tools to understand whether something like Hurricane Irma is due more to natural variability or more to human-induced climate change?
Great. So I think an easy way to think about it is that in almost every case you're not looking at climate change producing extreme events; it's really about how climate change is amplifying extreme events. I'll just point to a few different things in terms of the state of the science. First of all, if you jump back a decade, the scientific line was that you can't attribute any one event to climate change. That's changed a lot in that there's now something called single-event attribution. Here at Stanford, Noah Diffenbaugh is the leader in that research and has been very prominent in a lot of the dialogues around using that type of science. You're basically asking, for different extreme events, whether it's a heat wave or a flood or a drought or a wildfire: how much more likely was that event, in terms of its frequency or its intensity, due to our emissions of heat-trapping gases? There are a variety of different ways to approach single-event attribution, and what's happening now is that there's something called the World Weather Attribution project, where an event happens and research groups around the world immediately start their computer models and calculate, in near real time, how much more likely that event was. So we will definitely see, I'm sure, many analyses of Harvey and Irma in terms of exactly what happened. But I think the easy way to think about the backdrop more broadly is a few different aspects. First of all, any storm that strikes shore now is acting on top of sea level rise. For Sandy in New York City, it was acting on top of one foot of sea level rise. That amplification alone meant many thousands of additional homes flooded and many billions of dollars of additional damage. Harvey came ashore in Texas on top of half a foot of sea level rise. So you've got that kind of amplification.
Number two, a warmer atmosphere can hold more water, the Clausius-Clapeyron relationship, and so there's an increased potential for extreme precipitation. In Harvey, for example, the really amazing fact was how much water fell in that event. And third, a warmer surface ocean, and actually how deep that warm surface layer goes, is really important for determining how intense a cyclone, or, as we call them here in the US, a hurricane, can become. What happened with Harvey, for example, is that you had warm surface water that went especially deep, fueling both the amount of energy going into the storm and the amount of water it picked up. So those types of factors are kind of rules of thumb that apply across all the different cyclones happening today. And though there may not necessarily be an increase in the frequency of cyclones, what's been very clear through the science, especially around the North Atlantic and Gulf, is that there's been an increase in the most intense hurricanes. Can you tell us a little bit about the selection process for working group authors and what the path would be to be involved in a future version? Yeah, definitely. OK, so for the IPCC, I'll describe that, but I'll just mention that there are gazillions of assessments: there's a biodiversity assessment, there's an energy assessment, there are assessments of the Arctic, there's the National Climate Assessment. So I'll describe the IPCC, but similar processes come into play for many different assessment activities. Assessment authorship for the IPCC runs through country nominations, and anyone can self-nominate. For example, I went to the 1.5 degrees Celsius scoping meeting, not as staff but for the first time as an expert, where I put my name forward, and then the government, through the agency coordinating that, vets all the experts and advances names to the IPCC, and then a subset is selected.
That said, a really important way for students to become involved is through something called the chapter scientist role in IPCC reports. Basically, every single chapter for the IPCC had coordinating lead authors, who are responsible for delivering that assessment no matter what happens; then there are a bunch of lead authors; and then there's also a chapter scientist, and that chapter scientist is in there with the author team doing all of the assessment work and helping, especially in terms of bringing all of these experts together to make it all happen. OK, can you talk a little bit about methane emissions versus CO2? Great, OK, there are a whole bunch of different directions I could go with methane. One interesting aspect is to think about the different forcers, the different things that can change the energy balance and increase warming of the planet. Some of them are really long-lived, like CO2, as I described at the start, and some have a shorter-term life in terms of their bang, the warming they produce. Methane has a really big warming impulse that happens within 10 years of its entering the atmosphere, but then that comes down through time. So when we think about the endgame energy system with a stabilized climate, it basically has to have zero emissions of the long-lived forcers, but there can be continuing emissions of the short-lived forcers, and how much of those emissions there are is an important modulator of what temperature you reach, along with your total emissions of the long-lived forcers. Methane is interesting in that it has both natural and fossil-fuel-based sources. There are things like leaks from extraction, where you'd say we actually have good best practices available for reducing them, and there's a financial incentive to capture methane because it has value. At the same time, there are harder natural sources, like the methane that comes off of the permafrost, along with CO2, as it thaws.
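The short-lived versus long-lived contrast above can be made concrete with a simple decay sketch. The roughly 12-year e-folding lifetime for a methane pulse is a commonly cited approximation, not a number from the talk, and real atmospheric chemistry is more complicated than a single exponential:

```python
import math

# Illustrative sketch: a methane pulse decays with an e-folding lifetime of
# roughly 12 years (assumed), while much of a CO2 pulse effectively persists
# for centuries. This is a toy model, not a full atmospheric-chemistry result.

CH4_LIFETIME_YEARS = 12.0

def methane_fraction_remaining(years):
    """Fraction of a methane pulse still in the atmosphere after `years`."""
    return math.exp(-years / CH4_LIFETIME_YEARS)

for t in (10, 50, 100):
    print(f"After {t:3d} yr: {methane_fraction_remaining(t):.0%} of a CH4 pulse remains")
```

Most of methane's warming "bang" therefore lands within its first decade or two, which is why a stabilized climate requires zero emissions of the long-lived forcers but can tolerate some continuing emissions of the short-lived ones.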
Methane is also interesting in that there has been a proliferation in the science of understanding exactly what methane emissions are. In particular, here in the US, for example, if we take the EPA estimates, basically an inventory analysis where you say we've got this many cows and this many wells and here's our bottom-up estimate, as compared to the best top-down measures that essentially use inversion modeling, right now those numbers are at odds, and grappling with that methane budget is an incredibly important task moving forward. Last question. If you could change the IPCC's mandate, how would you change it? There are a lot of different ways I could answer that question. OK, so I think maybe the most important framing point is these three features of assessment. Salience: does it address questions that matter to anyone, to decision makers like the national governments that participate in the IPCC? Credibility: is the science right? Would other scientists say this is a good product? And then legitimacy: would the decision makers who are participating say that was a fair and inclusive process? That even though, as a decision maker, I can't open up the 14,000 references and have a hope of understanding them, I believe the process should have gotten the science right. So those three things are always in tension. For the IPCC, the big plum, in terms of what it represents on the global landscape, is the fact that national governments are the convening authority. But that big plum comes with a whole bunch of constraints in terms of what you can do in the realm of salience or credibility. For example, it's not necessarily a great place to try radical new innovations, because if you want to do something radically new, you have to have 195 governments approve it first, which is really hard to do.
That said, the IPCC is really good at looking at what's happened in other places and saying, that worked well there, that's a pretty conservative thing we can move into our process. I think what the IPCC represents, at its best, is a conservative assessment of what we definitely know about the changing climate as a basis for decision making. What it does least well is radically try new things or go down to the local scale; that global-to-local bridge is really hard. So the easiest thing I think the IPCC could do a lot better is how it links with all of the other assessment bodies, recognizing that there's no way it could ever be the agent that goes down to the city scale in every county around the world. But it could better link with all the national governments that are doing their own assessments, and with how that intersects with the private sector and nonprofits as well.