Stanford University. My name is Liang Min. I'm the Managing Director for the Bits & Watts Initiative and also run the national alliance for the School of Sustainability. We are doing the final session for today, and as a moderator I want to say we saved the best for last. As a host I cannot say that; every panel is great, it's the best. I have three goals for this session. The first one is really to reflect and repeat the five project goals that Ines mentioned at the beginning of the day. The second goal: we have these wonderful consortiums. We have 16 universities, four national laboratories, and two industry consortiums working together on this initiative. This is just a starting point, but we are not building this from scratch; we have great foundations. So I want to discuss with my panelists what we have done in the past that laid out a foundation for Ernest. That's the second goal. The third goal is really to talk about the fundamental research we expect to do for Ernest in support of the five goals we have for this project. So I would encourage everybody to think about it, to carry your questions and ask us questions to engage in this conversation. Joining me today, from right to left, we have Jeff Dagle, Chief Engineer at Pacific Northwest National Laboratory; Professor Priya Donti, Assistant Professor at MIT and also co-founder of Climate Change AI; Professor Jim McCalley from Iowa State University; and Associate Professor Ram Rajagopal from the Civil Engineering Department at Stanford University. My first question to all the panelists: as an opening remark, please share with the audience what you, your organization, or your consortium has done in the past that gives Ernest this great foundation to start with.
So I will start with Jeff. Jeff and I worked together almost 10 years ago on the grid modernization consortium, for a long time, on both the operational side and the planning side, and he has been involved in many National Academies reports. Some of those reports you have seen on the first slides Ines presented this morning, and he was also part of the team for the 2003 blackout investigation. So you have been involved in a lot of national efforts regarding reliability and resilience. Could you please share with us what lessons we learned in the last 20 or 30 years from the national efforts in this area?

I took some notes. Thank you, Liang. It's been great collaborating with you and the rest of the team. I did want to hit a couple of high points. Liang mentioned some of the National Academies study reports done in the past that are pertinent to some of this research. The first one I'll mention briefly: in 2016 there was a report on analytic research foundations for the next-generation electric grid, and that included some recommendations about algorithm work and also pointed out the need for open-source software development. It ties in very nicely with the NAERM effort that JP talked about earlier, and that's been a recurring theme throughout the whole day. One of the things that report talked about is the use of synthetic data for doing some of the analysis or algorithm development, so that results could be transferred to industry but tested in the academic world with synthetic data. One of the chairs of that study committee was Dr. Tom Overbye, and he's really been working on developing these synthetic datasets. The next one I'll mention is the report that came out in 2017; the cover was on the earlier slides this morning that Liang mentioned. That report really dug into resilience. It pointed out that resilience and reliability, even though they're related, are very different concepts.
And it talked about recommendations for enhancing the nation's resilience. I can get into some of those recommendations, but I'll just point out a few quick examples. There's a lot you can do with architecture to reduce the criticality of key components. One of the study authors, Terry Boston, who ended his career as the CEO of PJM, had a saying that the best way to secure a critical substation is to not have any critical substations. Right? So there are things you can do architecturally to improve resilience. We also had a whole chapter on cyber resilience. There are a lot of people interested in cybersecurity, making sure the systems are protected against adversaries and that sort of thing, but cyber resilience is a deeper concept than that. How many utilities have a real black start capability to bring up their cyber systems if they've been compromised? When you do have a major cyber incident, recovering from it can really be painful if you don't have tools for that. There's a whole section on distributed energy resources, and I think DER is a really fascinating thing to think about in the context of resilience. In the early days of DER deployment, and a lot of that was really led out of California with some of the early rules utilities had for integrating DER into the system, what you wanted to do was get the DER tripped off quickly if there's a system disturbance, for safety reasons: you don't want the feeder energized if there are line crews out there restoring the system. Once you get past a certain amount of DER deployment, which is where we are now, the current IEEE standards talk about ride-through capability. You don't want to trip off quickly; you want to allow the protection schemes to operate but otherwise stay on and ride through the disturbance. Where we need to go next with DER is using these resources as a building block for bringing the system back on.
Islanding in an emergency, helping with black start restoration, that sort of thing. We're not quite there yet with those standards. So there are a number of those types of things in that report. It also talked about some non-technical things as well. Earlier today we were talking about metrics, and I really liked Gene's comment that it's not only what you can measure but what you can model, which is great. But one of the things this Academies report talked about is how do we pay for resilience? Reliability is paid for by ratepayers. For resilience, because of the societal benefit of having a resilient infrastructure, maybe a blend of ratepayer- and taxpayer-funded initiatives might make sense. There's precedent for that. After Superstorm Sandy, the state of New Jersey put in a few billion dollars to help improve the resilience of their critical infrastructures. There are many examples where governments have done grants; there's currently a grant program that DOE has for enhancing resilience. That's a model for how we can have a blended taxpayer- and ratepayer-funded way to improve resilience. Those types of things are in that report. The third National Academies report I'll mention is the Future of Electric Power report. This came out in February of 2021, the same month that Texas had its issues; coming out with a great report that same month was kind of interesting. That report was much broader than resilience, and one of the key highlights I'll mention is that when we think about competing priorities of the system, like resilience, affordability, sustainability, and low carbon emissions, sometimes technology can lift all of them, but sometimes you have to make engineering trade-offs. And really having a good decision process for how we're going to make those trade-offs I think is really key.
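The trade-off decision process described here can be illustrated with a small sketch: given candidate grid investments scored on annualized cost and a resilience index, keep only the options that are not dominated (cheaper and at least as resilient under another option). All portfolio names and numbers below are invented for illustration; this is a toy, not a planning tool.

```python
# Toy illustration of engineering trade-offs between cost and resilience.
# Each candidate portfolio maps to (annualized cost in $M, resilience score,
# higher is better). All figures are made up.
portfolios = {
    "harden substations":       (120, 0.60),
    "add storage + microgrids": (150, 0.75),
    "new transmission":         (200, 0.72),
    "do nothing":               (0,   0.20),
    "storage + transmission":   (310, 0.90),
}

def pareto_efficient(options):
    """Return options not dominated (<= cost AND >= resilience,
    strictly better in at least one) by any other option."""
    efficient = {}
    for name, (cost, res) in options.items():
        dominated = any(
            c <= cost and r >= res and (c < cost or r > res)
            for other, (c, r) in options.items() if other != name
        )
        if not dominated:
            efficient[name] = (cost, res)
    return efficient

for name, (cost, res) in sorted(pareto_efficient(portfolios).items()):
    print(f"{name}: ${cost}M, resilience {res}")
```

In this made-up example, "new transmission" drops out because "add storage + microgrids" is both cheaper and more resilient; the surviving set is the frontier a decision process would then choose along.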
And that ties in very nicely with some of the panels we had earlier today. So I'll pause here on the Academies reports, and maybe I'll have a chance to talk more about some of the things I've done in the past.

Wonderful. Thank you, Jeff. Let's move on. Priya, Ines talked this morning about the wonderful work of her team, and for disclosure, you were Ines's PhD student and worked together on many of the topics you heard this morning. More importantly, when Secretary George Shultz was here, he always encouraged partnerships between MIT and Stanford. More than that, you're also leading a community organization, Climate Change AI, which really answers some of the questions the audience has been asking about how data science and AI can help support the energy transition and bring equity and resilience into the formula. Can you share with us the work that you have been doing, that MIT has been doing, and that Climate Change AI, CCAI, has been doing?

Absolutely, yeah. I also brought some notes. The MIT team on the Ernest project is made up of seven faculty members and their respective postdocs and students, and I think we are really just excited to bring a lot of past work on open-source metrics and open-source tools to bear on this project. One of those sets of tools is tools to help us understand, at a more spatially granular level, what the emissions impacts are of using an additional unit of electricity on the power grid, but then going even further than that: what are the societal damages associated with it? So when we think about things like the social cost of carbon, but also things like the impacts on people's health and well-being that come, for example, from pollution effects.
How do we actually get tools that allow us to quantify what's going on on the grid with respect to that today, and then understand the benefits associated with transitioning to better strategies for grid management? Some of this work on what are called marginal emissions factors and marginal damage factors, and trying to calculate them historically for the U.S. grid, is work that Ines, myself, and many others have been doing, and it has really been built upon: Ines shared earlier the work from herself, Jacques de Chalendar, and Sally Benson on real-time estimates of emissions on the grid today. So I think having these tools to quantify emissions and damages from grids is going to be really important to provide a baseline for what's happening today and how we change those emissions- and damage-related effects going forward. In addition, Jeff gave a really great overview of the kinds of ways we need to think about resilience and reliability on the grid, and there's been a lot of great work there, but as Jeff pointed out, a lot of concreteness is additionally needed in the ways we measure and benchmark these things, and in how we think about the contextual factors and the locational factors associated with them. Several faculty members at MIT, Marija Ilic, Saurabh Amin, and Jessika Trancik, have looked at different aspects of trying to come up with metrics for this. For example, when we do have to make hard trade-offs about reliability in different places, can we come up with differentiated reliability-of-service metrics that take into account conversations with communities about things like the needed level of reliability or willingness to pay and equity-related considerations?
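The marginal emissions factor idea mentioned above can be sketched very simply: regress hour-to-hour changes in system emissions on hour-to-hour changes in demand, so the fitted slope reflects the marginal generating units rather than the average mix. The data below is synthetic with a planted slope; real analyses use historical generation and emissions records, and published methods are considerably more careful (regional, temporal, and fuel-mix stratification).

```python
import numpy as np

# Synthetic hourly series: demand (MWh) and CO2 emissions (tons).
# The "true" marginal factor of 0.45 tons/MWh is planted for the demo.
rng = np.random.default_rng(0)
demand = 30_000 + 5_000 * rng.standard_normal(1000)
true_mef = 0.45
emissions = 10_000 + true_mef * demand + 300 * rng.standard_normal(1000)

# Regress first differences on first differences: the slope is a
# (very simplified) marginal emissions factor estimate.
d_demand = np.diff(demand)
d_emissions = np.diff(emissions)
mef, _intercept = np.polyfit(d_demand, d_emissions, 1)
print(f"Estimated MEF: {mef:.3f} tons CO2/MWh")
```

Multiplying such a factor by a per-ton damage estimate (health, climate) is one way the "marginal damage factor" extension works, which is why spatial granularity matters: the marginal unit, and who breathes its pollution, differs by place and hour.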
Can we think about metrics that stem from, quote-unquote, attacker-defender frameworks, or really from an understanding of: if something happens on the grid, what is your recovery strategy, what is the cost of that recovery strategy, and how do you incorporate that understanding into your notion of what the cost of reliability or resilience is? And then, as we start to plug things like electric vehicles into the power grid, what do we need to think about with respect to the quality of service to the electric vehicles, or the effects of those electric vehicles on things like peak demand or solar curtailment or other aspects of the grid? So there are really a lot of angles to this, and I think there's a lot of prior work to bring to bear on how you actually write down the different axes of effects you need to think about and actually quantify them.
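One minimal way to make the recovery-cost notion concrete is an expected-cost metric: enumerate hazard scenarios, attach an annual probability and a recovery cost to each, and sum. The scenarios, probabilities, and costs below are invented purely for illustration; real frameworks (including the attacker-defender formulations mentioned above) are far richer, but the arithmetic shows how a mitigation's value can be bounded by the change in the metric.

```python
# Sketch of a recovery-cost-based resilience metric: annual expected
# recovery cost over a set of hazard scenarios. All numbers invented.
scenarios = [
    # (name, annual probability, recovery cost in $M)
    ("major storm",         0.10, 50.0),
    ("cyber incident",      0.02, 200.0),
    ("heat-wave shortfall", 0.05, 30.0),
]

def expected_recovery_cost(scens):
    """Sum of probability-weighted recovery costs ($M/year)."""
    return sum(p * cost for _name, p, cost in scens)

base = expected_recovery_cost(scenarios)
print(f"Expected annual recovery cost: ${base:.1f}M")

# A hypothetical mitigation (say, better cyber black-start tooling)
# that halves the cyber recovery cost; the drop in the metric bounds
# what the mitigation is worth per year under these assumptions.
mitigated = [
    ("major storm",         0.10, 50.0),
    ("cyber incident",      0.02, 100.0),
    ("heat-wave shortfall", 0.05, 30.0),
]
print(f"With mitigation: ${expected_recovery_cost(mitigated):.1f}M")
```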
And then, in addition to metrics, there's a whole focus in this consortium on the actual open-source tools and decision-support systems we can use to make progress on addressing some of these issues, and the MIT team has definitely brought in a lot of prior work on open-source tools that try to model not just the electricity grid but also how the grid interacts with other systems like natural gas or hydrogen or carbon and various other things we need to think about. Work from Dharik Mallapragada, Saurabh Amin, and Marija Ilic, and from Jesse Jenkins at Princeton, also part of the consortium, provides tools like GenX for power system modeling, DOLPHYN for power-hydrogen coupling and modeling liquid fuels, JPoNG for power-gas, and dynamic monitoring and decision systems to couple planning and operations on the power grid. There's this whole suite of tools that I think we can really bring together, and also potentially augment with machine learning, given that as you start to bring together all these different systems and model them at much larger scale, you start to be faced with really large-scale optimization problems that you have to solve; so how do you use machine learning or other tools to try to solve those faster? The last thing I'll bring up is work from Christoph Reinhart at MIT on the building side of things. Again, this consortium is looking not just at the bulk power system but at microgrids and also buildings, and how they all interact with each other. Christoph has previously put out UBEM.io, an urban building energy model, with which you can really start to think about how you simulate how buildings are actually operating in a city, and how you use that for urban planning, for the reduction of carbon dioxide emissions, or for building-level strategies and coupling buildings with grids. So I think there are some really cool tools
here that I think we're excited to pull together for this.

Wonderful, thank you, Priya. Jim, Priya talked about AI to accelerate optimization and simulation. I remember the last time we worked together, probably more than a dozen years ago, we worked on high-performance computing machines, on how we can speed up time-domain simulation toward real time, and it was wonderful work. And in the United States, where we do not have a national transmission plan, and I think Jeff may comment more on that later, you have in the academic world really promoted the concepts of the macro grid and inter-regional transmission. So can you share with us the recent work you have done in this area, the Iowa State power engineering program, and maybe other consortiums you have been involved in?

Thank you. It was a pleasure to hear both of you talk, and I look forward to your comments as well, Ram. I actually am very pleased to be back here in the Bay Area, because I spent five years in the 80s working as a transmission planning engineer for PG&E, at One California Street, next to the Embarcadero. So I love the area and love being back here. It was really there that I started learning and understanding, as a very young engineer, 20-something years old at the time, what reliability evaluation looks like: transmission planning, generation planning, the integration of the two. And resilience wasn't a word then, right? We didn't really understand it as a contextual way to think about the kind of work we were doing. I've had really excellent opportunities since that time. I left PG&E in 1990, went back and got my doctorate, and then moved to Iowa State University, where I am now, in 1992, and have been there ever since. I'm in Iowa now and glad of it, but I've had the opportunity to look at extreme events in a few different ways. The first: if you remember, almost 20 years ago now, Hurricanes Katrina and Rita really destroyed
the Gulf Coast in all sorts of ways. It took six weeks really just to bring customers back, and more importantly, what I learned in that extreme event was that it's not just about customer interruptions, it's not just about bringing the distribution system back, although it is that; there's another dimension for many of these extreme events, not all of them, in that they can have a long-term impact on the wholesale price of energy. What we learned from Katrina and Rita, if you remember, is that they took out about 80% of the Gulf Coast gas supply, and at the time we didn't have hydraulic fracturing, so most of the U.S. gas came from the Gulf, and almost overnight we lost 80% of that supply. All of the gas-utilizing utilities in the United States felt that, and the subsequent impact on electric prices was also observable, and not just for a few weeks or a few months but for almost a year. It was incredible. What we were seeing was a 10-15% increase in the wholesale price of electricity throughout the United States, and in certain places in the Northeast it was higher, and so forth. So this kind of impact, this kind of sequence of impacts, the way these extreme events propagate through our society, is a really complicated but important feature to understand. On the other side of that coin, I was sitting in my office in 2020 in Ames, Iowa, at about 9 a.m., and it was a beautiful day. I looked out my window an hour and a half later and it was pitch black. Go figure; I thought I was in World War 3. But we had what we now call a derecho, and within a half hour the town in which I live was really destroyed. It usually takes me 5 minutes to get from my office to my car and another 5 minutes to get home; it's a town of 60,000 people with the students. But that day it took me an hour to get home, because I was having to figure out ways to avoid the trees that had fallen all over the
infrastructure, poles were down, etc. So this was the other extreme: we had a huge impact on our distribution system. And I would offer that these two dimensions, the impact on wholesale prices and the impact on customer interruptions at the distribution level, are two reasonable ways to think about the extreme events we tend to worry about when we look at resilience. So maybe I'll stop there, with one more comment, and Liang mentioned it: I've been doing a lot of work over the past 15 years on what I refer to as multi-regional transmission. You've heard of intra-regional transmission; that's what all the RTOs do, and they do it very well. And since FERC Order 1000 we've been hearing about inter-regional transmission, between the regions, between PJM and MISO, between SPP and MISO. The really interesting feature I've been working on, which I think we're just starting to think about in a serious way, is multi-regional transmission. I would offer that we're already, in some sense, making those designs when we build or design the transmission infrastructure for the East Coast offshore wind and the West Coast offshore wind; those are inter-regional transmission grids when you look at the pictures of people's designs, and now all we have to do is really connect the midsection. So hopefully I'll have a little more time to talk to you about multi-regional transmission as we go through the panel discussion here today.

Wonderful, thank you, Jim. Next, Ram: you're my colleague, and actually a partner in crime; with Ines, we co-directed the Bits & Watts Initiative, which laid down the foundation for many of the things we are doing here. And Arun mentioned during his opening remarks this morning a lot of the wonderful work you and Arun have partnered on regarding, you know, DeepSolar, the SAIDI/SAIFI calculations, and the undergrounding efforts. So can you elaborate a little more on what you have done in the last several years that really helped lay down the foundations for Ernest?

Yeah. The basic principle of the work we have been doing here, and the "we" here is a collective, Ines and myself, starts with the acknowledgement that traditionally, infrastructure design is a supply-and-demand problem solved from the top down: you aggregate all the demand that you have, you have a criterion that you need to meet with a certain likelihood or probability, and you build enough supply to match that; and then, as things happen at a large scale, it gets distributed spatially across a region. But as we start to look into the grid today and the issues we are all facing, there are concerns about communities, individuals, people. And when you take a closer look at the grid and try to understand, from the perspective of a community, what resilience means and what reliability means, we discover that we lack a few things. First, we lack tools that help you map what resources these communities have deployed, and how different policies have affected adoption patterns of different technologies, for example solar, storage, etc. Second, we need tools to project and understand what the reliability is at this local level, and how it gets affected by different events, your climate events. And third is the influence of climate itself: how does it impact infrastructure? Traditional climate data is not really prepared to be combined with infrastructure analysis, so you need to figure that piece out. So what we try to do at Stanford is create tools that we have released in the open source, both the results and outputs of our tools as well as the tools themselves, so that anybody can reproduce what we have done, to actually close these gaps. It ranges from being able to map all the solar panels in the United States and the date of their installation, like Arun mentioned this morning, down to
what is, or will be, the largest reliability database for the United States, where we have accumulated not just the federal data that's available; one of my students went and collected every possible record from libraries and public sources, calling utilities, and built out the longest time series of reliability for every county in the United States. And you'll say, well, why do I need that? Well, if you want to understand, for example, climate resiliency, these are rare events; I do need long records to start to understand them. So that has been the driving principle, this community-based thinking. And we then close the loop by looking at a few opportunities. One is, we have done surveys with actual communities, in work that Ines has done together with a researcher from my lab, June Flora, where we found out, combining the smart meter data plus weather indicators and survey responses, what the actual impact of climate events on people is. Do people actually care about electricity being super reliable? To my surprise, most people are actually okay if for a few hours during the year you run out of electricity. Yes, it's upsetting; the real issue starts when it's multiple hours, multiple days, and things like that. And I think that's a change in perspective on how we design the grid, because if you think there is a single reliability that applies to everybody, or if you ask people in these communities what the value of electricity is to them economically, what the value of lost load is, they can't come up with these numbers. They don't know what reliability they need; they don't know the value of lost load; but they know the experiences they can have. So we're starting to close the loop through the survey mechanisms and figuring out how to incorporate them into engineering design and optimization. And then going all the way to microgrids, and we heard it from Sambor, if he's still here, where we tend to think, yeah, it would be great if we could all operate off-grid. The reality of it
is, if you look at the adoption of technology right now, the wealthiest people can operate off-grid and the poorest people are more and more dependent on the grid. So on average maybe you are improving, but if you look at the community level you may not be improving, and how could you mitigate that? Then you have to start thinking about what policies are actually working, and we find, for example, that commercial solar works to close this equity gap; and what if, you know, commercial sites could offer microgrids that connect to the communities around them? These are the kinds of questions we want to be able to answer in Ernest: integrating tools from multiple areas, taking a community perspective, and coming up with actionable insights. That's the basic goal.

Wonderful. If I allowed them to talk, they could spend the remaining 25 minutes all talking about history, but we need to look forward. So Ernest has five goals. In the last almost two hours we spent a lot of time deep-diving into the pilots; that's one of the goals, where we try to replicate the models across the regional pilots, the island pilots, and the city pilots. The first goal we have for Ernest is to establish the baseline metrics for resilience, for emissions, and for equity. The second goal we have for Ernest is to develop open-source U.S. and North American databases and tools. You heard the wonderful work by multiple organizations, more than just the four here, but also the other 12 universities represented in this room. So I will start with Jeff, and then we'll go in this order: can you share with us, based on your historical work, how you think Ernest can help achieve the two goals we talked about here, the metrics and also the open-source models and tools?

Absolutely. On the metrics side, as everybody has been pointing out, there's some interesting work that has started, but we're not there yet in terms of really having widely adopted, firmly established metrics for resilience. The reliability metrics are
pretty good, but the resilience metrics have a ways to go. As part of the Grid Modernization Initiative that was started about 10 years ago, there were some projects looking at developing a framework for metrics, and one of the things that was clear for resilience metrics is that you have to model and simulate them. You can't wait until these events have occurred to measure your resilience; you have to do some predictive analytics to understand how your system is going to be resilient before the big event you're planning for happens. So it's really more about preparedness and what the system is going to do in response to different hazards. That was one of the Grid Modernization Laboratory Consortium projects, with a multi-lab team, and it ties in very closely with the modeling. NAERM, which JP talked about earlier, and some of the other co-simulation tools like HELICS, which can bring together different domains and do co-simulation, were also supported by DOE under the Grid Modernization Lab Consortium. In the current round this DOE program is morphing: the Grid Modernization Initiative has recently been reformed, there's a new set of task teams, and there's a summit in Washington, D.C. next week. I'm not really advertising it, because registration is full, so if you're not already signed up, sorry; but that's going to be an event to really kick off the new DOE GMI vision. We're developing a technical roadmap, and there's also a pretty large cybersecurity part of that summit as well. This modeling work also ties into some other related things that are going on. Liang alluded to, and Jim talked about, this multi-regional planning, and there's a project currently under way that PNNL and others are all working on together called the National Transmission Planning Study. This is really looking at continent-wide modeling, and the goal is: how do we encourage transmission to enable more renewables to achieve our clean grid goals? The nation
wants to have a decarbonized grid by 2035, and transmission is going to need to be a key part of that, because we want to tap into solar in the desert Southwest and wind in the Midwest. We're really looking beyond what the FERC Order 1000 regions are doing, looking at it on a national basis. That project is combining expansion tools with production cost models and power flow analysis for doing resilience and extreme-event analysis and that sort of thing. It's all part of the DOE Building a Better Grid Initiative through the Grid Deployment Office, and, as has been mentioned many times throughout the day, there is a lot of federal funding right now floating around the system with the bipartisan infrastructure bill and other sources of funding. Another thread I'll pull on a little is: how do you validate these models? How do we know, other than waiting around for big events to occur? As an industry, the electric power industry is really driven by events, right? The 1965 blackout drove a lot of changes, and during my career arc there was the August 10, 1996 blackout, which was a big deal; the August 14, 2003 blackout, which was a big deal; Hurricane Katrina was a big deal; Superstorm Sandy was a big deal; what happened in Puerto Rico was a big deal; what happened in Texas in 2021 was a big deal. These are things we respond to, reactively. But there are also ways to validate models without waiting for those things to happen, and one of the other projects I've been involved in in my career is related to synchrophasors and PMUs. In fact, it was pretty cool for me personally when Dr.
Chu, the former Secretary of Energy, pulled out the phasor measurement units as his example of impact from the American Recovery and Reinvestment Act from 10 years ago, because that was stuff I was working on. I've been leading the North American SynchroPhasor Initiative on behalf of DOE since it started in 2006, and the main purpose of that is to gather data on the grid so we can use that information to enhance the accuracy of our models. In the early days of synchrophasors that was great: we had a grid full of synchronous machines, and gathering data 30 times a second, time-synchronized, you can really do a lot with that and analyze the dynamics on the system, oscillations, things like that in the electromechanical realm. But now we have all these inverter-based resources, and they're doing things that are unexpected. We had the Blue Cut fire, if you've heard of that; it happened in the desert Southwest part of the country, and it was a normally cleared line fault that was interpreted by inverter-based resources as an over-frequency event. So they thought they would be good citizens of the grid by curtailing their generation output during the time that fault was being cleared; and again, it was a normally cleared fault, cleared within a few cycles. That is not what we want on the grid: a line fault followed by a couple hundred megawatts of generation loss. Then the same thing happened in Texas, in Odessa; they called it the Odessa event, and then it happened again later and was even bigger. So NERC is really troubled by this trend of inverter-based resources not behaving correctly during these transients and faults, and there's a lot of work to be done in terms of enhancing the models of that. I'm hoping that Ernest can help contribute to it. I think the collective horsepower of all the universities and labs that are involved in the
project can dig into aspects of it, but it's bigger than even Ernest can tackle. This is something the whole industry is going to have a lot of reckoning to do over the next several years.

Priya, can you share some thoughts, and be more specific on the emissions metrics? You've been involved in marginal emissions, you heard about consumption-based emissions accounting this morning, and also open source, because you've been leading a lot of open-source AI and modeling efforts. Can you share some thoughts on how Ernest can help achieve these goals?

Absolutely. Succinctly, I think the thing that Ernest is really bringing to the table is stakeholder-driven, or use-inspired, research. In some sense the mechanics of a lot of what we'll be doing as a consortium are familiar: trying to make models more granular, trying to connect models up. But the ways in which you do that, and the ways in which that's useful, you only really understand, or see come to bear, when you're actually working with the organizations that need to use those models, or the end users who will be affected by their consequences. As you saw with the pilots earlier, every single pilot had some end user, some utility or system operator, somebody who would be a user. The goal is to really stress-test these models, to see how useful they are and what technical challenges emerge that we have to address in order to make them more useful, rather than making assumptions based on computational tractability that don't really reflect what's going on on the grid or in our buildings. So, succinctly, I really think it is this stakeholder-engaged research that drives us to develop the tools in ways that will be useful when the rubber hits the road.

Let's move on. Jim, can you share how we can create this? Because FERC wants it out there, and transparency is really one of the key components. How can the open-source tools and models support your vision of multi-regional connections?

Yeah. To support multi-regional connections, as well as really any other kind of solution, I think it's incredibly important to be able to evaluate both the impact of a particular solution on the resilience of the infrastructure and its impact on the normal operation of the infrastructure, because there are indeed some solutions for resilience that don't really contribute to the normal operation of the infrastructure. Case in point: a very good colleague of mine at Iowa State has a wonderful R&D activity building mobile energy systems. Essentially, they are crates on the back of a truck with a diesel generator, solar, and storage, and during an event like the one I mentioned earlier, when our lights went out in the course of an hour one morning in 2020, you can pull these crates to critical loads and supply them for as long as you need. That's a wonderful opportunity to build in the minimization of outage time for different loads during an extreme event, but it doesn't do much for the normal operating condition of the infrastructure. And indeed, there's transmission that could be built that would help with some kinds of resilience issues as well as normal operation. So we need to look at the different kinds of impacts of solutions, and then have the tools to evaluate both sides of that coin, so that from a user's point of view you can say: I can get this much resilience, and I can get this much benefit in terms of normal operation; what's the tradeoff? Can I dial in a desire on one side and a desire on the other and see if I can reach it, and study the balance between the two as you probe and understand different solutions? I call this a resilience-based capacity expansion planning application, so it focuses on
investments, but it also has the capability to understand the impact on the infrastructure with respect to resilience-type solutions. I'll stop there.

Ram, one thing I learned from you is the importance of communities: anything we produce on equity and resilience has to go down to the zip-code level, to see what the impact on communities really is. Can you share some thoughts on how the things we've produced in the past can help Ernest achieve the goals we have here?

I think one of the exciting things about Ernest is that, by integrating our ability to produce analysis, simulation, and optimization at this much more granular level, we can now plug that picture into what some of the other speakers have talked about. There is a traditional process for capacity expansion planning, and there are established ways in which people design microgrids; that's the existing practice. So how do we connect this higher-resolution data, and the insights we get out of it, to the tools that do the tasks we have essentially standardized on in the past? The other aspect that I think is super exciting is that it's not just about a resilient grid; we are also talking about a decarbonized grid. Having these two goals in mind at the same time is very challenging, and it's not something we have really put our heads together to think about how to achieve. I think that is also very, very exciting, and I believe our increased-resolution modeling can help us understand it.

Okay, before I switch to the other goals, I want to stop here and take any questions from the audience regarding the baseline metrics, that is, reliability, resilience, emissions, and equity, and the open-source models and datasets.

You need spatial resolution to pin down locations, but there's also the need for long temporal datasets to capture those extreme events, and usually in models we either do high spatial resolution or long data, so I'm just curious how we balance those in the open models.

I think that's a great observation. The models are usually tuned for a specific niche of the system, and the grid itself is an amazing system: on the temporal scale we're doing things at the planning level, where we're trying to figure out what to build, all the way down to within a cycle for protection and control, and everything in between. On the spatial side we've got continent-wide models where you're modeling the whole interconnected system; sometimes, as for the National Transmission Planning study, we've got West and East together, because we're modeling the economics of the DC ties between them, so truly continent-wide things, all the way down to really needing to figure out a microgrid or what's in buildings. So it's a great question, and I'm not going to have a crisp answer for you. But one of the strategies we've taken in projects like HELICS and NARM, for example, is that we don't necessarily want to do too much by putting all of that very disparate spatial and temporal data into one model and trying to make sense of it. What we want to do is take these various tools, each really optimized for a slice of that, and then build the boundary conditions so we can do co-simulation: build a simulation framework that lets us leverage each of these very specialized tools studying a piece of the problem, and then, depending on the scope of what you're trying to answer, link it to the adjacent tools, and through better modeling of those boundary conditions really enhance the result. One example of a boundary condition is between transmission and distribution. Historically, for most of the existence of the grid, all the distribution system needed to have at the substation was voltage and frequency, and then the designer and operator of the distribution system will
take care of all the protection and controls along the feeder; they just expect to have the voltage and frequency within tolerance at the feeder. What might happen on the distribution system that could reflect back into the transmission system was something very limited, like some demand response programs. But with the advent of more distributed generation on the feeders and the ability to actually have flow coming back into the transmission system from the distribution system, which might change throughout the day depending on the solar and things like that, you really need co-simulation to model those boundary conditions. Hopefully that answered your question.

Anyone else want to respond to this question quickly? I'd use the word decomposition, not only from a formal optimization point of view but also from a sort of manual modeling point of view, where you use different models in a manually decomposed way. I think both are applicable here, to some of the comments Jeff made.

I think we have another question in the audience, in case we have time for it. Okay, we have another question; let's go. Oh, there are two; let's rotate. This one is from Princeton, and it's a continuation of the previous question: how are we going to bring the climate modeling community and the energy modeling community together to address this issue? If you consider the climate community, the resolution of the models is usually pretty coarse, maybe one-month or one-year resolution, while as the energy community we typically need, for example for a capacity expansion model, one-hour resolution. At the same time, if you consider distribution, we need much finer spatial resolution, which we do not usually get from climate models. So any insights on how we bring these two communities, climate modeling and energy systems, whether distribution, transmission, or any other community, together to address this question? Thank you.

Let's have Priya answer this one, and everybody will have a chance to respond to one of the questions. What I'll say really briefly is that there's actually some really exciting work trying to use machine learning to downscale the outputs of climate models, which often have really coarse spatial resolution, to the more granular spatial resolution we need to model impacts on power infrastructure. In summary, it basically works like this: you run a climate model on what it says happens today or in the past, you take the more granular weather data we have about the present, and you learn a mapping between the coarse spatial output of the climate model and the finer-grained weather data we have. The hope, when you create a climate projection for the future, is that the mapping between the coarse output the climate model gives you and the fine-grained weather it implies in a particular place remains roughly consistent. There's a lot of work to be done to make that better, but I think it's a really promising approach for bringing those two communities together.

Okay, let's move to this gentleman. I had a question about the baselining of metrics: resilience, equity, decarbonization, and reliability. These are important metrics. If one assumes they are functions to be learned from observable data, and we understand they are context-dependent, community-driven, and dependent on the level of abstraction, I just wanted clarification: are we thinking about learning the functions individually, or is the goal to learn them jointly? That is, should, say, equity and decarbonization be learned jointly, or is there merit in teasing out the differences between them and defining them individually? I think this is an important question about baselining, so, any clarification on that? Maybe one of you who hasn't answered any
questions so far. I think you've done research on this topic before.

So, I think these things are very naturally correlated with each other. Reliability is about the availability of power when you need it, and resiliency is your capacity to recover from events. Now, obviously, certain technologies will afford a certain reliability and a certain capacity to recover. Similarly, I think equity then brings in the interplay between policy, economics, pricing, models, and technology as well. And the other factor you mentioned was decarbonization; I think that's the overarching principle in Ernest, that we are on a march towards decarbonization, so these analyses have to be done in that context. Now, when you ask whether to learn this jointly or separately, I think there are several steps even before getting there. The first is defining how we measure these four dimensions properly; the second, of course, is how you learn them. I think it's going to depend on the scale and the combination of tools, and that's part of what we need to discover with Ernest, because there is no clear-cut standard today on how to do this.

Wonderful, let's move on. We have about three to five minutes, so let's do a little rapid fire. We still have two questions left to discuss, and one is the workforce. The event yesterday, as many of you who were there saw, almost became a recruiting event among the different government agencies, beyond my expectation. So we see the huge need not just for a resilient system but also for a resilient and skilled workforce. I want each of you to take a very quick 30 seconds to respond: as a researcher, a university, or a national lab, how do you respond to industry and government needs regarding the workforce for this energy transition? Let's start with Ram, then Jim and Jeff.

I think as a university you respond in three ways. First, the students we produce: they can work in government, but they can also be excellent interns. Second, the classes and the materials that we are creating: making them open and accessible to everyone, so that others can build this capability of training the workforce based on what you're doing. And third, and I think this is also super important, those of us here at Stanford have the ability to go and advocate for resources to be put into the development of this workforce. We hear all the time that this is super essential, and Stanford is not going to train every single person who does this, but we can definitely advocate for more to be put into the workforce.

I want to say that coordinated internships are a really good idea. What I mean by that is, when you have a Ph.D. student for three, four, or five years, depending on many factors, that's enough time for at least one internship, and I would argue two. If they are three or four months each, then you're investing six, seven, eight months, maybe a year at the most, of that student's time in internships, and at two organizations, not just one, so that the individual can compare and contrast the different cultures, the different people, the different networks of those organizations. Even better, make one of them an engineering-oriented kind of experience and the other a social-slash-regulatory kind of experience. In my mind I'm thinking: if I could have my way, three months under Josh Burns, who spoke earlier today, and then another three months at MISO, and then come back to me and do the greatest Ph.D. dissertation ever.

I would add one more: more than just Ph.D.
students, but master's and undergraduate students too. That's what we are doing here at Stanford: we send students to the ISO, we send students to the PUCs, to the California Air Resources Board, to Alaska.

In addition to those things, industry will come explicitly to the university to get education on these topics, and I think a lot can be done to make sure that entities that are working on, or need to, decarbonize energy systems, but that have traditionally had fewer financial resources to pay into these programs, are reached. I think there's a lot that can be done to expand the reach of these programs to those kinds of entities.

PNNL, like all the other national labs, has a pretty strong internship program; in fact, I started through that route. We also have a partnership with Washington State University called the Advanced Grid Institute, and part of that is very heavily focused on workforce development. We have something called the Distinguished Graduate Research Program, where Ph.D. students spend their first two years at the university and their second two years at the national lab, for a four-year program. And then, for on-the-job training for people in the utility industry, a lot of my projects over the years have had training and workforce development as part of them. For example, in NASPI we've done training programs for system operators to better understand how to use wide-area time-synchronized measurement for better operational decision-making and enhanced situational awareness, so tutorials and training that are focused on utility people.

Wonderful. The last but most important goal for Ernest is really how to engage underrepresented communities. We have been discussing this the whole day, starting from the first panel, so I want to give each of you about ten seconds for a very quick closing remark: share your thoughts on how to engage underrepresented communities. Let's start with Jeff and go back to Ram.

Yeah, these pilot projects that were briefed today, and I don't even know if that's the whole list, there are a lot of pilot projects as part of Ernest, are really fantastic opportunities where engagement of these communities is very central and key. I also think this is pervasive across many other DOE programs, in terms of a real emphasis on engaging the broader stakeholder community, including disadvantaged communities.

Yeah, I think we definitely have a huge responsibility as a consortium to view both studied expertise and lived expertise as first-class citizens, and to really make sure they both are brought to the table in the way we're conducting these projects. I think this is a really huge focus in the way we're going to try to move forward on these projects.

I don't know the word, but I'm thinking of the following, and don't laugh at me: techno-economic-environmental-social engineering. I don't know if I got everything in there, but you know, you listen to us, and we have dedicated
engineering processes and methods, and we're talking about the community. And what is the community, exactly? Underrepresented, et cetera. It's big, it's huge, it's large. Quick story: I was at a MISO meeting recently with 346 participants, and they call them stakeholders, not MISO folks; there were five MISO engineers presenting the essence of their long-range transmission planning results. They do this every six weeks, for three hours. I just want to make the case that we're only beginning to learn how to do that; we've got a long way to go on stakeholder engagement.

I think there are several levels. First of all, recruiting students to work on Ernest who are from underrepresented communities; I think that's a duty we have if we want to address this issue. Second, making sure that our students, when they focus on these problems, actually work with the community, actually engage, going beyond the campus boundaries and being actively involved. And third, I also think there's a lot of output from Ernest that will be able to influence policy, and that's another avenue for addressing these inequity issues.

With that, I would like to conclude this session. I would like to thank Jeff, Priya, Jim, and Ram for sharing the wonderful work that has been done, and we look forward to what we're going to do for Ernest. Appreciate it.
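The machine-learning downscaling idea raised during the Q&A, learning a mapping from a climate model's coarse output to finer-grained local weather on historical data and then applying it to future projections, can be sketched very roughly as follows. This is a minimal illustration, not anything from the Ernest project: the data values, variable names, and the choice of a simple one-variable linear fit are all illustrative assumptions (real downscaling work uses many predictor fields and far richer ML models).

```python
# Minimal sketch of statistical downscaling: learn a mapping from a
# climate model's coarse grid-cell temperature to a local (station)
# temperature using historical pairs, then apply that mapping to a
# coarse future projection. All numbers below are made up for
# illustration; a linear fit stands in for a real ML model.

def fit_linear(coarse, fine):
    """Least-squares fit: fine ~ a * coarse + b."""
    n = len(coarse)
    mx = sum(coarse) / n
    my = sum(fine) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(coarse, fine))
    sxx = sum((x - mx) ** 2 for x in coarse)
    a = sxy / sxx
    b = my - a * mx
    return a, b

def downscale(coarse_series, a, b):
    """Apply the learned coarse-to-fine mapping to a projection."""
    return [a * x + b for x in coarse_series]

# Historical pairs: coarse grid-cell mean vs. local station temperature (degC).
hist_coarse = [10.0, 12.0, 15.0, 18.0, 22.0]
hist_fine = [12.1, 14.0, 17.2, 20.1, 24.0]  # the station runs a bit warmer

a, b = fit_linear(hist_coarse, hist_fine)

# Future coarse projection from the climate model, downscaled to the station.
future_coarse = [11.0, 16.0, 23.0]
future_fine = downscale(future_coarse, a, b)
```

The key assumption, as noted in the discussion, is that the learned coarse-to-fine relationship remains roughly stationary under the future climate, which is exactly where much of the open research lies.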