Good afternoon and welcome to today's energy seminar. Isn't it great to have a microphone here, especially with the mask on? Today, we have a real treat. This, I think, is our third annual exposition of the Stanford Energy Student Lecture Honorees. Here, as best I can figure, this is a deal where you work really hard in your lab for three or four years. You do exceptional cutting-edge research, and then your advisor nominates you to be included in the competition for the best student technology lectures. And that goes on all summer. I see Richard Sassoon and Maxine Lim, who run it here. And it's kind of like the Olympics. And I'm actually going to let Richard Sassoon take it from here. No pressure, you guys. I'm going to introduce my dear friend and colleague, Dr. Richard Sassoon, who's currently the executive director of the Stanford Energy Alliance. And when I met him, in 2003, he was the managing director of the Global Climate and Energy Project. Please don't ask him for any embarrassing stories about our travels to cutting-edge energy laboratories around the world. He just told me that he's written this up or was interviewed about this recently. So I'm not interested in seeing that. So without further ado, Richard is a renowned scientist in his own right, with a PhD in physical and analytical chemistry, and probably the most knowledgeable person about all aspects of a broad range of energy technologies. So Richard's going to introduce the panel and moderate their presentations, which I'm really looking forward to hearing. Richard? Thank you, John. I won't embarrass you today, don't worry. So welcome to this session where we showcase some of our top student researchers. This year, we're delighted to have four speakers who have been honored as 2021 Stanford Energy Distinguished Student Lecturers. So as many of you may already know, we hold this annual Stanford Energy Student Lecture Series every summer. And this year represents the 11th time that we've conducted this program. The goal of the program is really twofold. One is to help students better communicate key takeaway messages about their energy-related research to a broad technical audience, and two is to showcase some of the cutting-edge energy research that students are conducting right here on campus. So over the summer, we had 14 Stanford students give talks, and then a judging panel selected the top four. And we'll hear from them today. Before introducing them, let me just first thank the organizers of this program. Yufei Yang was the seminar manager. And she was helped by a number of student representatives. And then we have Maxine Lim, who has been doing this for all 11 years. And she helped coordinate everything. And as usual, everything went very smoothly. And then the judging panel was Steve Eglash, Jenny Milne, and Michael McKayla, who joined me. So let me now go ahead and introduce all four presenters. They'll each give their talks one after another. And since one of them has a rather tight schedule, we'll hold off on the Q&A until after the last talk. And then you can address questions to all four speakers. So our first speaker will be Peter Csernica, who is a sixth-year PhD student in materials science, working with Professor Will Chueh. His work focuses on understanding the atomic and electronic structure of positive electrode materials for lithium-ion batteries. And next up, we'll have Emily Lacroix, who is a fifth-year PhD candidate in Earth System Science. She's advised by Professor Scott Fendorf.
And the results of her research should help inform land management decisions to increase soil carbon storage. Then we'll have Julian Vigil. He's a fourth-year PhD candidate in chemical engineering, working with Professor Hema Karunadasa and with Mike Toney. His research focuses on halide perovskite semiconductors that could lead to more energy-efficient lighting and solar energy conversion applications. And then last but not least is Lily Buechler, who's a fifth-year PhD candidate in mechanical engineering. She's advised by Professor Ram Rajagopal. Her research interests include data-driven control, optimization, and simulation in different power system applications. So without further ado, I'll ask Peter to give the first talk. Hello, everyone, and thanks for the introduction. I hope we can live up to these lofty expectations. But anyway, so my name is Peter. And today, I'm going to talk to you about battery degradation through cycling-induced oxygen release. So first, I wanted to introduce how a battery actually works. So in general, a battery consists of a mobile ion, in our case lithium, which will be shuttled back and forth between a low-energy reservoir and a high-energy reservoir. And in general, the amount of energy that we can extract from this battery is the product of two different things. The first is the capacity, which refers to the number of lithium ions which will move back and forth in a given cycle. And the second is the voltage. And this is related to the energy that we can extract per lithium ion that moves. Now in my talk today, I'm only going to focus on one component of this battery, which is the positive electrode. And if we take a look at what the structure of the positive electrode material that I've been working with looks like, you'll see that this is a layered structure with layers of oxygen, lithium, and transition metals. And you'll notice that some of the transition metals are replaced with lithium atoms. And this is why it's known as a lithium-rich positive electrode. Now one property of this material that I'm going to focus on today is the transition metal oxidation state, or the average transition metal oxidation state. So we can see here that for this material, it's about 3.4 plus. But once we charge the material, we have to remove lithium from the structure. And then the situation becomes a little bit different. The structure now will look something like this. And there are many things that change once we remove such a dramatic amount of lithium from the structure. But again, I want to focus for now on the transition metal oxidation state, which has now been pushed significantly higher to about 4.6 plus. In general, this can be a good thing. Higher oxidation states do tend to give higher voltages for batteries. So this would enable us to extract more energy. But they also have a potential disadvantage as well. So you might imagine that there's no guarantee that this structure is still going to be stable once we've removed such a large amount of lithium from it, and in doing so, we've gotten such a high transition metal oxidation state. And so the degradation I'll be talking about today has to do with oxygen release, which can lower that transition metal oxidation state down to a more reasonable value.
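As a quick aside on the capacity-times-voltage point from a moment ago, here is a tiny worked example with round, hypothetical numbers (not values from this talk) showing why a voltage fade at constant capacity directly reduces the extractable energy of a cathode material.

```python
# Hypothetical round numbers, just to illustrate energy = capacity x voltage.
capacity_mAh_per_g = 250.0   # lithium shuttled per gram of cathode material
average_voltage_V = 3.6      # average voltage at which that lithium moves

# Specific energy of the cathode material in watt-hours per kilogram:
# (mAh/g) x V = mWh/g = Wh/kg.
specific_energy_Wh_per_kg = capacity_mAh_per_g * average_voltage_V
print(specific_energy_Wh_per_kg)  # 900.0 Wh/kg for this illustrative cathode

# A voltage fade of ~0.3 V over cycling at the same capacity costs roughly
print(capacity_mAh_per_g * 0.3)   # 75.0 Wh/kg of extractable energy
```

This is only meant to make the units concrete; the actual material, capacity, and voltage-fade numbers in the talk are different.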
So if we take a look at the electrochemical performance, we know that this structure actually is not perfectly stable over time. So here, what we're seeing is that as we cycle this battery over 250 cycles, with charge going from bottom left to top right and discharge going from top left to bottom right, the voltage during both charge and discharge is decreasing over cycling. And what this means practically is that we have less energy that we're going to be able to extract from this battery as time goes on, as we continue to cycle. You'll also notice that the x-intercept is not changing that much. So the capacity, the number of lithium ions that are actually going in and out of the structure in a given cycle, is actually almost constant. It only decreases by a couple percent or so over 250 cycles. And I'll come back to this point a little bit later. So the question that was really motivating us to start this research is what's causing the voltage to drop over time, so that we can design strategies to prevent that voltage decay and enable the batteries to maintain a larger proportion of their energy. If you're using an electric vehicle, you might want to use the same battery for 5, 8, 10 years. So we really need to keep a stable voltage for much longer than just 250 cycles. So as I've mentioned, I'm going to focus a lot today on the transition metal oxidation state. So how can we actually measure this? One way we can do this is with a technique known as X-ray absorption spectroscopy. And in this technique, we can come in with an X-ray of varying energy on our material. And if the energy of this X-ray is exactly equal to the energy difference between an occupied and unoccupied state, then the material can absorb that X-ray and promote an electron to a higher energy state. We can then change the energy of the incoming X-ray and record how many are absorbed as a function of energy. And in doing so, we can get an X-ray absorption spectrum. Now as it turns out, the X-ray absorption spectrum is highly sensitive to the transition metal oxidation state. And we can see that here for several manganese-based reference compounds with different manganese oxidation states. So in general, we can see from the graph on the right that as the oxidation state increases, we have a higher absorption energy in the X-ray absorption spectrum. And this is basically due to the fact that the core electrons are held on to more tightly if you have a higher oxidation state. So what happens if we look at our own material? So if we look at our own material as a function of cycle number, we can sort of see the opposite happening. So we have the pristine material in black there. And then as we continue to cycle it, it's actually shifting to lower energy, indicating that the oxidation state of the transition metal is dropping. So here this is for manganese, meaning we're exciting from a manganese 1s orbital. But the same thing happens if we look at the cobalt spectra. So there are three transition metals in this compound: manganese, cobalt, and nickel. And nickel stays about the same. This is just due to the particular electronic structure of this material, which I won't get into today. But in general, we can see that the average transition metal oxidation state is dropping. Often in these materials, oxidation state changes are associated with changes in the lithium content. But we have measured the lithium content of these materials directly over cycling. And you'll notice that first, there's not much of a change in the lithium content.
But secondly, it's actually going in the wrong direction to explain the spectroscopic changes that we see. Usually a decrease in lithium would be associated with an oxidation, or increase, of the transition metal oxidation state. But here we're seeing a small drop in lithium content and also a drop in the oxidation state. So we believe that this is not due to a change in lithium content, but it's actually due to oxygen, which has been released from the material very, very slowly over hundreds and hundreds of cycles. And this gives us a good explanation for why the voltage is dropping as well, because lower oxidation states of the transition metals are associated with lower voltages for the battery. So to dig a little bit more deeply into this phenomenon, in order to really develop strategies to prevent it from happening, the next thing I'll talk about is using the same principle of X-ray absorption spectroscopy, but now adding microscopy into the picture. So here are some results from a technique known as X-ray ptychography, the details of which I won't be talking about today. And what you can see is that now we've spatially resolved the transition metal oxidation state within individual particles from this material. So in the pristine material on the left, we have nearly a four plus oxidation state everywhere, except for maybe right at the very surface. But after even just 125 cycles, we have a significant amount of transition metal reduction even inside the bulk of these particles, even 100 nanometers or so from the surface. And we can see that even more clearly if we condense all this data into the plot on the left. So this means that this oxygen originating in the bulk of the particles does eventually get released over hundreds and hundreds of electrochemical cycles. So with these spectroscopy results in mind, and given that we found the material to be still a single phase, there are basically two potential atomic structures that could explain this. The first is this oxygen vacancy structure. This is the same as the pristine material, but now we've just simply removed some of the oxygen atoms and replaced them with vacancies. Another possibility is this densified structure. And this forms by losing oxygen and lithium kind of simultaneously and getting a structure with a higher transition metal content. Both of these can explain the drop in transition metal oxidation state. But the biggest difference is really the lithium content, because you do have to lose such a large amount of lithium to form this densified structure. And as you may remember, we did measure the lithium content directly. And also electrochemically, we saw that the capacity, or the number of lithium ions moving in and out, was nearly the same over cycling. And so we're able to conclude from this that the oxygen vacancy structure is dominant. And so basically what's happening over many cycles is that oxygen is diffusing very slowly from the bulk with a kind of vacancy diffusion mechanism and eventually gets released at the surface. So just to briefly mention some possible mitigation strategies for this. So if we think about our oxide particle, there are a few different areas that we can look at to try to prevent this. Kind of the most common thing that people have looked at is coating the material with something that would block oxygen but still allow lithium through. This turns out to be pretty difficult. We have obtained some coated materials from collaborators that seem to show the same amount of oxygen loss eventually.
So it's a challenging problem, but I do think it's potentially possible to do this. Since oxygen release is generally triggered by these very high transition metal oxidation states, you can imagine having a lower oxidation state right at the surface, where the process kind of starts, as a potential way to prevent this. And then finally, you can imagine if we can lower the oxygen diffusivity within the bulk, then that's another possibility as well. And one way to potentially do this is by stopping cation disordering. This has been suggested in sodium layered materials, and you can ask more about this in the Q&A if you're interested. So to sort of wrap up here, the main conclusions are that oxygen leaves the bulk material during cycling, and this results in the presence of oxygen vacancies, and therefore preventing this bulk diffusion and release of oxygen is gonna be very important for stabilizing the voltage over hundreds of cycles. And with that, I'd like to acknowledge my research group, which is Will Chueh's group in materials science. This is also a collaborative effort with people at both SLAC and Lawrence Berkeley National Lab as well as Samsung. So thank you. So hi, I'm Emily Lacroix. Thanks again for being here, and I'm really excited to talk to you about my work studying soils as a form of carbon storage. So soils are actually the largest fast-cycling carbon pool. They store over 2,300 gigatons of carbon, which is more carbon than all of the plants on earth and the atmosphere combined. And the size of the soil carbon pool is regulated by inputs and outputs. So the main input to soil carbon is actually plants. So plants bring in carbon dioxide through photosynthesis to build their plant parts, and those plant parts eventually end up in the soil. And then balancing those inputs, we have microbial respiration. So there are microorganisms living in the soil that use the soil carbon to derive energy through respiration, and that releases carbon dioxide. So in thinking about this balance of inputs and outputs, you could imagine if the inputs to soil carbon were greater than the outputs, soils could act as a carbon sink. And this seems like a really amazing strategy for mitigating climate change, except there's one big caveat, which is that the controls on carbon dioxide emissions from soils are really poorly understood. Getting more carbon into soils is pretty straightforward, grow more plants, but how do we know that that newly input carbon isn't just gonna be turned straight back into CO2? So this has been the subject of a lot of soil science research for multiple decades. And through that research, we know that one of the primary controls on soil carbon outputs is the role of minerals. So in soils, you have mineral surfaces, and carbon can become adsorbed to the mineral surfaces, where it's essentially unavailable for microbial respiration. You might be thinking we could just change the minerals of all the soil on earth, but it's not really practical. So first, minerals have a finite surface area, so there's only so much carbon that can stick to minerals. And secondly, it's just really impractical to change the mineral composition of a soil. And so this leaves us with this really big question, which is how do we manage agricultural soils to be carbon sinks?
And you might notice I skipped straight to agricultural soils. It's because in the contiguous United States, over 50% of the land is classified as crop or range land, meaning that there's already the infrastructure in place to monitor and manage these soils. So it's a good place to start for natural climate solutions. And this brings me to the topic of our group's work, which is anoxic microsites and their potential as a mechanism for soil carbon storage. An anoxic microsite is just a non-majority soil volume in which oxygen supply is slower than microbial demand for oxygen. So in other words, in an otherwise well-aerated soil, you'll have these pockets where there's strong oxygen demand from microbes, and oxygen diffusion can't keep up with that demand. So you're left with a pocket that's without oxygen, and those pockets are called microsites. And so within anoxic microsites, microbial respiration of soil carbon is slowed by approximately 90% on a per-volume basis. They can serve as trace sources of nitrous oxide and methane to the atmosphere, so it's not totally clear cut. And perhaps most pertinently, they're still really poorly understood and thus represent a really big opportunity but also a vulnerability for soil carbon storage. And so the goal of our group's work is to determine the contribution of anoxic microsites to soil carbon storage across different soil properties and management practices. And today I'm just gonna talk to you about a study that looks at the influence of texture and also climate, or in this case, moisture, on anoxic microsites. And before I dive in, I need to put in a quick soil physics lecture, which is that soil oxygen supply is slower in fine-textured and wet soils. So in the left panel, you can see for texture, if you imagine that an oxygen molecule needs to diffuse a net distance, in a finer-textured soil, the diffusion path length is a lot longer. So oxygen supply tends to be hampered or inhibited in fine-textured soils. In terms of moisture, oxygen diffuses 10,000 times more slowly through water than it does through gas or air-filled pore space. So whenever oxygen encounters a water-filled pore or is trying to diffuse into a waterlogged soil, there's gonna be really poor oxygen supply. And so because of these paradigms, we hypothesized that the contribution of anoxic microsites to soil carbon storage would be greatest in finer and wetter soils. To test this hypothesis, we actually collected soils from the Stanford Dish. So we collected soils from three different textures along a hill slope. And we applied two different moisture treatments to the cores. To measure the contribution of anoxic microsites to soil carbon storage, we've relied really heavily on this framework, which is the idea that if a soil has anoxic microsites that are contributing to soil carbon storage, and you were all of a sudden to aerate that soil, there should be some sort of increase in the steady-state CO2 efflux after that aeration event. And that increase in CO2 efflux should be commensurate with, or representative of, the carbon stabilized by anoxic microsites. And so we applied two different aeration treatments in the lab. The first was we incubated soils under a regular atmosphere, so like the air we breathe, which is 21% oxygen. And then another subset we incubated under a 32% oxygen atmosphere. And what we found was that the extra oxygen actually increased CO2 emissions from the sandy loam soils, which are the coarsest soils that we sampled.
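To make that efflux-based framework a bit more concrete before walking through the plots, here is a minimal sketch, with entirely made-up numbers, of how the carbon stabilized by anoxic microsites could be estimated from the jump in steady-state CO2 efflux after an aeration event. The function name and values are illustrative assumptions, not quantities from the study.

```python
def microsite_protected_carbon(control_efflux, aerated_efflux, duration_days):
    """Estimate carbon (g C per m^2) attributable to anoxic-microsite protection.

    control_efflux, aerated_efflux : steady-state CO2-C efflux in g C m^-2 day^-1
    duration_days                  : length of the post-aeration period considered
    """
    # The excess efflux after aeration is interpreted as carbon that had been
    # protected inside anoxic microsites and became available to microbes.
    excess = max(aerated_efflux - control_efflux, 0.0)
    return excess * duration_days

# Illustrative numbers only: a coarse-textured core whose efflux rises after aeration.
print(microsite_protected_carbon(control_efflux=0.8, aerated_efflux=1.1, duration_days=30))
# -> roughly 9 g C per m^2 released over 30 days once microsite protection is removed
```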
And to orient you a bit on this plot, because we'll see a similar one to it, each panel represents a different texture of soil, so finest is on the left and coarsest is on the right. The x positions are the moisture treatments. And then the black bar is gonna represent kind of the control CO2 efflux from the soil. And the red bar will represent the CO2 efflux from the more oxygenated treatment. And so you can see there's not really a big difference between the black and red bars, except for in the sandy loam soil. And the effect seems to be a little bit more pronounced in the wetter treatment. The second aeration technique that we applied was physical disturbance. So we took a soil and we disaggregated it. So we broke it out of its core, put it through a sieve and spread it in a broad area pan to aerate it as best we could. And this is also meant to sort of simulate tillage, which is a very regular practice in croplands. And what we found was that physical disturbance increased CO2 emissions from both the loam and the sandy loam soils, but there was no effect in the finest-textured soil. The effect seems to be a little bit greater in the wetter soils, but we need more replicates to be sure about the influence of moisture. So to revisit our hypothesis, which was that anoxic microsites would matter most in the finest soils, we weren't right. So our hypothesis was wrong. And this has sort of led us to a new, evolving hypothesis, which is the role of microbial oxygen demand. So if you think back to three minutes ago when I taught you about mineral protection, if you imagine a fine-textured soil versus a coarse-textured soil, a fine soil has a lot more surface area of minerals for carbon to stick to. And then a coarse soil, on the other hand, has a lot less mineral surface area. So as a result, in the coarser soils, a greater proportion of the carbon is free and available for microbial respiration. And so as microbes are trying to respire that carbon aerobically, it creates a big demand for oxygen. And so our thought is maybe the oxygen demand is actually driving the formation of the anoxic microsites. And so diffusion just can't keep up with that microbial demand. And so some conclusions from this work. Anoxic microsites may be most useful for leveraging carbon storage in soils with low mineral protection capacity, so coarse soils. And disturbance should be avoided in sandy soils to maximize soil carbon storage. And our next steps are to apply this methodology across the US corn and cotton belts. So I spent this past summer sampling at a few long-term tillage experiments across the US, and I can talk about that more in the Q and A. And with that, I want to thank all my funding sources and my wonderful lab group and the energy seminar for hosting us today. Thank you. My name is Julian and I am in the chemical engineering and chemistry departments, and I'm going to talk today about our work on understanding and trying to manipulate some of the defects that we see in halide perovskite semiconductors. And this is where I'm very glad I'm following Peter, because he gave a great introduction to electronic structure and defects. The materials that we're interested in, in particular, are semiconductors, and these really fall somewhere between metals and insulators in terms of electronic structure. So we can think about the occupied states and unoccupied states of a material relative to energy.
So a semiconductor will have an intermediate band gap between these two, and will have some interesting optical and usually electronic properties as well. So we can imagine now, in the energy landscape, where these types of materials come in is in a few different places. The one that we probably all know very well is in photovoltaic devices. So in this case, we want to absorb a lot of photons from the sun, and then we'll try and separate the electron and the hole to get some electrical power from our device. So here we want to have a low band gap, absorb a lot of light and then separate our carriers. You can also imagine basically the exact opposite situation, where we provide energy to overcome the band gap energy, and then what we really want to do is collect this light that comes out, or tune the properties and tune the emission from the material. So here we'll have a high band gap energy and then focus on the emission properties of our material. But to go back to solar absorbers, that's really kind of what we'll focus on today in terms of the materials and the applications. This is the architecture of a very efficient silicon solar cell. And if we zoom in in particular on the absorber layer, so this is the silicon, you might imagine this perfect silicon lattice where we have the same atomic structure extending in all three dimensions. However, we can have defects that occur in these materials. So if we now substitute one of the silicon atoms for titanium, we can really greatly affect the efficiency of the solar cell. So this can reduce the efficiency by more than 50%, even at super low concentrations. So considering the defects in these materials that are usually thought to be perfect is very important for real-world applications. And so now, where we come in, we study these halide perovskite semiconductors, which is this crystal structure here, where you have these metal halide octahedra extending in three dimensions. And really they're very old materials. So you can see they've been studied since the 1890s, but not really implemented into solar cells and optoelectronic devices until about 2010. So you can see now, there's a lot of excitement about these materials, because they really just gained a lot of efficiency in the last 10 years, essentially. So now we're trying to understand a little bit better what makes these materials work, what makes them degrade, a lot of different aspects. So one benefit is that they're very easy to synthesize, so we can make them near room temperature, and we can also manufacture them into thin films for devices very easily. But one of the downsides to the fact that they're easy to make is that they're also very easy to degrade. So we can imagine now, if I take a slice out of the crystal structure and we look at an atom here that's missing, this is a vacancy, just like we discussed before in the battery materials, and now you can get free movement of the halide through the material. And so this can actually be a bad thing. So you can imagine degradation processes and other polarization effects that occur in devices based on the fact that you have all these ions moving around. And this is now where we come in; we're inorganic chemists. So we tend to study the bulk properties of these crystals. And my two colleagues, Adam and Nate, were among the first to discover some of these defect reactions and then characterize a lot about the thermodynamics and the transport of these vacancies through materials.
So now this is a double perovskite crystal. So we have two metals rather than one, but it's the same structure. And actually, if you just leave a crystal like this out to sit, it will lose bromine over time. So it's a bromide crystal, and you're losing bromine gas. And that actually contributes to the electronic structure. So we see two electrons go in, and then we can see the conductivity increasing significantly over time. And then what's cool is that you can now expose it to bromine and it will go back to the initial state. So we know now that this is an equilibrium, where we have the perfect bromide lattice, which is in equilibrium with a vacancy, the halogen gas, and two electrons. So we can manipulate this equilibrium, essentially. And now where I came in, we've been studying this other material which has a very similar structure, cesium-104 iodide. We can grow these large, beautiful crystals, as you can see in the image there. And we've been studying their conductivity a lot, and using some scattering methods to characterize the structure as well. And we essentially see the same type of behavior, where the conductivity is increasing over time, which is indicative of this halogen exchange reaction. And one thing you'll notice is that the shape of this curve is really nice compared to the last crystal, and it actually gives us an indication that there's a diffusion-limited process going on. And so we've gone into the detail now of this material and the transport properties. So we can repeat this measurement at three different temperatures, or even more temperatures. And we can apply a diffusion model to it as well. So we're modeling the movement of the vacancy from the surface of the crystal into the bulk of the crystal. And what we get out of that are some kinetic parameters, so we can now understand what the diffusion coefficient is, how fast the vacancy is moving through the crystal, and also the activation energy, so actually how much energy it takes to migrate the vacancy through the crystal. So now, my PhD has actually focused a lot on seeing these defects and really understanding the structure as well as the electronic structure. And so we can have a real-space model of what this material looks like, with these metal halide octahedra, like I mentioned. And then if you imagine doing an x-ray scattering measurement, so you take a crystal and you hit it with x-rays, what do you expect to see in the x-ray scattering pattern, the diffraction profile? So you can see with a perfect material, you see a perfectly spherically symmetric Bragg peak. However, if you start to generate a lot of defects in the material, now you have strain fields in the material, and this will give rise to some asymmetry in your diffraction peak. So now we can see the shape is changing as we arbitrarily add more defects here. And we can see that in the extreme case where we have a lot of defects, now we have a very different profile in our x-ray scattering profile. So now we can go look at this in the lab and see if we can characterize this type of behavior. So we've done that a little bit. We've started to get some initial measurements. One of those that we can do in real space is to actually image this using electron microscopy. So now we're looking in real space at the structure, and if we can push down to atomic resolution, we should be able to see some of these vacancies and defects forming. And then we can also do the x-ray scattering measurement.
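Before turning to the scattering results, here is a minimal sketch of the kind of Arrhenius analysis described above for the temperature-dependent transport measurements: given vacancy diffusion coefficients extracted at a few temperatures, fit for an activation energy and a pre-exponential factor. The numbers below are placeholders for illustration, not measured values from this work.

```python
import numpy as np

# Placeholder vacancy diffusion coefficients (cm^2/s) at three temperatures (K);
# illustrative values only, not data from the talk.
T = np.array([300.0, 325.0, 350.0])
D = np.array([1e-12, 5e-12, 2e-11])

k_B = 8.617e-5  # Boltzmann constant in eV/K

# Arrhenius form: D = D0 * exp(-Ea / (k_B * T)), i.e. a straight line in ln(D) vs 1/T.
slope, intercept = np.polyfit(1.0 / T, np.log(D), 1)
Ea = -slope * k_B       # activation energy for vacancy migration, in eV
D0 = np.exp(intercept)  # pre-exponential factor, in cm^2/s

print(f"Activation energy ~ {Ea:.2f} eV, D0 ~ {D0:.1e} cm^2/s")
```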
So this is what I was talking about in terms of the simulated patterns. We can start to see initial results that show this asymmetry in the x-ray scattering profile, which gives us some indication that there are defects in the structure. So now we've characterized pretty extensively in the halide double perovskites that there's this defect reaction occurring. But of course, we also wanna motivate some changes that we can make to the crystal structure that could potentially disfavor this reaction or stabilize the material in a device. One of those ways is by doping. So actually, on purpose, implementing some dopants or defects into the material that will offset the effect that we see. So really what we wanna see is this electron concentration as just a flat line with respect to the x-axis here. However, we see this very slopey line that's basically following the vacancy concentration. So this is not good. However, if we incorporate some two plus dopant into the material at a higher concentration, now we can somewhat fix or stabilize the material. So now we see a flatter line with respect to that electron concentration. And then going even further beyond the double perovskites, we really wanna show, in some of these lead halides that actually show up in high-efficiency devices, that the same kind of process could be occurring, to establish how general this halogen exchange reaction is across the whole family of materials. So with that, I'll just wrap up by summarizing to say that we really characterized this halogen exchange mechanism pretty extensively. And we think that we need to have some sort of atomic-level or chemical solution at the local level that will disfavor this reaction, in order to really stabilize the device and then the module, because you can think about encapsulants and some other techniques, but really we're talking about very small molecules at the atomic scale. So we think with a chemical solution or atomic-level solution, we can really stabilize these materials. And with that, I'll just wrap up by thanking my advisors, Hema and Mike, the three folks who were most involved in this work with me, all the group members, funding sources, and some of the collaborators on the structural measurements. Thank you. All right, great. Hi everyone, my name is Lily Buechler, and I'm gonna be talking about my work on learning-accelerated power flow simulations. So I wanna first motivate this work by talking about how the power grid works and how it's changing. So traditionally, power systems have been centralized, with power generated by large-scale power plants. And that power is distributed to consumers through transmission and distribution systems. But as the costs of renewables and energy storage fall, we're seeing more and more of those resources installed, both at the utility scale and as distributed energy resources in both residential and commercial applications. And so as these changes occur, utilities are going to increasingly need better tools to analyze how these changes affect their system, right? So one of the main analytical tools that we use is called power flow simulation. Power flow simulation essentially allows us to analyze how power generation and demand affect voltages in a network. So voltages are important because most appliances and equipment are rated to operate at a specific voltage level. So for example, in your home, most of your appliances are rated to operate at either 120 or 240 volts.
And so utilities use control systems in order to make sure those voltages are close to those nominal levels. And so power flow simulation essentially allows us to model this relationship between power and voltage. Normally we think about power in terms of its real and reactive components, P and Q. And we talk about voltage in terms of its magnitude and phase angle, V and theta. So mathematically, power flow simulation involves solving the so-called power flow equations. The power flow equations are a nonlinear system of equations where power is defined explicitly in terms of voltage. But normally for simulation, we actually want the opposite mapping. We have the power injections at most of the nodes in a system, and we wanna calculate voltage. But because of how these equations are structured, we can't analytically invert them. And so we have to use iterative numerical methods in order to solve them. And so these numerical methods, like Newton-Raphson or fixed-point iteration, have been applied to this problem for many, many decades and have been highly studied and optimized. And for a single calculation, they are very efficient. But normally for analysis, we want to do these types of calculations as part of a time series simulation that involves lots and lots of power flow calculations. So at each time step, for example, we have some inputs to our power flow problem. We use a power flow solver, which is a numerical method, in order to solve that system of equations, and we get a solution. And we repeat this for every time step. And so this type of setup is often called quasi-static power flow simulation, and is one particular flavor of power flow simulation that is often used for steady-state type analysis. So that's things that happen at the time scale of, say, minutes to hours. And so there's a variety of different available simulators out there that do this type of analysis. And you can imagine if you, for example, have a really long time horizon, or you want to model uncertainty in some variable in your system and need to run a lot of simulations, you have to do a lot of power flow calculations, and that can be computationally expensive. So there's a variety of ways to speed up these calculations. For example, people often try to decouple the system of equations or use sparse methods for matrix decomposition. And those are used pretty heavily in this research field. Another way is to simplify the power flow equations by, for example, using a linear model, which is easier to solve. A more recent approach is to use machine learning to try to do faster simulation. So for example, you can run a power flow solver for a certain number of time steps, train a model to predict the solution from the inputs, and use that to completely bypass the traditional power flow solver. This speeds up simulation because generally, evaluating an explicit function is much faster than running a numerical method. And so this has been a really hot topic the last couple of years in this research space. And there have been lots of papers looking at what specific form this data-driven mapping should take, and it has been shown to speed up simulations considerably. But there are a number of challenges when it comes to actually implementing these methods in the types of simulators that utilities actually use. So one challenge is that previous studies often make a lot of simplifying assumptions about the power flow problem, which makes it pretty much impossible to plug these methods into the simulators that utilities actually use.
Another challenge is about generalizability. So a lot of these data-driven models are trained offline on datasets, and then you assume that training data and testing data come from the same distribution and that you can just apply it to a test set. But often that's not the case, because loading conditions can change, network topology can change, and that assumption doesn't always hold. Another challenge is the computation time of some of these data-driven methods. So often some of these, for example, deep learning based methods, are very accurate. You can get very fast predictions, but training them takes a lot of time, and that can outweigh any benefits from doing fast prediction. And finally, we found that the accuracy of a lot of these methods highly depends on hyperparameter tuning, which means they're not really as robust and applicable for people who don't have an ML background. So in our work, we developed a different approach that tries to address some of these challenges, and we implemented it in the GridLAB-D power flow simulation engine, which is a popular tool used by a lot of utilities and national lab researchers. So instead of completely trying to replace the power flow solver with an ML model, we use ML to kind of augment or accelerate the solver. So instead of training a data-driven model offline to predict the power flow solution, we actually update it online during the simulation, and that way it can adapt to changing input conditions, and that helps with generalizability. We also selectively decide when we want to use our approximation and when to fall back on the traditional power flow solver. And so this approach works best when the inputs to the power flow problem don't change that considerably from one time step to another, and we're able to avoid doing redundant computations and also learn from previous solutions to inform our prediction. So this framework speeds up simulation both by avoiding the more computationally expensive power flow solver as much as possible and by seeding the power flow solver with an approximate data-driven solution so that it converges more quickly. And so we implemented this approach in the GridLAB-D simulation engine and tested it on a variety of distribution systems. GridLAB-D uses a Newton-Raphson-based method that's based on the current injection method. It's one standard type of power flow solver, and it uses sparse methods for matrix decomposition, so it's already fairly fast. But with our current implementation, we're seeing a speedup of about three to five times over the fastest that that numerical method can do, and with a better implementation, we think that will be around five to eight times. With those computational benefits, there is a trade-off with accuracy, but the errors that we've observed are on the order of 1e-5 to 1e-3 per-unit voltage, which is still very acceptable for a lot of applications. So kind of in conclusion, we found that combining more traditional simulation methods with data-driven prediction is a promising approach for speeding up power flow simulations, and in the future, as we have more and more controllable resources in our power systems and a more distributed grid, it will be increasingly important to have tools to do both accurate and fast simulations.
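As a rough illustration of the approach described above, and not the actual GridLAB-D implementation, here is a minimal sketch of a quasi-static loop that keeps an online data-driven model of the power flow mapping, uses its prediction when the power mismatch looks small enough, and otherwise falls back on (and warm-starts) a conventional iterative solver. The `solver` and `model` interfaces are assumed placeholders, not real GridLAB-D or library APIs.

```python
import numpy as np

def learning_accelerated_power_flow(injection_series, solver, model, tol=1e-4, warmup=3):
    """Sketch of an online learning-accelerated quasi-static power flow loop.

    injection_series : iterable of power injection vectors, one per time step
    solver           : placeholder with residual(p, v) -> float and solve(p, v0=None) -> v,
                       e.g. wrapping a Newton-Raphson / current-injection solver
    model            : any regressor with fit(X, y) and predict(X), refit online
    tol              : power mismatch threshold for accepting the data-driven prediction
    warmup           : number of exact solves to collect before trusting the model
    """
    seen_p, seen_v, voltages = [], [], []
    v_prev = None
    for p in injection_series:
        # Cheap data-driven guess once we have enough exact solutions to learn from.
        v_hat = model.predict(np.atleast_2d(p))[0] if len(seen_p) >= warmup else None
        if v_hat is not None and solver.residual(p, v_hat) < tol:
            v = v_hat  # prediction is accurate enough: skip the expensive solve
        else:
            # Fall back on the numerical solver, warm-started from the best guess available.
            v = solver.solve(p, v0=v_hat if v_hat is not None else v_prev)
            seen_p.append(np.asarray(p))
            seen_v.append(np.asarray(v))
            model.fit(np.asarray(seen_p), np.asarray(seen_v))  # update the model online
        voltages.append(v)
        v_prev = v
    return voltages
```

The sketch only shows the control flow; in the real implementation the mismatch check, the model update schedule, and the seeding of GridLAB-D's current-injection solver are of course handled inside the simulation engine itself.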
So we plan to release this code open source as part of the SLAC version of GridLAB-D in the future, so that it's accessible to both researchers and utilities who utilize these tools. So finally, I'd like to acknowledge my collaborators: David Chassin at SLAC, my advisor Professor Ram Rajagopal, and Audie Tom and Siobhan, who are other students who've worked on this project, as well as the California Energy Commission for funding this work. And thanks for your attention, and I'm happy to answer any questions later on. Wow, those are four incredible talks. I find myself kind of daydreaming: in five years, will I see one of these people receiving an award, maybe for Nobel Prize-type work, or, since we're at Stanford, will they or their students do a startup that starts as a small company and then becomes the next Google? I think it's actually quite possible. So we do have a few minutes for questions. Anybody? Actually, I should say Marlies, our CA, has put the attendance sheets right outside the back and the front door. So please sign them, not the guest speakers but the students registered in the class. So any questions? I'm sure we have some aspiring technologists in the audience. Richard, you wanna ask one? I could ask one just to get things underway. These were all four excellent talks. And I think maybe I'll ask Julian a question. And that is, you left us hanging a bit. You told us the mechanism, but what are the ways in which you can address it? What do you see in the future for your work to try and address this issue? Thank you, Richard. Yeah, thanks Richard for the question. So this kind of relates, I think, to the fact that we've now characterized the instability quite well, but maybe don't have the answers in terms of stabilizing the structure in all cases. But the dopant, I think, is the best strategy that we can come up with so far, and one that we're actively pursuing to stabilize the material. So this is kind of what I discussed before, where we have an electron concentration that's moving around quite a bit in the left panel of this figure. And then if we incorporate a dopant, because it's also positively charged, that's the key thing. The vacancy is a positively charged defect, and so what we wanna do is dope with a two plus metal to replace the one plus metal in the structure. We think that if we get it in at a high enough concentration to replace the original one plus metal, it will stabilize that electron concentration, and that's really the key thing towards stabilizing the material in a device stack, for example. So I think the compositional tuning and the dopant engineering is the best way. Thank you. Thank you. My question is for Lily. I'm curious if you also measured the amount of cost improvement there was in addition to the speed improvement, and a follow-up question on that is where do you see some of the biggest applications? Do you think this will kind of be used in energy trading, or, like, where do you think these, yeah. Yeah, so by cost do you mean like money cost? Yeah, so often we talk about cost using a different tool that's called optimal power flow, which is a little bit different than power flow. It's the tool that we use to dispatch resources in power systems. And so in that type of tool, we think about cost in terms of what resources should we dispatch to minimize cost. So that's also another tool where ML has been used to accelerate analysis. So in power flow simulation, we don't really think about cost directly. You can derive cost from it.
But it's not necessarily as relevant. And sorry, what was your second question? Yeah, so where power flow simulation is most computationally expensive is for large systems and for distribution systems. So for transmission systems, you can frequently make simplifications to the power flow equations which, for example, make them linear and much faster to solve. In distribution systems, you normally have to use a more complicated model, and you have to use these numerical methods to solve them. So it's really distribution systems with a lot of resources, for example, a lot of controllable DERs and energy storage that people have in their homes, where it's gonna be more useful, and we're gonna see that increasingly in the future. I had a question for Julian. I was wondering if you could put into a bit more context how stability is important here. Is there some industry standard that you're shooting for? Are there some, I don't know, year-long tests that you can do on a perovskite photovoltaic to determine some final metric? Thank you for the question, yeah. So like I mentioned, we are pretty fundamental inorganic chemists, so we don't make devices ourselves, but we certainly talk with a lot of folks who study devices and worry a lot more about the stability. But I think a good place to start to discuss this is probably looking at the efficiency scales. So these efficiencies are for very small solar cells, and typically they're not stable enough to be actually implemented into a module or something larger than that. And so there are definitely industry standards and certain accelerated degradation tests that folks can do on devices. But yeah, so far we haven't actually gone into the device world yet personally, but we focus on the crystals and then try and basically derive as many insights as we can about the fundamental structure, to kind of collaborate with people who actually make devices. But in general, I think the key aspect in terms of the stability, more broadly speaking, is just the fact that you have a lot of these low-energy processes happening in the crystal, so you can move ions through very low activation barriers. And like I mentioned, that gives rise to a lot of degradation and some basic polarization, meaning you can apply a potential to your device and the internal gradient of ions will actually oppose that field. So there are a lot of undesirable effects that will occur, and we kind of collaborate more so than do the device engineering ourselves. So I have a question for Lily. I was wondering, sort of, you mentioned that robustness is sort of a big problem, right, with traditional deep learning methods. How does the accelerated learning help with that? Is it that it's at pretty small time scales, or are you sort of just using it in areas where you're stable, so that's not a problem? So I missed the first part of your question. Just about how does the accelerated learning methodology, like, enhance the robustness compared to traditional deep learning methods? Oh yeah, so I think the biggest difference between our method and previous methods is training offline versus training online. When you train offline, you basically need to predict what distribution of data you're gonna see online, and that can be hard to do. If you have a distribution system with tens of thousands of nodes and switches where the topology can change, you don't know what topology states you're gonna see pre-simulation, and so it's kind of intractable to be able to train a reliable model offline.
And so for our approach, we basically train a model online and assume that you're simulating at a fast enough resolution so that your state doesn't change a whole lot from one time step to another, and if it does, you just fall back on the traditional solver and accumulate enough solutions until you do have a good model. So kind of putting the learning in the loop with the traditional solver, instead of just using it by itself, adds more robustness. I had a question for the third presenter. Just with the map that you showed with the different sites that you're gonna go to for the soil samples throughout the US, were there certain factors that influenced your particular decisions for specific sites, and if so, like, what were those factors? Yes, that's a great question. They're really glamorous locations. So we did have some reasoning behind our choices. So the first was that it's a partnership with the Soil Health Institute. So they have a series of site partners that they already have an existing project with, so we had a big list of sites to choose from. And then from there, we were really interested in five key variables. So climate, texture, tillage, mineralogy. So I guess that's four, but climate is moisture and temperature. And so we looked for sites that spanned essentially a gradient in each of those variables. So we ended up with a matrix of sites where we could compare three sites from three different temperatures or three different precipitation regimes. Yeah, does that answer your question? Yeah, thank you. Thank you all for presenting. Those were really interesting and fascinating presentations. I had a question about battery cycling time from the beginning. So absent any change, what should we do to optimize our battery life if you own an electric vehicle or even a laptop or a cell phone? What's kind of like best practices for that? Right, yeah, thanks for the question. So I guess there's maybe a couple of things I'll say for this. One is that the particular material we were looking at is sort of like a next-generation kind of material. So that is one that is potentially cheaper and could have higher energy density, but it's not currently in use. It's actually not totally clear how much of this oxygen release, this slow diffusion, is a problem in the materials people use today. I think personally, I think it is a little bit of a problem, but most people think it's not. So I don't know, we'll see. But yeah, I guess one thing that is kind of interesting is that, for the battery in your phone, for example, typically the more you charge a battery, the kind of worse it gets. So in terms of stability, you can sort of think of it like, when you charge a battery, you're like pumping water up a hill or something like that. So you're always creating something that's less stable when you charge it. And then when you discharge, you're going back to something more stable. So in general, you kind of wanna stay at low states of charge. But that being said, your phone manufacturer kind of limits you. It doesn't let you charge high enough to the point where things start to get really bad. So they're kind of trying to fix that already, but I guess in general, there's always sort of a trade-off between how far you charge the battery and how much energy you can get out of it and then how long it lasts.
So even using the same material, you can make the battery last longer if you just don't charge it as much, but then you get less energy. So the companies already have some calculation for how far they'll let you charge it. Yeah. My professor one time was kind of joking that Apple or somebody should give you an emergency button on your phone that, if you really need it and it's out of power, just lets you completely destroy the battery but use it for another 10 minutes. In theory, that could work. So I don't know. That's an idea. Anyway, yeah. Great. Unfortunately, I think we're just about out of time. So I'd like to sincerely thank Peter, Emily, Julian and Lily for excellent and very inspiring presentations, Richard for coordinating, and all of you for great questions. Thanks very much.