Good afternoon, welcome to today's first energy seminar of 2023 and the first of the winter quarter. Thank you all for coming. Before we get started, since this is actually also a class, I'd like to introduce the team that puts the energy seminar together. I'm John Wyant, the faculty director. This is Sarah Weaver from the Precourt Institute, who's the seminar coordinator and outreach manager. And this year we have Akruti Gupta as our CA. Akruti, fortunately for us, took the class about a year ago. So if you're a student and wondering what's going on, we don't have time to go over all that now. Just log into the Canvas website, and it'll explain it all. And if it doesn't, you can contact Akruti. So today I'm very pleased to introduce our kickoff speaker, who's a really good one, given the state of the world. No, he's not going to talk about flooding, but he will talk about electricity grids, which is probably at least as exciting to most of us. His name is Alex Stanković, and he's a distinguished scientist at SLAC National Accelerator Laboratory up the hill here, up Sand Hill Road, I guess I would say, at least the way I go up there. He has a very, very distinguished background in the area of electricity networks. He has both a master's and an engineering doctorate from the University of Belgrade, which I think was in Yugoslavia last time I looked, and a PhD, also in electrical engineering, from the Massachusetts Institute of Technology. He has taught at Tufts University and a few other places, is on many advisory boards, and has done a lot of industry consulting. So if you look at his topic here, The Saga of Electric Energy Networks: from Ode to Lament and Back, the idea, and I think I confirmed this in our preamble chat, is that he's going to look back at how we got to where we are today, going way back, which is in itself an interesting story.
But I think the method in that madness for this kind of seminar is to look back, and then use that to project forward and see how we could do better than we're currently doing in the future. So to me, he's kind of an example of the old John F. Kennedy dictum: some people see things as they could be and say, why not? So Alex, take it away. Thank you. Thank you very much, John. Thank you all for coming to my seminar. I will try to pace myself roughly through these five stops that you see. I would like to talk first about some issues with current systems. I will propose a decomposition, which will help us make some problems quantifiable. I will talk about the current and coming efforts to electrify things that are not electrified, like transport and some industries. And then I will present two research vignettes, work that I participated in, which spans the spectrum from very high-level, almost policy-type work to some fairly technical work on system identification. So with that, why do we have this dual and sometimes conflicting perception of electric energy systems? Well, here's a little reminder that 20 years back, electric energy systems were declared to be the greatest engineering achievement of the 20th century, the century that brought us so much exciting stuff. So this is certainly a very strong statement. On the other hand, on the bottom, you see a snapshot from roughly four months back, after Hurricane Fiona hit Puerto Rico. And you can see that there's essentially no electricity on the island. A million and a half customers, four or five million people, completely without electricity. So if something is so good, how come it is failing in such a major way? I would like to start with some context. This is a very busy slide, so please forgive me for too much detail on it. This is a standard map of energy flows in our society. Lawrence Livermore Lab has been doing these for 50-plus years. This is one fairly recent one.
Again, very busy, but it does have some key ingredients that I would like to point your attention to. On the left, you see different sources of primary energy. In the middle, you see some conversion, typically through electricity, as you can see here. And on the right, you see these pink rectangles, which are the main users of energy: residential, commercial, industrial, and so on. In these so-called Sankey diagrams, the width of a flow corresponds to its size. So you can see how the different sources participate: you see that gas is very much there. You can see that petroleum is even bigger, but on the bottom, in green, you see that it's kind of doing its own thing; it's mostly connected to transportation. And the transportation box on the right is not connected to almost anything else in a substantive way, though there are cross connections there as well. The number on top that you see is roughly 100 quads. Quads are quadrillions; these are 10 to the 15. This is a huge number. And if you think about what it is, quadrillions of what? Quadrillions of British thermal units. British thermal units are fairly small: a British thermal unit is roughly what it takes to heat a pound of water by one degree Fahrenheit. Or apparently, it was motivated by the size of matchsticks in 19th-century England, so they were probably pretty substantial matchsticks, but the energy content of one of them is roughly a British thermal unit. But this number, 97-something times 10 to the 15, is huge. Actually, the square root of this number is roughly the population of our country, 300 million, 330 million, right? So that means that for each of us, each American, there are roughly 300 million BTUs, give or take. If you think of how much energy that is, that's roughly 10 kilowatts being run nonstop throughout the year. That's substantial, right? 10 kilowatts is not small. So for each of us, in this room or anywhere else, that's it.
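The speaker's per-capita arithmetic can be checked in a few lines. A minimal sketch; the 97 quads and 330 million population are the round numbers quoted above:

```python
# Back-of-the-envelope check of the per-capita energy figures quoted above.
# Assumed round numbers: 97 quads of US primary energy, 330 million people.
QUAD_BTU = 1e15                 # one quad = 10**15 British thermal units
BTU_J = 1055.0                  # one BTU is roughly 1055 joules
SECONDS_PER_YEAR = 365 * 24 * 3600

us_primary_btu = 97 * QUAD_BTU
population = 330e6

per_capita_btu = us_primary_btu / population             # ~3e8 BTU per person
per_capita_kw = per_capita_btu * BTU_J / SECONDS_PER_YEAR / 1000

print(f"{per_capita_btu:.2e} BTU per person per year")
print(f"{per_capita_kw:.1f} kW continuous per person")
```

This reproduces both claims: roughly 300 million BTUs per person per year, which works out to close to 10 kilowatts running nonstop.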
So the overall flows of energy are just huge. And I know that there are other ways to quantify flows, like exergy, which is very popular locally, and some other ideas. But I will stick to this diagram because of its simplicity, and I will still argue that it gives us some useful numbers. Not everything, but there is something useful here. A more important point: this diagram doesn't change very much. Here is the diagram from 10 years before. It looks almost exactly the same. The total is almost exactly the same. You see that some things are a bit thicker, especially coal; I will go back and forth, and you see the black line is thicker here, right? And gas is a little thinner than 10 years ago, but most other things are the way they were. So there's a message there for us: energy systems, these infrastructural systems, change very slowly. And that also means that if we want to change what we see around us today, we have to start today, because it will just take a long time for the change to manifest itself. So if you think, why electricity? Why should we think about electricity? Why is it an important box there? Well, I would say it has several advantages: efficient transport and utilization, precise control, and it can play nicely with other energy vectors like hydrogen or methane or ammonia. These are all very good things, and that's the reason why electric energy networks are all around us. It's not without problems, though. First, production of electricity is a dicey thing. Renewables are our preferred solution, but they're highly variable, and we'll talk later about how we can address that aspect. Storage at scale, and I'm thinking about utility-size storage for, say, three days or more, is also not easy. And maybe the problem that's going to bite us more and more is the required materials.
The materials needed for some of these changes are either not there, or they happen to be at places that are not easily accessible. So we'll talk about this as well. So electricity does have a role to play in future energy systems. But then why networks? Why are networks important? Well, I would say because of societal expectations of electric energy systems, or any energy system; there are many, but among the most important are reliability and resilience. I will talk about each of these in detail, trying to distinguish between them. But these are also engineered systems, meaning that they have to be designed for both normal and faulted operation. And often the faulted operation is the one that imposes the more binding constraints. They are also infrastructural systems: they are operational 24/7, 365 days a year, so any time they are not is actually a major downside of their operation. Now I want to show you a picture of a place where I lived for a long time, three decades: New England. This is a picture from last summer, just a grab from the website of the New England ISO, the independent system operator. On the left, you see a rough diagram with the New England states. You also see prices at different nodes, because it turns out the network has some constraints, so those prices are not the same. If you look at the numbers, these are prices per megawatt-hour. So this is actually not bad. It's a wholesale price, but it comes to $0.12 a kilowatt-hour, which is pretty good, at least from our perspective here. But you see that the underlying system is actually quite complex. On the right, you see an internal representation of the system: the different states, the major substations, transmission lines, and so on. So there are levels of complexity, if you wish, in the system.
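The unit conversion behind that quoted price is simple: wholesale prices are posted in dollars per megawatt-hour, and dividing by ten gives cents per kilowatt-hour. The $120/MWh below is an assumed round figure matching the 12 cents mentioned above:

```python
def usd_per_mwh_to_cents_per_kwh(usd_per_mwh):
    """Convert a wholesale price in $/MWh to cents/kWh (1 MWh = 1000 kWh)."""
    return usd_per_mwh / 1000 * 100

print(usd_per_mwh_to_cents_per_kwh(120.0))  # 12.0
```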
If you look at this a little more, it so happens that the New England dynamical model, the so-called New England 39-bus system, which is the network you see on the left, is one of the standard benchmark examples that people who study the dynamics of power systems use time and again. It has 10 generators and 39 nodes. The lines are transmission lines, the short bars are buses at major substations, and the circles are generators. This is a very well-known example; it goes back many years, and it's used the world over for validating different ideas, especially control ideas. But you see, each of these circles used to be a physical generator, a unit that you could walk up to. These are fairly major units. Actually, most of them are physical. Number one is not: number one, which you see in the lower left corner, is a representation of New York. And that's actually an important point, because almost every network we study is part of a larger network, so we have to find good ways to represent the outside world so that our conclusions have some value to them. But now these circles are becoming something else. What you see on the right is what these circles already look like at some places, or may look like very soon. You see many renewable sources, symbols for wind and solar and so on, energy storage, industrial customers who are selling and buying electricity. So what used to be represented in this example as just a physical generator is actually something else now. And that has important consequences. I will show you some dynamical simulations. On the right, with these nice, smooth curves, you see transients that occur in old electromechanical systems with large physical generators, a couple hundred megawatts each, spinning happily. This is a transient where you have a short circuit and then the system recovers in roughly two seconds. That suggests that these are pretty fast systems, because this is a pretty large area of the country.
Within two seconds it is decided whether the system is stable or not. This is so-called transient electromechanical stability. And that's where things are. On the left, you see what's coming. If you replace these generators, these physical electromechanical systems, with agglomerations of inverter-based generation, like solar, you get the transients that you see on the left. They are also stable, right? But you see these very fast wiggles, which suggest that the dynamics is becoming a good deal faster, maybe an order of magnitude faster. And also, the transients of these new sources are largely determined by the settings of their controls. In other words, control has much more authority than it used to have in the old electromechanical system, which is both good and bad. It's good because there is hope of achieving responses that we couldn't achieve before. But it also means that if we set it wrong, we will get instability more often. So that's coming our way. Challenges in operation are not limited to transients at very short timescales. This is a 24-hour diagram from the California Independent System Operator, from roughly a year and a half ago. You see power in megawatts vertically and the hours of the day horizontally. There are two curves. One is the load, the overall load that, in this case, the California ISO is overseeing. And the green curve underneath is the net load, what would have to be supplied by the ISO's other resources. This is known as the duck curve, for obvious reasons: it looks like a duck. And you see the problem happens at, say, 5 or 6 in the afternoon. Because as the solar power goes away, the other sources have to pick up. In this case, you see they have to pick up 10 or 15 gigawatts in the space of an hour or two. That's tough. Even if you have plenty of energy, it's hard to follow. And this curve is getting steeper and steeper. So much so that you may ask, will this belly touch zero? And yes, actually, it can go under zero.
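The evening ramp the duck curve imposes can be sketched with a toy model. The profiles and numbers below are purely illustrative, not CAISO data; the point is that net load, which is load minus solar, steepens as the sun sets:

```python
# Toy duck curve: hourly load and solar profiles in GW (illustrative only).
hours = range(24)
load = [22 + 8 * max(0.0, 1 - abs(h - 19) / 5) for h in hours]   # evening peak
solar = [14 * max(0.0, 1 - abs(h - 12.5) / 6) for h in hours]    # midday peak
net = [l - s for l, s in zip(load, solar)]                       # the "duck"

# Steepest hourly upward ramp that non-solar resources must follow:
ramps = [b - a for a, b in zip(net, net[1:])]
worst = max(ramps)
print(f"worst evening ramp in this toy model: {worst:.1f} GW per hour")
```

Even in this mild toy example, the conventional fleet has to climb roughly 4 GW per hour for several consecutive late-afternoon hours; the real California curves are steeper still.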
This last summer, there were times when the net load was negative. What does that mean? It just means that the California ISO was exporting power to states that we are connected with. But there is more to it, actually. While you may say, oh, this is great, it also means that the system as it is today could not be operated on renewables only, for reasons of stability. Of course, there is lots of effort in that direction; there is something called grid-forming inverters, which are better-behaved inverters from a stability standpoint. But this is a challenging problem. So in a way, we can supply 120% of our load, but we cannot supply 100%. There are still technical issues to be resolved. Now, turning to defining reliability and resilience a little more precisely: the advantage of a system solution is that it can achieve reliability that far outperforms the reliability of individual components. So what kind of reliability numbers are we talking about? Think of a simple experiment: we just walk up to the system and see if it's working or not. Is it up or is it down? Then reliability of three nines, which means 99.9% availability, means that during a year, which has 8,760 hours, there will be about nine hours of downtime. Well, that's a lot. And you can see now, for five, seven, and nine nines: five corresponds to five minutes a year, seven corresponds to three seconds, and nine nines corresponds to only two cycles, two 60-hertz cycles, being missed in a year. These are, of course, extreme numbers. But it turns out that the numbers in the middle are kind of what we see at some places. For example, utility systems today are between four and five nines. Our systems are probably closer to five. Systems in some well-run places like the Netherlands or Belgium achieve close to five. But these are global numbers. And you see, server farms need seven or eight. So what does that mean? Well, that's obviously very hard to achieve.
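The downtime behind each "nines" figure is a one-line computation; this is just a sketch of the arithmetic above, where availability with N nines leaves a fraction 10^-N of the year as outage:

```python
# Downtime per year implied by "N nines" of availability.
HOURS_PER_YEAR = 8760

def downtime_hours(nines):
    """Expected outage hours per year at an availability with `nines` nines."""
    return HOURS_PER_YEAR * 10 ** (-nines)

print(f"3 nines: {downtime_hours(3):.1f} hours/year")
print(f"5 nines: {downtime_hours(5) * 60:.1f} minutes/year")
print(f"7 nines: {downtime_hours(7) * 3600:.1f} seconds/year")
# Nine nines, expressed in missed 60 Hz cycles (one cycle is ~16.7 ms):
print(f"9 nines: {downtime_hours(9) * 3600 * 60:.1f} cycles/year")
```

This reproduces the table in the talk: about nine hours at three nines, five minutes at five, three seconds at seven, and roughly two 60-hertz cycles at nine.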
And so we may think, well, that's not something general utility networks should provide; those who need that reliability should do it themselves. And they actually do: they have plenty of backup to achieve those numbers. Just to give you a feel for how extraordinary these numbers are, compare with some other events. Losing at roulette, if you bet on a single number, is roughly 1.6 nines. So that's not that bad, right? Being on a flight without a fatality, meaning that no one on the flight will be a fatality, is seven nines. And losing the Powerball lottery, which I guess now routinely runs into the hundreds of millions, is eight nines. So we are talking of similar numbers. These are very, very rare events, and we would like to achieve that level of performance. That's why it's challenging to operate electric energy networks. Systems do fail, however. This is a composite satellite picture from roughly 20 years ago, from the 2003 blackout, when much of the Eastern seaboard network went down. It's a composite, and you can see on the left places like Detroit, and then Toronto, Ottawa, Montreal; on the right, Boston, and Long Island, and New York. You see Boston is very, very bright there. And if you were to ask any of the engineers at my former place, they would say, that's because we are excellent engineers, which may be true. But actually, it so happened that that day New England was exporting power. New England typically exports power about five days in a year, and that was one such day. So if you have extra power, it's much easier to cut back and make up for the shortfall. And that's what happened. You see that Long Island was still recovering from the event even seven hours after. So these are major events, with massive damages incurred by many, many people and organizations. So systems can fail. By the way, this event was initiated by a tree falling on a line near Cleveland.
So obviously, very faraway places were affected by an event like that. Now, talking about resilience. Resilience is a newer term. The big difference, for the purpose of our discussion, is that resilience deals with high-impact, low-probability events: hurricanes, earthquakes, things like that. Again, we would like to limit the extent and the system impact, and in this case, we would like to sustain critical services. We know that not everything will be fine or everyone will be happy, but the effort is just to make it through such an event. So I'm essentially repeating here that operating reliability is the ability to withstand sudden disturbances, but sudden common disturbances, like the unfortunate squirrel jumping on a transformer and things like that. This happens quite regularly, and it has been observed many times, and utilities have developed practices to deal with such events, typically by having spares, that is, more than one component. So as components are pulled out, the remaining components have enough capacity to pick up the slack. That's it. Resilience, on the other hand, deals with these low-probability, high-impact events. It is scenario-based; the analyses are different. So I think that justifies the requirement that utilities should consider both of these in parallel. They are coordinated, of course, because often it's the same equipment that addresses both. But they are different, because during resilience events, there are different paths, different operating modalities, that are normally not exercised. So every now and then one should plan for them, so that when they come around, we are ready. Here is another picture from Puerto Rico, from 2017. This is Hurricane Maria. And you see that the same thing happened. The top is before; the bright spot is San Juan. And you see that after the hurricane, there was nothing. This is a couple of days after. So we are not learning much from resilience events.
Or maybe sometimes they are so overwhelming that it's just hard to prepare for them. Now I would like to switch to this decomposition of social cyber-physical systems. These are complex systems, and you can think of many layers, but I think that these five are probably close to the minimum needed to have a quantifiable discussion. So I'm talking about flows of policy and legislation on top, then flows of capital, information, energy, and material. Within each of these flow layers, you can define typical metrics: affordability in the capital layer, efficiency in the energy flow layer, sustainability in the material flow layer. And then reliability and resilience are really coupling these layers together. I would like to go through some examples showing how that happens. So, electric energy systems are multi-scale and they're hybrid. In time, we're talking about 10 orders of magnitude, because in power electronics, for example, control people worry about things like tens of nanoseconds when dealing with precise switching times, all the way to weeks or even longer if you have to plan, say, a cascade of hydroelectric plants, sometimes years. So probably more than 10 orders of magnitude. In space, we're talking seven orders of magnitude, because power systems can be anything from the size of this desk or this room to spanning a continent. And in power, we're talking from, say, milliwatts to tens of gigawatts, so another 10-plus orders of magnitude. So this is a huge space, if you think about it. It has many corners, and they're not all the same; one has to study them in detail. But I would like to suggest that there is lots of commonality there, and so I would like to talk about the typical cases. Again, we have to talk about normal and faulted operation. And faults can come from nature, or now, unfortunately, also from active adversaries. The system really has uncertain inputs, and it's actually regularized by physics.
So it's somewhat predictable, because physics takes care of some of these inputs; think of them as stochastic processes. And the interesting thing about electric energy systems is that they have sparse sensing and intelligence, or computational power, and actually most of it is at the periphery. That has implications for how we can and should control them. So allow me now to simplify and redraw this Sankey diagram in just two layers: energy on top and an information layer here on the bottom. We can talk about primary conversion, about the network, about the end user. And control, which today typically uses measurements in the network and actuates in the primary conversion. That's the control loop. I'm not thinking of a centralized control, of course; this is just conceptual. There is a layer of controllers, many of them distributed. Now, what are the issues? Well, the input here, W, is too large, and too little of it comes from renewables. The system is actually unable to integrate all the new components, and we can talk about this a bit later. It has non-functional markets. And it's over-designed, for at least two reasons. One is the variation of the output, call it Z: utilization is simply low, with variations of a factor of two, sometimes more, and all components have to be designed for the peak load. The other reason has to do with fault accommodation. Much of the fault accommodation today is done in hardware, based on simple local measurements. For example, the current has to reach 10 times the rated current for some breaker to be triggered to pull out. But if the current is 10 times, that means that the forces are also 10 times, or maybe more. So we're talking about huge forces, and that's why substations are so big. People say that if Tesla or Edison were to come back, they would be amazed by many things, but they would recognize substations right away, because their spatial size didn't change much. So why do we need more electricity?
I'm going back to this busy diagram now, and I want to point out some numbers. You see that the overall electrical output right now, at the top right, I hope you can see the square on top for electricity generation, is roughly 12-point-something quads. If we were to include the industries that you see in the third pink box, their input is roughly the same; I know that some efficiencies can be improved, but it's the same order of magnitude. So if the big industries, think cement, petrochemicals, iron and steel, were to be electrified, that would add roughly as much as the total electricity produced today. If we look into transportation, of course electric cars are more efficient, but you see the current input there is on the order of 30-something quads; say we can double the efficiency by going to electric vehicles, which is somewhat optimistic, we would still need half of that as output, which is another 12 or 13 quads. Which means that if we were to fully electrify industry and transportation, we would need to add twice as much electricity as we have now. So this is obviously a major effort. And I would like to point out that these ideas of electrification are with us, but they're not all that new. This is actually roughly the 100th anniversary of a speech that Vladimir Lenin gave to the Moscow Soviet, in which he said that, in his mind, communism equals electrification plus power to the Soviets. As someone who grew up in Eastern Europe, I'm glad that the first and the last part didn't come out to be true. But he did have a point: electrification was a major change in Russia and elsewhere. And so if we think about technical progress the way we have it now, in what impinges on the electric industry, we're talking about renewable generation, about storage, and about electrification of major industries and transportation.
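The electrification arithmetic above can be reproduced with the round numbers read off the Sankey diagram. These are assumed approximate values, and the factor-of-two EV efficiency gain is the speaker's admittedly optimistic assumption:

```python
# Rough electrification arithmetic, all in quads (assumed round values).
electricity_today = 12.0   # electrical output today
industry_energy = 12.0     # big industries: cement, petrochemicals, steel...
transport_energy = 28.0    # transportation input, petroleum-dominated

# Assume electrification roughly doubles transport end-use efficiency:
transport_electric = transport_energy / 2

added = industry_energy + transport_electric
print(f"added electric demand: ~{added:.0f} quads, "
      f"about {added / electricity_today:.1f}x today's electricity")
```

So fully electrifying industry and transport would add roughly twice today's electric output on top of what is produced now, which is the "twice as much as we have now" in the talk.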
These three, of course, all drive decarbonization, assuming that the energy sources are renewable or at least cleaner than the current ones. And all together, they can hopefully stop climate change. So this is the game that we're playing. We can even paraphrase: sustainability really equals decarbonization plus electrification. Some of these ideas have been with us for a while, but now I think they're ripe for re-evaluation. So how should we integrate renewables? I think one idea is that we have to switch more to source following, meaning that we should organize processes that can be controlled and scheduled so that they follow the availability of energy, rather than massaging and controlling the sources to follow arbitrary, or at least quite unpredictable, loads. So how do we achieve such coordination? Well, one part is that we need to operate the system over larger spatial areas. And that's where electric networks come in: if you have a bunch of stochastic processes, you will get something that's more predictable if you integrate, right? If you integrate literally, as a mathematical operation, or if you integrate in the engineering sense, by having the electric network on top of them. For a long time, future energy systems will be hybrids, because they will have some conventional sources: big hydro and possibly big nuclear units will be with us for a long time, and there is no reason to touch them, especially while we are short on overall energy. So the systems will be hybrids of electromechanical and electronic sources. And how do we achieve flexibility in operating them on a second-by-second basis? Well, better forecasts, dispatchable loads, buy-sell algorithms that try to smooth things out, and storage. These are all tools that we have at our disposal. It may also be interesting to integrate into larger structures. So first, integration between transmission and distribution. You may think, oh, there's nothing to it.
Isn't that the same thing? Well, it should be, but it's not, because within the field of electric power systems, distribution and transmission are often two worlds that don't talk. That's changing now, of course, but the transition has to be completed. There could be multiple energy carriers, like, for example, hydrogen, or, in some transition period, adding hydrogen to natural gas, getting what's called hydrogen-enriched natural gas, or HENG. And energy hubs, which is the idea that you could take different forms of storage, think electricity, maybe gas, perhaps district heat, and coordinate them, not in a building, but coordinate them in a control sense, in software, if you wish. So how would that change future electric energy systems? Well, one thing is that we should develop the information layer more; you see some additional single-line traces on the bottom. So there should be more sensing, of course, and coordinated control. Control that is typically distributed, so it has a local component and some global information that provides context. We need better building blocks. So yes, improving the efficiency of individual blocks is very much needed. And we hope that more of the input will come from renewables. Given the complexities of control that we have now, we should probably decouple the system. And the decoupling, having fewer layers to coordinate at any given time, can come from two sides. From the top layer, it could come from, for example, high-voltage DC transmission networks, for example to couple the east and the west of our country, which are now two electrical islands. And from the bottom, it comes through ideas like microgrids, because you have buildings that can be controlled, especially if they have storage; they can behave in interesting ways. So these two, I think, would allow us to control this larger system without having to juggle too many layers at the same time. And also, we need better design while steering component development. What I mean is this.
All of us have phones like this, right? The good thing about these things, from a design standpoint, is that every couple of years, I guess three years, or less than three if you have teenagers like I do, they are changed, right? There's a new phone, and the old phone goes away. And the new phone has components of many types. If you're an electrical engineer like myself, you would think it's a pretty complicated device: it has analog electronics, digital electronics, lots of high-frequency or radio circuitry. These are distinct parts of electrical engineering, and they all integrate. And lessons learned in one generation are fully passed into the next. On the other hand, think about this room. If this room were to be renovated, many things would change, right? Chairs, projection equipment, maybe even the AC system. But the chances that the wires will actually be changed are slim, right? The wires that are now in these walls will stay there for a long time. And that translates to larger systems. So when we are designing a new system, we have to keep in mind that there will be many changes coming, including technologies which we can barely predict. That makes for a challenging design. By the way, in this logic, the idea of the smart grid is really these two arrows that you see in purple, because we are adding the end users and we are adding control of the end users. That is what the smart grid is in these coordinates, of course in addition to storage, which I added on top. So where are we now in terms of our ability to control systems? I'm showing you this diagram, which is of course notional, but think of it as being on a large scale. Vertically, you have increasing spatial dimensions: from device to substation to region or utility, think of PG&E, to balancing authority, like the California ISO, to wide area, which is a couple of ISOs that together deal with reliability, and then continent-wide networks.
The largest in our country is the Eastern Interconnection, but the Western Interconnection is also quite big. On the horizontal axis, I have time, but in the opposite direction, so the shortest times are to the right: from day, hour, minute, second, down to the cycle. The electrical cycle, the 60-hertz cycle, is roughly 17 milliseconds. And this line is roughly the front of where our ability to control these systems is today. The green dot in the middle is AGC, automatic generation control. That controls frequency. It's a fully distributed control, and it's been around for 70 years; it's the first industrial, fully distributed controller, and it's been incredibly successful. But you see, it controls frequency roughly on a minute or, at best, sub-minute timescale. It works very well: the frequency is typically kept within a few millihertz of 60 hertz. So this is really dead on. But of course, our interest is to move this technology front toward the right corner, so that we can control larger areas, faster. And that is a challenging ask. So I think the way forward would be what I would call a heterogeneous infrastructure, a la VLSI. You see, electricity can be produced with many technologies, but what we can learn from our colleagues who do VLSI is the planned, repeated use of carefully designed structures. The way our colleagues figured out how to put a billion things on a chip, or more, is by carefully repeating the same structure. And we can learn from them, because if we are to move from electromechanical sources, which is maybe 2,000 large generators in North America today, to millions, if you have renewables connected through inverters, we have to find ways to scale the system while maintaining things like stability. So the drivers for this development are sustainability and cost. The enablers are power electronics, sensors, cyber networks.
And again, source following, the new ethos which I think will be needed, comes from markets and forecasts on the slower timescales, and then from the network and storage on the fast timescales, because you need to achieve it really second by second or less. And monitoring has to include both reliability and resilience. Fault accommodation, I think, should move more into software and become faster. It's a very tricky proposition, by the way, because protection is what keeps many places from burning. Electrical arcs tend to be very bad in terms of fire danger, so they have to be controlled very precisely. So protection is a very, very important part of electric energy systems, and if you change it, you're also asking for a lot of responsibility. But I still think that if we can make protection faster, we can make systems smaller and simply have better spatial utilization. And then modeling and comprehension: you see, there is a long tradition in this field, and in several nearby fields, of using physics and chemistry, our basic understanding of such processes, to control systems. There is nothing wrong with this. But I think it's time to append it in places, and I will mention some, with what can be extracted from data. I think the two can play together quite nicely. And there is hope on the horizon. Here is a plot I borrowed from the Environmental Defense Fund. Vertically, you have gigatons of CO2, and on the horizontal axis, you have prices per ton of CO2. And you see, as prices go up, of course, there is motivation to remove more CO2 from the atmosphere, or not to generate it in the first place. But on the right, you see some of the interesting technologies, starting with onshore wind. And not far down is offshore wind, which is what I would like to talk about next. So this is the first of the two vignettes that I would like to wrap up with today. This is work I've done with colleagues at Tufts on planning an offshore network near the Eastern Seaboard.
Here you see, on the left, the Eastern Seaboard. On the right, at the same scale, you see the North Sea, essentially Northern Europe. You see Britain on the left; Belgium, the Netherlands, and Germany; and Denmark and Norway on top, actually Sweden on top. So this is the same scale. You see the many big offshore farms in the North Sea, these yellow areas. And you see that the topography is simply different. These areas can be reached from many directions, and they are. They're actually even building artificial islands to achieve this. So that means these sources can have multiple feed-in and feed-out connections, and that's very good for reliability, of course. In our case, that's not quite possible, because the coast is almost linear. But it will be very important in our thinking to actually connect those sources to one another, rather than just having a separate connection from each source. Why would that be? First, these are massive infrastructures we're talking about. These are large wind units, and this is actually not the largest today; it's 12 megawatts, and there are 18-megawatt units also. But this is the height of the Transamerica building in San Francisco or, in this case, of the tallest building in Boston, the Hancock Tower. So these are very, very large units. And in our case here in California, they would probably have to be floating, so that's another layer of complexity. There are several solutions. By the way, the oldest offshore floating wind farm is five years old now, so this is not very new. But there are things to be learned. And this can actually solve one other issue which is here in the background. Here is a proposal, one of several, for building a national macro grid that would connect our country coast to coast. And many of these horizontal links would be fairly straightforward to build. But on both coasts, it's very hard.
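The reliability benefit of interconnecting the farms, rather than running one cable per farm, can be made concrete with a toy N-1 check. The topology below is invented for illustration (a shore node and three hypothetical farms A, B, C): count how many farms lose their path to shore under the worst single cable failure, for a radial layout versus a meshed one.

```python
# Toy reliability comparison: radial vs. meshed offshore connections.
# Nodes: 'shore' plus three hypothetical wind farms A, B, C.

def connected_to_shore(edges, node):
    # graph search from 'shore' over the surviving cables
    seen, frontier = {'shore'}, ['shore']
    while frontier:
        cur = frontier.pop()
        for a, b in edges:
            for nxt in ((b,) if a == cur else (a,) if b == cur else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return node in seen

def worst_case_stranded(edges, farms):
    # worst number of farms cut off by any single cable failure (N-1)
    worst = 0
    for cut in edges:
        surviving = [e for e in edges if e != cut]
        stranded = sum(not connected_to_shore(surviving, f) for f in farms)
        worst = max(worst, stranded)
    return worst

farms = ['A', 'B', 'C']
radial = [('shore', 'A'), ('shore', 'B'), ('shore', 'C')]
meshed = radial + [('A', 'B'), ('B', 'C')]   # add inter-farm ties

print(worst_case_stranded(radial, farms))  # radial: one failure strands a farm
print(worst_case_stranded(meshed, farms))  # meshed: no single failure strands any
```

In the radial layout every export cable is a single point of failure for its farm; with the two inter-farm ties, every farm survives any single cable loss, which is the "multiple feed-in and feed-out" advantage the North Sea topography makes easy.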
So actually on the East Coast, we think this offshore connection could be part of this national macro grid. And, assuming that there is a similar development on our West Coast, we could do the same there. Of course, there is the mighty Pacific Intertie running north-south that's the backbone of the Western electric energy network, but adding another one offshore could actually be useful. So now I would like to talk about my second vignette. This is joint work with my friend and colleague Mark Transtrum at Brigham Young. Here is a standard form of dynamical models in electric energy networks. You see these differential-algebraic models: x are the differential states, z are the algebraic states, u are controls, and p are parameters. The algebraic part of the model essentially comes from the multiscale feature: these are singularly perturbed models, and the short timescales are approximated with algebraic equations. And there are measurements, which also often happen to be nonlinear in general. So if we have a model, how would we evaluate whether that model is good? Well, we would hopefully get some data, or some simulations that we trust, and then we tweak p, we tweak the parameters, to get these to match. But it's not a simple problem, because the question is, which model should be tweaked? In other words, for a given model, do we even have a chance? The problem with optimization, of course, is that you often get a result, but how trustworthy is that result? That is the problem here. Local methods for analyzing this involve the Jacobian, the matrix of derivatives of the measurements with respect to the parameters, or the Fisher information matrix, or the Hessian, which for small deviations is often approximated as this Jacobian transpose times itself. With such matrix trickery, we can get local information about how well behaved this fitting problem is.
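The model structure being described here can be written out explicitly. This is the standard differential-algebraic form with the symbols as named in the talk; the specific functions f, g, h depend on the system being modeled:

```latex
% Standard DAE model: x differential states, z algebraic states,
% u controls, p parameters
\dot{x} = f(x, z, u; p), \qquad 0 = g(x, z, u; p), \qquad y = h(x, z, u; p)

% Fitting tweaks p so the model outputs match trusted data \hat{y}:
\min_{p} \; \sum_{i} \bigl\| \hat{y}_i - y_i(p) \bigr\|^2

% Local analysis uses the Jacobian of measurements with respect to
% parameters, and the Gauss--Newton (Fisher information) approximation
% of the Hessian:
J_{ij} = \frac{\partial y_i}{\partial p_j}, \qquad
\mathcal{I}(p) \approx J^{\top} J
```

The eigenstructure of \(J^{\top} J\) is the "local information" referred to above: large eigenvalues mean the fit constrains those parameter directions well, small ones mean it barely constrains them at all.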
The interesting piece that comes from this work, which is largely influenced by the physics literature, is the concept of sloppiness, in the technical sense, which says that there is a class of models with large parameter uncertainty. Namely, think of a model with many parameters as a mapping from parameter space into a data space. This mapping can be highly anisotropic. Think about, on the left here, a ball in parameter space: it gets mapped into something quite squished and elongated in the behavior space. There are directions in which small changes in parameters correspond to large changes in behavior; these are the so-called stiff directions. But there are other, so-called sloppy, directions in which you don't see much response at all. And then the idea is, it turns out that if you now let all the parameters range, say, between 0 and infinity, and you consider all possible models in this data space, these are often bounded manifolds. They're not linear, they can be quite ugly, but they're bounded. And then, and this is the magic of my friend Mark, he can calculate geodesics in high dimensions, geodesics being, of course, the counterparts of straight lines in curved spaces. If you can calculate geodesics on these manifolds, you can try to sniff out, to look for, corners. And that's justified by noticing that in many systems, not just in electric energy networks but elsewhere, these manifolds look like ribbons: quite wide in some directions, but very thin in others. So if we don't have additional information, the models that are good to try to identify would be those that are not too thin in any direction. So we would like to use this method to reduce the models to something we have a good chance of identifying, and then do that identification.
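The anisotropy can be reproduced with a small example in the spirit of the physics literature on sloppiness (not the power system model from the talk): fit a sum of exponentials and look at the singular values of the output-versus-parameter Jacobian. They spread over orders of magnitude: a few stiff directions, many sloppy ones. The decay rates below are made up for illustration.

```python
# Toy "sloppy model" check: singular values of the Jacobian of a
# sum-of-exponentials model spread over orders of magnitude.
import numpy as np

t = np.linspace(0.0, 5.0, 50)

def model(p):
    # y(t) = sum_k exp(-p_k t): the classic sloppy example
    return sum(np.exp(-pk * t) for pk in p)

def jacobian(p, eps=1e-6):
    # finite-difference sensitivity of the output curve to each rate
    base = model(p)
    cols = []
    for k in range(len(p)):
        dp = p.copy()
        dp[k] += eps
        cols.append((model(dp) - base) / eps)
    return np.column_stack(cols)

p0 = np.array([0.3, 0.6, 1.0, 1.8, 3.0, 5.0])  # illustrative decay rates
sv = np.linalg.svd(jacobian(p0), compute_uv=False)
print("singular values:", sv)
print("stiff/sloppy ratio: %.1e" % (sv[0] / sv[-1]))
```

The ball in parameter space is stretched by the largest singular value and squished by the smallest, which is exactly the squished, elongated image described above.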
So again, the idea is that, of course, in a plane you can't go far down, because you're going one dimension down at each corner. From the initial parameters, we have, say, two. In the first iteration, you go down to one, and now you're on this top green line. Then you repeat the model reduction procedure, and in the second iteration you get to this point. And then you're done, if you're starting from two. But in higher-dimensional models, you can go further down. So here's an example from the dynamics of power systems. These are measurements of a generator unit. Here you see the calculated manifold, and you see this trajectory, the geodesic, that actually hits this dark blue line, which is the edge of the manifold. And this is the model reduction here. In terms of what it is, here you see what's happening. This is a small enough system that you can track it. On top is the calculation of the geodesic; you see the arc length tau, which parametrizes the geodesic. On the middle trace, you see that there are two parameters, two time constants. This is a standard model of synchronous generators. And what happens, and this was surprising to people in the field, is that it's the q-axis, the one that's typically kept, that gets reduced, or evaporated, rather than the other axis. These two axes are the theorists' standard tool for understanding the behavior of very large electromechanical sources. So in this case, data and simulations are telling you that you should do this reduction. You can think, well, this is just a variant of a singular perturbation. It's good to know, and in a sense it's reassuring, because that was observed before. But there are other cases in which things are less obvious. So this is the case of what's called a doubly fed induction generator. This is a not very modern, but very widespread, technology in wind generators.
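The mechanics of the first step of such a reduction can be sketched on a toy two-parameter model (not the generator model from the talk): compute the Fisher metric J-transpose-J and take the eigenvector with the smallest eigenvalue as the initial velocity of the geodesic. The manifold-boundary method then integrates the geodesic until it hits the edge of the manifold; that integration is the part omitted here.

```python
# Sketch of how a manifold-boundary-style reduction starts: the sloppiest
# eigenvector of the Fisher metric gives the initial geodesic direction.
# Toy model: two nearly degenerate exponential rates, an almost
# unidentifiable pair (made up for illustration).
import numpy as np

t = np.linspace(0.0, 3.0, 30)

def model(p):
    k1, k2 = p
    return np.exp(-k1 * t) + np.exp(-k2 * t)

def fisher_metric(p, eps=1e-6):
    # Gauss-Newton metric g = J^T J from finite-difference sensitivities
    base = model(p)
    J = np.column_stack([
        (model(p + eps * np.eye(2)[k]) - base) / eps for k in range(2)
    ])
    return J.T @ J

g = fisher_metric(np.array([1.0, 1.1]))
evals, evecs = np.linalg.eigh(g)      # eigenvalues in ascending order
v0 = evecs[:, 0]                      # sloppiest direction: geodesic start
print("metric eigenvalues:", evals)
print("initial geodesic direction:", v0)
```

For two nearly equal rates, the sloppy direction moves the rates in opposite ways, leaving the summed output almost unchanged; following the geodesic to the boundary collapses the pair into a single effective rate, which is the one-dimension-down step described above.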
I mentioned already that the behavior of these units is largely determined by their controls. The two parameters that you're playing with here are essentially the scaled integral and proportional gains of that controller. So this is parameter space: the colors are the criterion values, the thick lines are geodesics mapped into parameter space, and the green dots are essentially Monte Carlo simulations, but with a procedure that tells us where to sample more often, so just a tweaked Monte Carlo, to check that this picture is realistic. You see here, if you minimize, you actually get the value that you see at the origin. But you see that there are two problems. If the data is of lower resolution, on this ridge, which looks kind of like an L going this way, we can escape either way: the horizontal gain, the log of it, can go to zero going left, or the other gain can escape going down. So that suggests that this is a tricky model to identify. It's a standard model, but if you don't have high-quality data, it is not clear that you will do well. Okay. So let me wrap up with some comments about how we can integrate, or connect, such models with data-driven models. We've done some work on symbolic regression with physics-informed dictionaries, learning the right correlations from data, joint work with Yannis Kevrekidis at Hopkins. Some of our results suggest that you should interlace physics and, in this case, deep networks in a tapestry-like fashion. Because there are pieces for which you have very good physical knowledge, and it makes no sense to throw that away. But there are pieces about which you know much less.
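The dictionary idea can be illustrated with a minimal sparse-regression sketch in the style of SINDy-type methods. This is not the actual method or system from the talk; the dynamics and the candidate terms are invented for illustration. Given samples of a derivative signal, sequentially thresholded least squares picks out the few dictionary terms that explain it.

```python
# Sketch of dictionary-based symbolic regression (sequentially thresholded
# least squares, SINDy-style). Toy target, invented for illustration:
# recover dx/dt = -0.5*x + 2.0*sin(x) from noise-free samples.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
dxdt = -0.5 * x + 2.0 * np.sin(x)      # "measured" derivatives

# physics-informed dictionary of candidate terms
names = ["1", "x", "x^2", "sin(x)", "cos(x)"]
theta = np.column_stack([np.ones_like(x), x, x**2, np.sin(x), np.cos(x)])

# sequentially thresholded least squares: fit, zero small coefficients,
# refit on the surviving terms
coef = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
for _ in range(5):
    small = np.abs(coef) < 0.1
    coef[small] = 0.0
    big = ~small
    coef[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]

for name, c in zip(names, coef):
    if c != 0.0:
        print(f"{name}: {c:+.3f}")
```

The point of the "tapestry" is that terms you trust from physics can be kept in the dictionary (or in the model) outright, while the sparse regression, or a deep network, only has to fill in the pieces you understand less well, such as aggregate loads.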
I mean, think about the case of synchronous generators: there's actually a beautiful theory, almost 100 years of it, starting from Maxwell's equations and massaging them in interesting ways, coming up with good models that went through the test of time over decades. On the other hand, when we model loads, like our campus or Palo Alto, we start with some basic statistics, how many loads there are, and then we add them, average them. Clearly, this is a much cruder approach, and it makes sense to balance the two: where we don't have a fundamental science understanding, we might as well see what the data is telling us and use that. All right. So on my last slide, I would like to say that the societal expectations from electric energy networks are that there will be carbon-free electricity, that there will be efficient, reliable and resilient networks, and that there will be functional markets and public policy. We know that there have been exceptions to this. The key driver for this transition is sustainability. We're doing this because of the unacceptable impact that electric energy systems have on our environment. The key enablers, I think, are in the information flow layer, because that's the layer that's evolving the fastest, and it can certainly make a difference. But the trajectory will be driven by the material and capital flow layers: can we find the materials to build what needs to be built, and can there be enough support in terms of money to actually do it? With this, I will thank you. I will stop here and take your questions. Thank you. Thank you very much, Alex. That was a fire-hydrant flow of provocative ideas. I think it's going to take me a few weeks to catch up to where you were with that. I would say, just as an overall comment, you could conclude that the biggest invention of this century might be a variant of the biggest invention of the last century.
Now, as someone who is in that field, I would say yes, but there's a lot to go in this century, so we shall see. Yeah. Good point. So we have time for a couple of general audience questions before we move into the student session. Do we have any questions from the audience? Big picture, small picture, big devices? So what do you think the prospects are for a nationally interconnected grid? That seems hard to me. It is hard, and I think there has to be a good motivation for it. I've heard the stories from my colleagues who were there. There were attempts to do it in the past, and the famous example that I heard about from many people involves the Four Corners area, the only place on the map where four states meet. There is a large installation nearby, and there was a unit there which unfortunately interacted dynamically with a unit in New York. So this is very far away, and they interacted in the sense that some very large things got twisted that shouldn't be twisted. So a decision was made: this shall not be done, ever, and we are done. But I think that may change, especially the motivations. For example, if there is a large and continued development of onshore wind, say in the Dakotas, then it will make sense to try to connect it with our side, with the West Coast. So that would be one motivation. I think that it should be done. There has to be an economic reason for it, but I think it should also be encouraged by other means, because large networks are eminently feasible. I mean, all of Europe is essentially one network. The network in China is bigger than any of ours. So there are technical means to operate large networks. But of course, to have a large network, you have to build it.
And that requires, of course, building it, and still the timeline for getting permits for transmission lines is measured in decades; that's the unit. So that is a long time, and especially in the current climate of very large economic expectations on short timeframes, that would be hard. Any other audience questions at this point? I know we're running a little bit late. Actually, another question I'd ask, related, but in a little bit the other direction. So whose job is this anyway? Obviously, you were recommended to us by the Bits and Watts folks here. You're at SLAC, so is that one way to get started, to have the teams around this area get together to see if at least you could scale things up to that degree? Or are you already working on a bigger plan that might involve people on the East Coast or in Eastern Europe, or what? Oh, I wish I were working on it. So do you have any thoughts about that general question? These ideas are terrific, and you're starting to make significant, measurable progress. But to get from the top of this to where you want to go at the bottom, it seems like there needs to be a bigger plan. I know this is totally unfair as a question, but I'll ask it anyway. Right. Yeah, I think that there are several things that have to come together. If you think of technology readiness levels, which is this NASA-produced and now widely used way of measuring the maturity of different technologies, universities are of course very good at the fundamental research, say levels one, two, three. Then there is the proverbial valley of death, levels four, five, six, where technology is transferred from lab demonstrations to something realistic. And then there are the higher levels, seven, eight, nine, where industry and private enterprise can pick up. So for the middle area, the hope is that national labs can do it; SLAC is there, for example, to cover it.
So SLAC being such a closely integrated part of Stanford is a great thing in that regard. Of course, there has to be support for this, and I'm a beneficiary of GCEP from 10 years back. I know that this was unique, because in our case it was the GCEP support that allowed a number of us to then go after one of these Engineering Research Center awards with NSF, and 10 years later, I think we produced something useful. We certainly had fun while doing it. But there has to be sustained support, and I think the span is so wide that it requires kind of all of the above. Great, on that note, thank you so much for sharing your wisdom with us today. And good luck in the future. Let us know what we can do, and you do likewise. Thank you very much.