All right, my name is Yang. I'm very happy to be here to talk about my research at Arizona State. Arizona State has one of the best power programs, so I hope I can give you some good insight into how we look at renewable energy in Arizona. Today I'm going to talk about assured AI for decarbonization. You may say, oh, AI works anywhere; anything you want, you plug in an AI node. That's not the case. I put the word "assured" there, and I will explain why. You cannot just plug in the AI and shrug when it doesn't work. You want to make sure that when you plug it in, it comes with confidence. For example, if you have a nuclear power station, I definitely want the number to be 100%. I don't even want 99%, right? One percent and we are dead. That would be crazy. So today I want to give you a high-level idea. I know you are coming from different backgrounds. A high-level idea of how we can look into deep neural networks to assure we get something structured and very confident. Over here I give you an overview of decarbonization. What is that? We have so much solar here. I took Uber Tesla cars the whole time from yesterday to today, so you can see solar is everywhere: at your home, maybe on the roof, and in electric vehicles. Then you have electric vehicle charging. You may say, we have that already. But you only have about 20%. If you have 100% electric cars on the grid, everyone charging, discharging, trying to do demand response, then the grid may not be ready to do the job. You may say, hey, Yang, you have the Department of Energy. Don't they give you money to make the infrastructure super robust? Yes, they do. But it is not scalable. Look at this: a data visualization from Pittsburgh, Pennsylvania. Different data points from smart meters, solar panels, electric vehicle charging stations in yellow, and then phasor measurement units, which are expensive devices, in green, only two of them. And in the remote areas, the colors are faint.
That means you have less observability there. Let's say you are living there and your power quality is really bad. What will you do? You complain. Then I lose my job as a power engineer. Now, going top down rather than bottom up: what we have nowadays is a power grid that is centralized. I'm the system operator; I control what is working in San Francisco, in Palo Alto, maybe in Phoenix. But think about the countryside. For the students here: if you live far away, maybe you do not have the same infrastructure San Francisco has. You may have some sensors, like the middle part of the picture, but sometimes your sensors are very sparsely located. If you go all the way to other countries, in South America and Africa, maybe that is the dominating scenario. Then how can we make renewables scalable, scalable all the way into every corner? That's my job. Maybe not for San Francisco, but for making it work everywhere in the world. So the problem is that we do not have sensors. How can you do the control? This is what I am looking into. We have electric vehicles; the airplanes are becoming electric, and the ships. That is transportation. We have critical infrastructure, different buildings, and defense. And we have the smartphones the students have over here. Can we do something without waiting 20 years or 100 years until every corner of the earth has a meter to see before we do the control? That's my focus. But then you have another question: what is the tool? Yang, you said AI, right? I guess we will have AI as the solution. But how can I trust an AI that has only one eye, yet controls a large grid? Of course not blindly. That is what I want to talk about right now. What are we doing today? We are the humans, right? We look at the AI and we say: explain.
You try to step up the voltage to make the power grid more sustainable; explain why you want to step up the voltage. But later on, maybe I'm not the one asking the robot anymore; one robot is talking to another robot, saying, explain to me. They may not even have the same language, because they come from different vendors, maybe Tesla versus another company, right? They need to talk to each other. How can one AI energy agent understand another? After that, let's say 20 years later, AI is so amazing that we trust it. Then suddenly I'm the volunteer in my own operation room; the AI is the boss. It says, Yang, you go there and trip that line, okay? I go there and trip the line. Then the boss may say, Yang, I don't trust you, you are human, and humans are prone to errors. You see, by the same logic, the AI may need to trust humans as well, right? We do not speak the same language. And finally, I close the loop. This is very similar to what happened over the past 200 years between different countries: humans trying to understand each other. How did we do it? That may be the path to follow to understand how we can let AI talk to humans, like different peoples learning to communicate. So what do I do, if I go all the way down to the logic? Number one, I need the agent to think critically. It should not be supervised learning alone, right, just training, testing, overfitting, all that. Second, it needs to think far ahead. For example, I train an AI in the United States; when I move it to Africa, things are very different. Can it adapt and still do something amazing, with the same performance? Maybe not. So it needs to diverge into scenarios it may face in the future. Third, we need to let them teach each other, right? I don't have time for everything; maybe you get my point. If you took some machine learning classes, every course project is different, right?
You have one course project, then another one; you are very tired, but there's another one with an even bigger data set. So I hope the agents can teach each other too. Finally, to make the communication sustainable, I need them to have a logic. For example, Google had some research, or was it Facebook? Anyway, one robot was talking to another robot in a language humans didn't understand. I don't know if you heard that news; they shut the computers down. So yes, we need them to have a logic they can communicate to me: I get up, I'm hungry, I eat some food; I'm thirsty, I drink some water before I go to school; then I work, I'm energetic. That kind of logic is very important for a human to understand, so we can put the human in the loop. But that means we first need the theory; I will give a light introduction. Second, we need the technology; I will show you some demos to get you excited. Third, you need field validation, right? A big enough data set, rather than a simulation you ran on your own computer. Finally, we need policy. We need government agencies to say what should be done to ensure these rules are followed by all the companies simultaneously. All right, now I'm going to the second part, which is the main part. After this lecture, as long as you can remember these four points, we're in good shape: critical thinking, building a logic, divergent thinking, and structural teaching. How do we do that with deep neural networks for power systems? This is what I'm going to say at a high level, and please remember this picture. Number one, you can think of the network as a black box. We have the input and we have the output. For example, you try to do a linear regression: regress from your input, which is your controller, to the output, maybe the performance of your system. Then what I try to do is say: robot, split your logic, split your learning experience. Try to explain as much as possible.
If you cannot explain 50%, then try to explain 45%, but try your best, so humans stay in the loop. Second, when you get up, you are hungry, right? You say the next step is to eat a burger. Then I want you to tell me: when you eat a burger, that means one hour earlier you were hungry. You need a logic that goes forward and backward. Third, when you feel hungry you eat a burger; then when you are thirsty, will you drink water? When you're energetic, will you go play a football game? Please show me that you can diverge to another scenario I never showed you before. And finally, you have learned, in the past, maybe three tasks. Please show me you can teach, the way I taught you: by separation, by inverting the logic, and by generalizing to different scenarios as much as possible. Then it's sustainable. So now, regression. If you took machine learning 101 or data science 101, or even linear algebra, you learned least squares. Last time, Ariadu gave a talk on state estimation and power flow analysis; you probably saw the same thing. You have the input, which is voltage. You have the output, which is power. You try to learn this physical law. Remember, if I know the law, that's an infinite amount of data. I don't need any data, because I have all the data: I can change the input from negative infinity to positive infinity and get the output, 100% accurate, much better than an AI agent. So that is our goal. We try to learn the power flow equations, but it's hard. Imagine you have two sensors in a network with 100 nodes. It's hard to know who is connected to whom, and in what parameter range. But anyway, we try to learn these even with severe observability issues. So this is my example. This is a network. In this network you have different levels of renewable penetration: some solar generation, somebody who doesn't have a solar panel, somebody who participates in the demand response program.
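Going back to the least-squares idea for a moment: with the right quadratic features, learning the power-flow "law" of a single line from voltage data is exact. Here is a minimal sketch; the line parameters and noise levels are made up for illustration, not taken from any real system.

```python
import numpy as np

# Toy single-line example (hypothetical numbers): learn the mapping
# from voltages to power injection with ordinary least squares,
# using quadratic features because power flow is quadratic in voltage.
rng = np.random.default_rng(0)

g, b = 1.5, -4.0  # assumed line conductance / susceptance (invented)

def injection(v1, v2, theta):
    # Real power injected at bus 1 of a single line (standard AC form)
    return g * v1**2 - v1 * v2 * (g * np.cos(theta) + b * np.sin(theta))

V1 = 1.0 + 0.05 * rng.standard_normal(200)
V2 = 1.0 + 0.05 * rng.standard_normal(200)
TH = 0.1 * rng.standard_normal(200)
P = injection(V1, V2, TH)

# Quadratic feature map: the "physical kernel" the talk mentions
X = np.column_stack([V1**2, V1 * V2 * np.cos(TH), V1 * V2 * np.sin(TH)])
coef, *_ = np.linalg.lstsq(X, P, rcond=None)

# With the right features, least squares recovers g and b exactly
print(coef)  # ≈ [ 1.5, -1.5,  4.0 ]
```

Once the coefficients are recovered, the learned law generalizes to any voltage input, which is the "infinite data" point above.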
But you can see, some of the nodes are white; you don't see them. Then how can you make sure that when you control some nodes over here, everybody is happy with your control signal? Maybe I'm super unhappy because my voltage goes bad all the time. This is how we do the job. One, we have the physical system, which Ariadu talked about: the power flow equation. Then we say: I don't know your system information, but I know your historical data. Can I use the historical data to recover the physical equation? Think about it. Right now, you are in front of me. Let's do some magic: you disappear, and I get all the data about your day, where you played games, where you ate in the cafeteria, where you went to class. Can I create a digital twin of you based on your historical data, a virtual you right here? That is the magic of data science. Then we go all the way to the upper part of the picture. We have low observability, but that's okay, right? If you cover my face, you still see there's a Yang over here; you can say, go there, and I follow your instruction. Do you really need to see everything before you control? Of course there's a tolerance, but that is how we propagate the information to do the machine learning. All right? Now a bit more challenging: deep neural networks. Each box over here is a deep neural network. If you are familiar with some Google or Amazon applications, they generate fake images, maybe of some very famous people. They put two neural networks there: one generates the signal, and the other acts as the critic and says, hey, is this the famous guy, yes or no? And then they train. They also do the autoencoder: they ask, can I compress the image to the smallest size and then recover it afterwards? That is the autoencoder.
If you look at the famous architectures in machine learning nowadays, they just arrange different boxes to represent the different functionalities we are asking for. They don't use a single neural network; they use multiple, more than I am even thinking about. Well, for my power system, my energy system, can I do the same? They don't use 1,000; they use about two or three, and I'm starting from two. And this is my method to separate the part humans can understand from the part we cannot. Take the example of your brain: there are left and right hemispheres. One side is very good at the linguistic part, analytics, and logical control; the other side, imagination and art. What are they doing? They divide the functionalities between different units. On the other hand, if you look into biology articles, they say the two sides talk to each other. They don't stay apart; they are in touch. If something happens over here, there are echoes on the other side. What does that mean? They divide, and then they come back together to collaborate. So that is my philosophy. I want my networks to collaborate with each other, let's do the job together, but internally to compete with each other: I'm the best, you are not. So this is how I use two neural networks. One tries to learn the physical exactness. I put in some kernels I know, like quadratic functions; I put in some sinusoidal functions, the waveforms, like power signals. I say, try to learn as much as possible with these. On the other side, I secretly tell the other guy: please use any method you can think of, neural networks of course, to infer the output from the input voltage. You can use a decision tree, logistic regression, anything you can think of. And at the end of the day, I ask them to come back to me, and we see who did the better job.
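That collaborate-and-compete split can be sketched in a few lines. Plain least squares stands in for the interpretable, physics-kernel half, and a crude kernel smoother stands in for the black-box half; both stand-ins are my choice to keep the example short, not the actual networks.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical target: mostly quadratic "physics" plus a small
# un-modeled effect the physics part cannot explain.
x = rng.uniform(-1, 1, 300)
y = 2.0 * x**2 + 0.3 * np.sin(5 * x)

# Interpretable half: least squares on a quadratic kernel.
A = np.column_stack([np.ones_like(x), x, x**2])
w, *_ = np.linalg.lstsq(A, y, rcond=None)
physics = A @ w

# Black-box half: fits only the *residual* the physics half leaves
# behind (a kernel smoother standing in for the neural network).
resid = y - physics
def smooth(x0):
    k = np.exp(-((x - x0) ** 2) / 0.01)
    return k @ resid / k.sum()
blackbox = np.array([smooth(x0) for x0 in x])

# "Compete": each half is scored on how much error it removes.
mse_base = np.mean((y - y.mean()) ** 2)
mse_physics = np.mean((y - physics) ** 2)
mse_both = np.mean((y - physics - blackbox) ** 2)
print(mse_base, mse_physics, mse_both)
```

The interpretable half explains as much as it can; the flexible half only mops up what is left, which keeps the human-readable part as large as possible.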
So that is my design. This one got a National Science Foundation CAREER award. They liked this idea of asking the networks to collaborate and compete, splitting the information, like the two hemispheres, into different parts. The knowledge is divided and conquered, like the divide-and-conquer dynamic programming you may have learned in the past. You may say, Yang, that's useless to me, it's power. But what I want you to understand is that in the future, when you build some model, you don't need a neural network for everything; remember to divide your learning process into different functions. Some of them may be symbolic regression, some may be universal approximation, some may be minimizing a gain so that you achieve your goal. Of course, you don't want it to be too large like this one, because that leads to overfitting, an overly complicated model. But this is the high-level structure I want to show you. Now I go to the second part: inverse. When I'm hungry, I eat a burger. When I eat a burger, that means I was hungry one hour ago. That is the logic I want to invert. But there's trouble. For example, when we learn "I was hungry, then I ate a burger," we may not capture all the information. Maybe you ate the burger because your friend asked you to eat with her, or with him. Some of the side information, who was around you, whether anybody pushed you to eat the burger, is lost. When you run the inverse, you get the wrong answer: you say, oh, I was hungry one hour ago, and I say, no, that's not true, your friend asked you to eat together. You see, then I have an error when I do this inversion. So what can I do? Here is the idea. You can use your neural network again, but don't use it brutally. I use the neural network only in part of the model. For example, when I say "eat a burger," I split the information into two paths.
One path goes to the neural network; let the magic box do its job. For the other, I say, let's follow some simple linear regression so we have controllability. These two paths are added up. I also have side information over here; I ask, is there anything else you didn't tell me? That comes in too. If I design such a structure, I can immediately show, with a mathematical, algebraic argument, that I can invert it as a one-to-one mapping without any information loss. What did you learn? You learn, again, structure. You ask the neural network to do the specific thing you cannot model, but you also keep the other environmental factors under control, so it doesn't overreach and you get a one-to-one mapping. That's very critical. Any questions before I jump to the third and last one? About splitting the information and inverting the logic of burger and hungry? Yes. Is there an example of what the burger might be an analogy to in the power system? What would you want to invert? Right, yeah. The burger means power; in my domain, let me make it concrete. The voltage means you are hungry. What does that mean? Think about it. Nowadays we have so many power electronic devices with voltage set points. I remember, eight years ago, I invited Dr. Jia-Tai, Professor Devang. He said, put my magic box here and everybody will be happy. Of course he was showing people how his box can do the job, but it controls the voltage. And once you control the voltage, there's a voltage difference, and you know current flows from the higher voltage to the lower voltage, right? So that is the causal relationship from voltage to power. But of course, you can have the other direction. Someone says: I first declare that I'm going to consume five units of power. I just plug in my computer here, consuming five units. I don't care what happens.
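The two-path structure just described, a simple pass-through path plus a neural-network path whose output is added on, is exactly what makes the inversion lossless. Here is a minimal sketch in the style of an additive coupling layer; the architecture and random weights are my own illustration of the principle, not necessarily the exact design in the work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Random fixed weights standing in for a trained network.
W1 = rng.standard_normal((3, 2))
W2 = rng.standard_normal((2, 3))

def net(u):
    # Arbitrary nonlinear function; it never needs to be inverted.
    return np.tanh(u @ W1) @ W2

def forward(x):
    # Split the input; pass one half straight through (the simple,
    # controllable path) and shift the other half by a nonlinear
    # function of it. Invertible by construction.
    x1, x2 = x[:3], x[3:]
    return np.concatenate([x1, x2 + net(x1)])

def inverse(y):
    y1, y2 = y[:3], y[3:]
    return np.concatenate([y1, y2 - net(y1)])  # exact: no information lost

x = rng.standard_normal(6)
x_rec = inverse(forward(x))
print(np.abs(x_rec - x).max())  # 0.0
```

Because the nonlinear part is only ever added and then subtracted, the mapping is one-to-one no matter how wild the network inside is.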
Then people adjust their voltages in order to serve you those five units of power, right? But if they can't serve you five units, something comes back at your computer that you don't know about. That is a power quality issue. That's another lecture; I can talk to you offline about the synergy. Divergent thinking. Sorry, yes? I had a question in the inverse learning section. Sure. Just a clarification: you mentioned that you included linear regression with neural networks to prevent overfitting. Right. And why specifically linear regression? Why not, right? You're right, it's actually symbolic regression. As long as you have some physically interpretable part, that's good. If you learn machine learning, you typically learn Lasso, regularization; you regularize so it's not going to overfit. I used linear regression to make it easier, because I know students are coming from different backgrounds, policy, for example, from other departments. That's the reason. As long as you have symbolic regression, you can use anything: quadratic functions, and you can even use Lasso to penalize the parameter range and the number of parameters. Those are regularizations we can introduce all the time. Okay, and I can talk to you more later. Thanks. Sure. Okay, divergent thinking. I put you into Africa. Can you still do as good a job as at Stanford? Maybe not. However, we can try to ask the agent to do the same. For example, and maybe this goes a little beyond the normal requirements of this class, take the fake image again. I ask you: generate 1,000 images. Then I compare: oh, this one looks like the famous guy, maybe Yang, and this one does not. I say, hey, you generated the wrong thing, regenerate. These two networks work together to generate a fake image. That is the mechanism again. However, there's no control in there. Maybe it generates a dog image, but the dog looks very similar to Yang's face for some reason. Just a happy dog, or maybe a human-like dog.
These two agents cannot know, right? They only do the supervision. They don't know the mechanism, that Yang should have two eyes that are this far apart. There's no rule to regularize it. So how can we ensure this is a human image? Here is what I did. I said, generate, generate anything, but I put a controller here. After you generate an image, I look into the eye. I say, this should be a human eye, not some rectangular eye, right? In the power system, this rule means the quadratic function; that is the constraint I was asking for. I put the regularization over here. I said: when you generate the information, don't generate the voltage and power simultaneously. They have rich information inside. Generate the voltage first, then use the quadratic function to map it to power, and then use the voltage and power together for your final image. You can think of it this way. Originally, it generates the image of your whole face at once. Now, it first generates your eye, and second, based on where your eye is, my controller generates where your mouth is, right? And then I compare. That way, at least I can ensure your mouth is below your eye, rather than the mouth sometimes ending up above the eye. So that is the neural network design with physical guidance that I did. That's the mathematical part; let's go to the last one: structural teaching. So far, we have asked the robot to separate the information, so I know it's not doing anything illegal behind my back. Second, I asked it to invert the logic, so that it has a self-checking mechanism, and I have a checking mechanism on the robot. Third, I asked the robot to be flexible, so that it can handle many jobs I never trained it on. Finally, I want to give myself freedom, and give the robot freedom. I say: train your friends and do the job automatically. I'm going to sleep.
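The eye-then-mouth ordering can be sketched concretely: the generator proposes only voltages, and power is then computed by the fixed quadratic physics, so every generated sample is feasible by construction. The line parameters and noise levels below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

g, b = 1.2, -3.5  # assumed line parameters (illustrative only)

def power_from_voltage(v1, v2, theta):
    # The hard physical constraint: power is a fixed quadratic
    # function of voltage, never a free output of the generator.
    return g * v1**2 - v1 * v2 * (g * np.cos(theta) + b * np.sin(theta))

def generate(n):
    # The "generator" proposes only voltages (the eye); the
    # controller then places power where physics says it must
    # be (the mouth below the eye).
    v1 = 1.0 + 0.03 * rng.standard_normal(n)
    v2 = 1.0 + 0.03 * rng.standard_normal(n)
    th = 0.05 * rng.standard_normal(n)
    p = power_from_voltage(v1, v2, th)
    return np.column_stack([v1, v2, th, p])

samples = generate(1000)
# Every generated sample satisfies the power-flow constraint exactly.
residual = samples[:, 3] - power_from_voltage(*samples[:, :3].T)
print(np.abs(residual).max())  # 0.0
```

A discriminator can then be trained on these physically consistent samples; it never sees a "mouth above the eye" case.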
That is the last piece I'm going to talk about theoretically, and then I'll show you a demo, a nice demo. Over here, these are all-electric Navy ships. You can see there are some circuits over there, some for power, some generators, gas generators. You can put in a nuclear generator when your ship is very large. What people are asking us is: can I send the ship across the ocean, to another country, to stay there for maybe a season? I will not dispatch anybody to maintain it, and when it is damaged, I cannot send anybody. Can it sustain itself? So we try to train one of the robots to be a super robot. I send one ship maybe to the California coast, one to the New York coast, maybe one to South Africa. But I cannot train every ship for everywhere. I train this one well enough, and then I say: migrate your knowledge. How did you control your circuit, super robot? When part of your circuit goes bad, how do you use the other parts to maintain, maybe, the capability to come back to your home port? That is the transfer learning. Now the magic comes into the picture. Originally, this is the graph, right? You can see the ships are structured in different ways. What did I do first? I said: abstract. I don't care whether you have a huge generator or a small generator, whether you are a super huge robot ship or a tiny one; abstract your circuit into a graph. Second, identify yourself. Where are the generators? Oh, I'm here, in the north; I'm in the south. Okay. Identify the loads. You have a laser weapon, right? Oh, it's here, on the other side. Good, I know. That's property alignment. Third, please reshuffle your network so you look similar to each other. So I put a graph in the middle, a rule that both can align with. And finally, I said: trim your network. I don't want to see too many details, because I know the ships are different.
I just want the core part to be exactly the same as my original design. Then I transfer the knowledge, okay? And then you can go back to your original network. What did we learn here? Structure, right? You design your physical system with a structure, and if possible, build the neural network in a similar way, so that in the end you can transfer the knowledge with confidence. If in the end you cannot do this, if what's left is just one node, one node, one node, then you transfer nothing, right? So that's the balance: how much you reduce versus how much you transfer. All right, a small formal part, to show that I'm a professor. Sorry for the next five minutes. We define a graph: the vertices, the nodes, the edges, and then the data on top. You can think of it like this: you have a graph; maybe Yang is sitting here as a node, and then I connect over there, so I have an edge. And then over time, from 1 to capital N, I have different snapshots, since I remember what I was doing. I have my source grid, which is maybe the power grid in California, and my target grid in Arizona, and I ask them to talk to each other. California says: five years of data, recording all the critical events, too much solar, too many Tesla cars. Now, Arizona: five years later, you are going to face the same thing. Can you learn from me? Then we transfer the knowledge there. What I do is define the measurements, the X; the labels, like who you are, a generator, maybe a small vehicle; and then how you connect with each other, with a cable, or who is charging whom. I ask: what is your graph label? Normal, or abnormal. These are the matrices I designed. How do we design the basic properties of a machine learning method? We typically have a regularization, and a minimization or maximization. So this is my part: I do transfer component analysis, achieved by a trace objective plus regularization.
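That trace-plus-regularization recipe is essentially the standard Transfer Component Analysis formulation. Here is a minimal sketch of it; the RBF kernel choice, the `mu` and `gamma` values, and the toy source/target data are all illustrative assumptions, not the exact setup in the work.

```python
import numpy as np

def tca(Xs, Xt, dim=2, mu=1.0, gamma=1.0):
    """Minimal Transfer Component Analysis sketch (RBF kernel).
    Projects source and target data into a shared latent space
    where their distributions are brought closer together."""
    X = np.vstack([Xs, Xt])
    ns, nt, n = len(Xs), len(Xt), len(Xs) + len(Xt)

    # RBF kernel: the "go to high-dimensional space" step
    sq = np.sum(X**2, 1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

    # MMD coefficient matrix L and centering matrix H
    e = np.concatenate([np.full(ns, 1 / ns), np.full(nt, -1 / nt)])
    L = np.outer(e, e)
    H = np.eye(n) - np.full((n, n), 1 / n)

    # Regularized trace problem: top eigenvectors of
    # (K L K + mu*I)^(-1) K H K give the transfer components;
    # mu limits the parameters so we do not overfit.
    M = np.linalg.solve(K @ L @ K + mu * np.eye(n), K @ H @ K)
    vals, vecs = np.linalg.eig(M)
    W = np.real(vecs[:, np.argsort(-np.real(vals))[:dim]])
    Z = K @ W
    return Z[:ns], Z[ns:]

# Source vs. target drawn with a mean shift, a toy stand-in for
# "California data" vs. "Arizona data".
rng = np.random.default_rng(4)
Xs = rng.standard_normal((40, 3))
Xt = rng.standard_normal((40, 3)) + 1.0

Zs, Zt = tca(Xs, Xt)
print(Zs.shape, Zt.shape)  # (40, 2) (40, 2)
```

A classifier trained on `Zs` with source labels can then be applied to `Zt`, which is the transfer step.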
I know it's hard; just remember this part. I try to keep the parameters limited. I don't want you to tell me too much; I don't care what you do every day and every hour. Tell me something compact. Then I'm not going to overfit. Second, over here, it contains the kernel. I ask: what is the data? Voltage and power, and then error. Okay, can you do a kernel? You say, what is a kernel? The kernel is the trick. For example, you have two components in California and I have two components in Arizona. But maybe here it's a big generator and a small car, and there it's a small generator and a big car. That means I cannot transfer easily. I need a kernel to blend the message, to do some normalization, right? To turn a nonlinear relationship into a linear relationship. That's the kernel part. And finally, I have my parameters that fit the data, which generate the kernel, and then I do the weighting. So in summary: source domain data, target domain data; I say, go to a high-dimensional space so that you are linear. Once you are linear, learn the mapping, do the latent space learning, the W, and the kernel. And then I add one more thing: our human biases. If you learn machine learning, there are hyperparameters; in Bayesian theory, there's a prior distribution. That is what I put here. I say: I have some knowledge of what you do. I don't want you to start from zero. Put some prior information here: what is the priority, what is not. And then, the steps. I start with a random walk on the graph. You may ask, what is a random walk? A random walk tries to explore. Can I go there? Oh, no. Okay, try the other part. Oh, no. I try to sense what is around me. Second, once I have a good understanding of what is measurable and what is transferable, I make it concrete: fix the kernel, go to the high-dimensional space, and learn the parameters. Finally, I put in my biases.
As the instructor I have some privilege to define the biases, and then I do the regularization. However, at the end of the day, this paper didn't work out. My students spent 10 days and the simulation was still running. Think about it: this is very realistic, too much data and too many differences between the data sets. And then my student observed something. What do we have? We don't have random signals in the power domain, right? We have a line trip: it shuts off, something goes down immediately, and all the signals look the same. Going down, maybe oscillating; they are the same, right? The only differences are the mean and maybe the fluctuation, the tendency, the pattern. But most of the time they just go down, or oscillate. So that means I don't need to learn from scratch, like from 1,000 people or 10 million people together. I just need to remember what is different. Everybody has two eyes; I don't need to care about the eyes. Maybe the behavioral differences are what I'm trying to learn. That is what we are saying here: different signals are very similar; they fall into their own classes. Then what we do is coarsen the learning. For example, you have five generators here, right? And I have seven. I'm proud, I have more generators. I don't care: put one generator there to aggregate what you have. That's enough. You have five loads; I have my Tesla car, my electronics, I mean, my laptop. You have five, I have seven, I'm proud. I don't care: condense them to one. This coarsening makes everything abstract and condensed and makes the learning much more efficient. So that's the idea. Finally, validation; the demo is coming. Before I show you the demo, I want to show you how we do the coding. For the coding, you may say, just use Python. In my class, students downloaded the TensorFlow package and some PyTorch; the results were a mixed bag, good and bad, and I could tell you about it for an hour.
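The coarsening idea a moment ago, aggregate same-type nodes before comparing two grids, can be sketched in a few lines; the node names, types, and megawatt numbers here are made up for illustration.

```python
# Coarsen a grid graph: collapse all nodes of the same type into one
# aggregate node, summing their capacities, so two different grids
# become directly comparable. Node data are invented.
nodes = {
    "g1": ("gen", 50), "g2": ("gen", 30), "g3": ("gen", 20),
    "l1": ("load", 10), "l2": ("load", 25), "l3": ("load", 40),
}
edges = [("g1", "l1"), ("g2", "l2"), ("g3", "l3"), ("g1", "l3")]

def coarsen(nodes, edges):
    # Sum capacities per node type ("five generators become one").
    agg = {}
    for name, (kind, mw) in nodes.items():
        agg[kind] = agg.get(kind, 0) + mw
    # Any edge between the two groups collapses to a single edge.
    coarse_edges = {tuple(sorted((nodes[a][0], nodes[b][0])))
                    for a, b in edges if nodes[a][0] != nodes[b][0]}
    return agg, coarse_edges

agg, ce = coarsen(nodes, edges)
print(agg)  # {'gen': 100, 'load': 75}
print(ce)   # {('gen', 'load')}
```

After coarsening, a seven-generator grid and a five-generator grid both reduce to the same small abstract graph, which is what makes the transfer tractable.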
What we learned is that for the power system, you code once. When you go to the water system, you code a second time, because the signals are very different from each other. When you go to the transportation system, you code a third time. That is quite inefficient. So what we did in this platform is abstract it: just tell me your graph, tell me your input and output, independent of the domain knowledge. This is what we built. We have the simulation layer; we define the graph objects; and then we have a network representation model. Anything coming from the application layer, we reduce all the way down to a graph. That is over here: many graphs. On the graph, we do the Python graph editing and the graph replication. You have one power grid in California, I have 10 transportation systems in Arizona: do the replication, handle all the machine learning over here, and finally go back to your application layer. This means our code can serve more than one application. This is the data set, and you can see the Department of Energy spends a lot of money on this type of analysis; this is easily $20 million. We work with the data from the utilities, visualize how the data relate to each other, and do the machine learning. This is the picture I showed you, Pittsburgh, Pennsylvania, and this is New York. Pittsburgh is in the west of Pennsylvania, where we have a data partner. Visualize the data, do the data cleaning, and then learn and transfer. This is software we built for EPRI. Does anybody know EPRI? Oh my god, EPRI is right here in Palo Alto; it's the best research institute for electric power, and I'm sorry nobody knows it. They invested a lot of money in my group. So we do the machine learning part: naive Bayes, decision tree, logistic regression, support vector machine, hybrid methods. Then you can see we can do detection.
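Detection of this kind can be illustrated with a toy voltage-sag detector. The 114.5 V level echoes the demo numbers below, but the window, threshold, and data are my own invention, not the platform's actual logic.

```python
import numpy as np

# Toy PMU-style stream: nominal voltage with noise, then a sudden
# sag at sample 600 standing in for a fault.
rng = np.random.default_rng(5)
v = 114.5 + 0.05 * rng.standard_normal(1000)
v[600:] -= 0.8  # the event: voltage steps down

def detect_sag(signal, window=50, drop=0.5):
    """Flag the first sample that falls more than `drop` volts below
    a rolling-mean baseline; small oscillations are treated as noise."""
    for i in range(window, len(signal)):
        baseline = signal[i - window:i].mean()
        if baseline - signal[i] > drop:
            return i
    return None

t = detect_sag(v)
print(t)  # 600
```

The rolling baseline is what lets the detector ignore the high-frequency noise mentioned in the streaming demo and fire only on the real step change.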
We can tell you where the outage is happening, in your solar panel or my solar panel. I can tell you what happened: it burned, or maybe there was a short circuit. So yeah, let me show you now. This is the software. Typically we load the information and pre-process the data; that is the graph layer. Then we get all the sensors loaded over here, choose the method, say logistic regression, run the event-type analysis, and the signal comes in. And you can see here, we have all the information over here. Now let me show you the streaming part. This is a signal from openPDC; you can think of it as a signal from a real device, feeding our platform. Over here, you can see the signal starts from one computer, and there's a lot of oscillation. When we see oscillation like this, we know it's noise, so we don't do much; we just extrapolate the low-frequency signal. Then it ramps up, still noisy, but suddenly it reaches the real signature, and we conclude it's time to pick out the signal. So you can see: this is a fault, something bad happened. The voltage dropped from 114-something to something lower. Then you go to the next cycle and you can see we have a detection: what year, what month, what happened, what is the reasoning, and which crews I need to dispatch down to the field to do the repair. Now I'm going to finish my talk quickly. How much time do I have? To 30? 17 minutes. Right, okay, we're on track. Where's my slide? Over here. This is another one we did for Pittsburgh, Pennsylvania. We load the data from the field, and from the data we learn the mapping rules, right? V to P; you get some conductances, you get some connectivity. Then we use that to connect all the information together.
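Learning connectivity from voltage data can be illustrated with a correlation-based sketch: electrically close buses have the most correlated voltages, so a radial topology can be recovered as a maximum spanning tree over correlations. The three-bus feeder and noise levels are invented, and this simple tree heuristic stands in for whatever the actual software uses.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy feeder: bus 0 feeds bus 1, bus 1 feeds bus 2. Each bus adds
# its own local noise, so electrically adjacent buses stay the most
# correlated, which is the signal smart-meter data carries.
n = 2000
v0 = 1.0 + 0.02 * rng.standard_normal(n)
v1 = v0 - 0.01 - 0.005 * rng.standard_normal(n)
v2 = v1 - 0.01 - 0.005 * rng.standard_normal(n)
V = np.column_stack([v0, v1, v2])

C = np.corrcoef(V.T)

def max_spanning_tree(C):
    # Recover the radial topology as a maximum spanning tree over
    # pairwise correlations (greedy Prim's algorithm).
    nbus = len(C)
    in_tree, edges = {0}, []
    while len(in_tree) < nbus:
        best = max(((i, j, C[i, j]) for i in in_tree
                    for j in range(nbus) if j not in in_tree),
                   key=lambda e: e[2])
        edges.append((best[0], best[1]))
        in_tree.add(best[1])
    return sorted(edges)

tree = max_spanning_tree(C)
print(tree)  # [(0, 1), (1, 2)]
```

The recovered tree matches the true feeder: 0-1 and 1-2, even though 0 and 2 are also correlated, because the direct neighbors correlate more strongly.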
The CEO said this was the first time they could look into the secondary network, how people are connected with each other. He was very happy. But what did I do? I just loaded their data, did the learning, and reconstructed the information. It's very cheap. That's also how you can build a product layer; I mean, not me, but a lot of software engineers can do that, right? Do the coding, put it into the utility, load their information, and then they know where to prepare for their solar panels.

This is my lab. It's an electrical engineering lab. We have the protection devices. Over here you can see many transformers; these transformers are mimicking San Francisco, Las Vegas, and Phoenix, so there's a triangle over here. Then I have the conductors; you can see the yellow one. This is very expensive, but these are basically the conductors between the two cities. And over here, this is the place where we create an event. Sometimes we set off the fire alarm because something is burned. Anyway, we try to trip a line, to mimic, say, a tree touching the transmission line, and then it burns. But the system is connected, so you can have a cascading failure. Then we have the protection devices to manage the connections here and there so that the rest of the system is maintained. I have the OPAL-RT. This is just a small system; this one is a hardware-in-the-loop system that can scale. It has circuitry inside, like FPGAs, and a real solar panel connected to it. And this is a solar panel with a micro-inverter.

And this is our validation. With real data and actual systems, people on different continents, from Europe, from Asia, can understand what we are doing. I do the testing on small cases: 8 buses, like the eight students here, and 123 buses, three times the students here. I also applied our algorithm to the utility dataset; I showed you the pipeline, remember? Questions? No?
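The connectivity-learning idea above can be sketched as a toy: meters behind the same transformer see correlated voltage variation, so pairwise correlation can suggest who is connected to whom. The data below is synthetic; the real utility study is of course much messier.

```python
# Hedged toy version of the connectivity-learning idea: meters on the same
# feeder share voltage variation, so correlation hints at connectivity.
import numpy as np

rng = np.random.default_rng(2)
T = 500
feeder_a = rng.standard_normal(T)   # shared variation on feeder A
feeder_b = rng.standard_normal(T)   # shared variation on feeder B
meters = np.stack([
    feeder_a + 0.1 * rng.standard_normal(T),  # meter 0 (feeder A)
    feeder_a + 0.1 * rng.standard_normal(T),  # meter 1 (feeder A)
    feeder_b + 0.1 * rng.standard_normal(T),  # meter 2 (feeder B)
])
corr = np.corrcoef(meters)
# Meter 0's likeliest peer is whichever other meter correlates most with it.
peer_of_0 = max((1, 2), key=lambda j: corr[0, j])
```

This is the cheapness being described: no new hardware, just the smart-meter data the utility already collects.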
Okay, sorry, there is a question. You can see that for our method there are the fully observable part, the partially observable part, and the unobservable part. For the fully observable part, think about places like Las Vegas or San Francisco. There, the physical method, the old method, is good. They don't need me; they'd fire me. They'd say, your algorithm is not even as good as our existing solution. But what you find is that when the system is only partially observable, the old methods do not work, because they always ask: how are things connected? Tell me the transformer locations. In this case, you can see our method works very well. The error goes from about 0.01 down to 0.008, a 20% error reduction.

Before I show you the conclusion, let me do another demo, and then I'm open to questions. This is a platform we did for Hitachi. It's much cooler than the previous one from my group. This is also visualization. You may say: you talk about machine learning, but at the end of the day, what can I see? How can I make money? First you have a username and a password, and you log in. Then you visualize your grid. The grid has different sections, different colors, and you have layers to look at. If I'm the engineer in the control room, I focus more over here. Then I have different layers to show what the voltage issues are, what the outage issues might be. And I can look into different topologies, so the AI-driven method expands layer by layer. It's like the AI is assisting me to look into the grid in a much better way. On the left-hand side, you can also look into the equipment: a generator, maybe a capacitor bank, or a load like a Tesla car, maybe a Powerwall, or some edge devices. And you can look into the load information; you can see how the power is flowing, estimated virtually, even when you do not have enough sensors. This is coming from three feeders.
And on the right-hand side, I can also visualize GIS information. If you use Google Maps, you can try the traffic layer, maybe the satellite layer, maybe the terrain layer; you choose how you visualize. We can do the same. You can also look around. On top, you can choose the time. You can look at the voltage averaged over the day, or you can say the maximum; this comes from a basic data analytics tool. Then you can look into the phasors: phase A, B, or C. This is critical for how you plug a big load into your grid for sustainability. And you can also change the time and day, so you can look into the future and also into the past.

All right, now I'm going to conclude, I mean, give you time to ask me anything. So today, what I showed you is that we want decarbonization as soon as possible, at least where I live. For that, we need to put in a lot of renewable energy and power-electronic devices, and so we will have more components that were never in the grid before. What we are targeting is 100% electrical energy and transportation. With that we get a lot of data, but we also get trouble at the edge of the grid, in some remote areas: when a sensor is damaged, when the data has communication errors, when the computer is not working locally, can I still maintain electricity stability everywhere? You may say: I don't care; as long as it's not my house, it doesn't matter. But think about it. In the past, there was one big generator and you were a load. If you misbehaved, I shut you off; two days later I pay you $10 and reconnect you, I apologize, I give you a coupon. But now there are many mini generators in your home and in other homes, and you are supporting each other. If you go out, that can trip another outage, and then it's like a chain, right? It has a chain reaction. So it can become a cascading failure over a wide range. That is why we pay so much attention to having controllability even in remote areas.
And that is why AI can kick in, to ensure the scalability with a cheap solution. I hope that with what I explained today, you are at least partially convinced that this method can help decarbonization. I'm ready for any questions. Yes.

So I'm not familiar with the background, but I know that with enough generators you could have islanding effects. The grid is massive, but if a portion is severed, it's possible that the severed portion is still generating power. Would this be able to address or manage that islanding?

Well, that's good, but why should I address it? A hospital can go into an island, right? When there's an outage, it tries to isolate itself, and the good thing is that it can still do operations for the patients. So islanding is not bad all the time; sometimes it's good. When you are talking about islanding, I think of it more like this: I work with utilities, and I want to do some maintenance. For example, there are rich people who bought a lot of Tesla cars, so I need to change the transformer; otherwise I cannot support the current demand, they will complain to my boss, and I'm fired. So I go to the substation and try to do the operation, and I cut off their power. But the circuit is still energized: they created islands with their Powerwalls and Tesla cars. Then I get shocked and end up in the hospital. So that is a bad case of islanding. It depends. But good question. Yes.

On one of the last slides, where you had the metrics for your neural net, what was the MSE measuring?

Oh, this is mean squared error. There we were measuring the mapping error. For example, I'm mapping from V to P. Let's say it's a quadratic function: V is one, P is one; V is two, P is four; V is three, P is nine. A quadratic function. Then I try to hide it from the AI agent.
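The quadratic toy example above, written out: hide the rule P = V² from the learner, fit a model, and score it with mean squared error, the (prediction − truth)² idea being described. A polynomial fit stands in for the neural network here just to keep the sketch tiny.

```python
# Toy version of the V -> P example: the learner must recover a hidden
# quadratic mapping, and mean squared error scores the mismatch.
import numpy as np

V = np.linspace(0.0, 3.0, 50)
P = V ** 2                              # the hidden quadratic mapping

coeffs = np.polyfit(V, P, deg=2)        # the learner "discovers" the rule
P_hat = np.polyval(coeffs, V)
mse = float(np.mean((P_hat - P) ** 2))  # mean squared error of the mapping
```

Because the fit here can represent the true rule exactly, the MSE is essentially zero; a neural network on noisy field data would leave a residual error, which is the number reported on the slide.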
People use a neural network to learn the same mapping. When you input one, it may not output exactly one; when you input two, it may not output four, maybe it outputs five. Then there's a mismatch, and I use (five minus four) squared, so it's positive. I try to minimize that error in my algorithm, and in the validation I'm estimating how well the AI has learned the mapping. So, mean squared error, for that question.

What was the actual output of that? P. What is P? Power. Right, from V to P. Of course, you can also go the other way; the root of P is sometimes related to V, right? Anybody can do the mean squared error estimation, depending on what you want to estimate. You can also ask about faults, whether the state is normal or abnormal; that's another metric you can use. Yes.

When you were talking about coarsening in the learning process, you mentioned the node aggregation part, that it reduces computing time and increases performance.

It's very similar to regularization. In training, it may not increase the performance; the training error can be bigger. But it helps the testing error, especially when the training data, testing data, and validation data are properly separated. The error can go up or down. Our theory here is that when you go to some operating point far away, this coarsening makes the model much more robust. For example, right now let's say the voltage is 110. Then 10 Tesla cars come over, and my voltage drops all the way to 90. The system never saw that in the past; maybe that's crazy, right? Then my coarsening may have an aggregation that says: oh, 90, don't worry, I will first scale it back to 110, do the normal operation, and then scale the result back. That is the coarsening part: try to approximate the unseen scenarios with something you saw in the past.
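The scale-back idea just described can be sketched as follows. The nominal value and the toy model are illustrative assumptions, not the actual coarsening implementation.

```python
# Hedged sketch of the scale-back idea: if the model only ever saw voltages
# near 110, map a never-seen 90 into the familiar range, run the model,
# then undo the scaling on the output.
NOMINAL = 110.0

def robust_predict(model, voltage):
    scale = NOMINAL / voltage        # e.g. 110/90 for an unseen sag
    familiar = voltage * scale       # back at the trained operating point
    return model(familiar) / scale   # undo the scaling on the output

# Toy model only trained around 110 V: say P = 0.02 * V near nominal.
model = lambda v: 0.02 * v
p = robust_predict(model, 90.0)      # behaves as if the model had seen 90 V
```

The design choice being made is that it is safer to answer a transformed version of an unseen question than to extrapolate a learned model far outside its training range.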
And I know it's more used for working with simpler networks, but why weren't more traditional regularization methods, like L1, L2, and dropout, used? Were they not satisfactory in this case?

Of course, right? If they were satisfactory, I'd lose my job. L0, L1, and L2 regularization are for fairly typical problems, something from the classroom. What I'm talking about here is a challenge we see with heterogeneous datasets. For example, when you were doing training, did you ever consider two different types of network where you need to do a transfer? Probably not. You focus on one dataset, maybe a hospital dataset, looking at brain images, sick or not sick; but the input is always the same, 64-by-64 images all the time. What I'm talking about here is changing 64-by-64 into 1000-by-one-million. You can see it's not only rescaling, it's also reshaping. So what I had over there is transfer learning across graphs that do not have much consistency. That's the challenge. I cannot just use L0 and say: use fewer parameters. It doesn't make sense; you need more parameters, it's a bigger network. Thank you. Sure.

All right, a question from JY. Let me read it out loud: "Hi Yang, great to learn about your work. For your PMU work, how do you make decisions on whether and where to dispatch a crew if you find anomalies?"

Thank you, JY. It depends on the number of PMUs. If I have only one PMU, then I can only tell you: is it to the east or to the west? I can only tell you a region; I cannot tell you, oh, it's exactly there, when you have only one sensor. So what we do is divide the system depending on how much observability you have. The first case is just east versus west. If you have 10 sensors, I can make 10 zones: oh, it's in zone 1, so it's there. If you have more PMUs, I can narrow it down and send the crew to the field more accurately.
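The zoning answer above can be sketched as: with k sensors, split the system into k equal zones and report which zone an anomaly falls in, so more sensors give finer localization. The bus layout and numbers below are made up for illustration.

```python
# Hedged sketch of the PMU zoning idea: k sensors -> k equal-width zones,
# so localization gets finer as sensors are added.
def zone_of(bus, n_buses, n_sensors):
    """Map a bus index to one of n_sensors equal-width zones."""
    width = -(-n_buses // n_sensors)  # ceiling division
    return bus // width

# Two zones: only "east vs. west". Ten sensors: ten zones.
coarse = zone_of(bus=70, n_buses=100, n_sensors=2)   # zone 1 of 2
fine = zone_of(bus=70, n_buses=100, n_sensors=10)    # zone 7 of 10
```

Real PMU placement follows electrical distance rather than equal bus counts, but the adaptive principle is the same: the crew's search area shrinks as observability grows.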
So yeah, depending on what you have, the AI will also adapt to find the location. If you have measurements everywhere, I can tell you it's exactly there, at that red place. That's my answer to JY. Thank you. Any more questions? Yes.

About the Hitachi video that you showed at the end: you mentioned that you can also add GIS information into it. Is that only for visualization, or can the GIS data be used for other purposes?

Yes, it can be useful. Let's say I try to identify a solar panel. Who has a solar panel? In some states, there are people who do a solar installation and don't tell the utility, maybe for economic reasons. Then you want to find the solar panels from Google images. What we do is look at the Google images and say, oh, it's there: the solar panel you could not find otherwise. It's a very cheap solution. So that is one application where GIS information matters. Second, let's say you get $1 billion tomorrow and you buy some fancy cars; then I need to understand which transformer your house is connected to. GIS information is very important there, because your house is not going to connect to some substation in New York, right? So I will not only use the latitude and longitude, the Euclidean distance, to help me decide who is connecting to whom; I sometimes even use the curvature from the Google map. For example, the poles are typically along the road, on the two sides of the road, so I try to calculate this curvature, how the line grows, like a snake. That information is also one kind of GIS: how people constructed the buildings and the paths in the past. There is more, but GIS is definitely important, because we live in a 3D world, right? That's how we evolved. It's like a free sensor. Thanks. All right. Okay. Thanks a lot.