Welcome to the first smart grid seminar for this quarter, spring 2021. Our speaker today is Professor Fangxing Li from the University of Tennessee, Knoxville. Before we introduce the speaker, I want to show you the list of presentations for this quarter. Our next seminar is next week, same time, 2:30. The speaker is Professor Ross Baldick from UT Austin. This quarter we have some speakers from China and Australia, so it should be very interesting. We're very happy to have Professor Fangxing Li speak with us today. He received his Bachelor's and Master's degrees from Southeast University in Nanjing, China. He worked at ABB Consulting from 2001 to 2005 as a senior engineer and then a principal engineer. He is currently a full professor at the University of Tennessee, Knoxville, where he is the James McConnell Professor in electrical engineering. His research interests include renewable integration, demand response, power markets, power system control, and power system computing. Today he's going to talk about a large-scale testbed for power grid controls. So Professor Li, I'll let you share your slides. Thank you, Chen Wu. Good afternoon, everybody. First of all, I want to thank Dr. Liang Ming for inviting me to give this talk for your smart grid seminar at the Precourt Institute for Energy at Stanford, and I'm very honored to be a guest speaker here. I also want to thank Chen Wu and Vahila for providing the logistical support for my seminar, and Chen Wu for the introduction. The title of my presentation is CURENT Large-scale Testbed, LTB, as a Virtual Power Grid for Closed-loop Controls in Research and Testing. This testbed is commonly known as the CURENT LTB in the rest of this presentation and in the electric power system research community. This work on the CURENT LTB would not have been possible without the funding support from CURENT, which I will mention a little later on. The CURENT LTB is a big project by a great project team.
I'm very grateful to our team, with several main contributors as follows. The first is Dr. Kevin Tomsovic, who co-leads the CURENT LTB project and is also the overall CURENT director. Kevin initiated the idea of developing the CURENT LTB back in 2015 and then asked me to drive this work. I'm grateful for his trust in me to lead the effort. Also, I want to thank Dr. Hantao Cui, who is the chief technologist for this effort. Hantao is my former PhD student and now a research assistant professor at CURENT. He is a very brilliant young man and will be a tenure-track assistant professor at Oklahoma State University starting this fall semester. I also want to thank Dr. Joe Chow and Dr. Ali Abur as senior technical advisors for the CURENT LTB, and certainly many other faculty members and graduate students within CURENT contributed to the work as well. I'm really fortunate to work with all of them on this CURENT LTB project. Next, I want to borrow a minute to briefly talk about my department and my university. My school is the University of Tennessee at Knoxville, UT or UTK. It was established in 1794, four years before the establishment of the state of Tennessee, so we have a really long history. We are best known to the general public for our magnificent stadium, which can hold more than 100,000 people; it's our landmark. If you have a chance to visit us in the future, I will definitely show you around our campus and this huge stadium. Our Department of EECS is located in the Min Kao Building, opened in 2012. We have about 50 faculty members, with four US NAE members and 10-plus IEEE Fellows. In the power area, we have five professors in power systems and five in power electronics, and the department is the host and headquarters of the CURENT research center. Our EECS department's current undergraduate enrollment has reached about 800, and we have 170 PhD students.
Our research expenditure recently reached more than half a million as of fiscal year 2019, so we are ranked pretty high by some metrics, such as PhD students per faculty member and research expenditure per faculty member. All right, so now let's talk about CURENT. It's C-U-R-E-N-T, with a single R. It stands for Center for Ultra-Wide-Area Resilient Electric Energy Transmission Networks. Like I mentioned, it's a single R; you know, university engineering professors can't do spelling very well, and by the time we realized we had missed one R, it was too late. So now we try to avoid mixing up "CURENT" and the word "current." Anyway, CURENT is funded by the US National Science Foundation's Engineering Research Center program, and the funding comes from both the US NSF and the Department of Energy. The lead university of the CURENT research center is the University of Tennessee, with three partner schools: RPI, Northeastern, and Tuskegee. Our base budget is $4 million per year from 2011 to 2021; this is the 10th and final year of CURENT. It's the first and only NSF ERC dedicated to bulk power system research, and we have more than 30 industrial members. Here is some introduction to the CURENT vision. It is to develop control and modeling approaches for a nationwide grid that is fully monitored and dynamically controlled for high efficiency, high reliability, low cost, better accommodation of renewable sources, and full utilization of storage and responsive load. On the educational side, our mission is to produce a new generation of electric power and energy system engineering leaders with a global perspective, coming from diverse backgrounds. Here is an overview of the systems in the CURENT research scheme, which includes three planes: fundamental knowledge, enabling technology, and engineered systems.
These interact with four research thrusts: monitoring, modeling, control, and actuation. The engineered systems basically include the hardware testbed, or HTB, and the large-scale testbed, which is software-based. The testbeds drive the center's research program via system specifications and closed-loop scenario studies. Background: why the CURENT LTB, or why we wanted to build this large-scale testbed. Researchers in the power system community often struggle to obtain realistic data for research, since we don't really own or operate any power grid. Also, researchers like us need to use multiple tools and manually create interactions among different software packages to achieve our research goals. For instance, if someone wants to test a real-time control algorithm, he or she needs to use a dynamic simulation tool to produce mock measurement signals that are fed to another piece of software as control inputs. Then the output from that control module is fed back to the dynamic simulator to verify the control algorithm. During this process, either manual work or some scripts are needed to facilitate the closed-loop interaction. Our goal is to automate this process so that the user can make one button click, or maybe a few button clicks, to finish the closed-loop interaction. Here, the closed-loop automated interaction is really the key; this is what differentiates this large-scale testbed from typical simulation tools. Meanwhile, we also want standardized models and test systems with high-penetration renewables for easy benchmarking among different research results. In short, the LTB is a closed-loop simulation platform plus large-scale system models. This slide shows the overall architecture of the LTB modules. On the top, we have the CURENT models with HVDC overlays as well as wind scenarios for our research.
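The simulator-measurement-controller cycle described above can be sketched as a toy loop. This is a minimal sketch, not LTB code: the one-state "simulator," the noisy measurement, and the proportional control law are all hypothetical stand-ins for the real dynamic simulation engine, mock PMU signals, and research control module.

```python
import random

def simulate_step(state, control_input):
    """Hypothetical one-step dynamic simulation: a simple first-order response."""
    return state + 0.1 * (control_input - state)

def make_measurement(state):
    """Produce a mock measurement signal from the true state, with small noise."""
    return state + random.gauss(0.0, 0.01)

def controller(measurement, setpoint=1.0, gain=0.5):
    """Placeholder control law: proportional feedback toward the setpoint."""
    return measurement + gain * (setpoint - measurement)

# Closed loop: simulator -> measurement -> controller -> simulator, automated,
# with no manual hand-off between the "tools" at each stage.
random.seed(0)
state = 0.0
for _ in range(200):
    meas = make_measurement(state)
    u = controller(meas)
    state = simulate_step(state, u)
print(round(state, 2))  # settles near the setpoint of 1.0
```

The point is the wiring, not the physics: each arrow in the LTB data flow corresponds to one of these function calls happening automatically inside the loop.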
On the bottom, we have modules representing conventional EMS functions such as state estimation, dispatch, and visualization. On the left, we have the possible simulation engines for power system dynamic studies, such as our in-house ANDES tool, Opal-RT's ePHASORSIM, GridDyn (LLNL's, Lawrence Livermore National Laboratory's, grid dynamics tool), or other customized modules. On the right, we have the new research algorithms and controls, which represent the research results from the users of this LTB. All four blocks are connected by the data streaming and communication network module, which I will elaborate on later. Here are some design considerations, such as interoperability with a modular architecture. We also want to consider measurement-based control interactions to simulate PMU sampling and streaming, and we need large-scale models, such as a 1000-bus North American power grid model with projected HVDC overlays. Here, I want to say a few words about the grid simulation engine. Right now, we have our own in-house Python-based tool called ANDES, which has a built-in library for fast prototyping and so on. ANDES is like a white box to us, so we can do whatever we want for fast prototyping of modules and routines. We also have interfaces to other research tools, like GridDyn, developed in C++ with features like a connection to Modelica and high-performance computing, and to commercial tools like ePHASORSIM, which supports a Modelica and Python interface and so on. Our goal is to use our own white-box Python-based engine, ANDES, for fast prototyping on relatively smaller systems, from a few hundred buses to a thousand buses. Then, once verified, we can utilize commercial grid tools for much larger-scale verification at the scale of tens of thousands of buses. Here, I want to talk a bit more about our in-house tool, ANDES.
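To make concrete what a grid simulation engine's core routine does, here is a toy Newton-Raphson power flow for a two-bus lossless system (one slack bus, one load bus). This is a generic textbook sketch, not ANDES code; the load, reactance, and tolerance values are illustrative.

```python
import math

def mismatch(theta2, v2, p_load=0.5, q_load=0.2, v1=1.0, x=0.1):
    """Active/reactive power mismatch at the load bus of a 2-bus lossless line."""
    p_recv = (v1 * v2 / x) * math.sin(-theta2)           # power arriving at bus 2
    q_recv = (v1 * v2 * math.cos(theta2) - v2 ** 2) / x  # reactive power at bus 2
    return p_recv - p_load, q_recv - q_load

def newton_power_flow(tol=1e-8, max_iter=20, h=1e-6):
    """Newton-Raphson with a finite-difference Jacobian (fine at this tiny scale)."""
    theta2, v2 = 0.0, 1.0  # flat start
    for _ in range(max_iter):
        f1, f2 = mismatch(theta2, v2)
        if abs(f1) < tol and abs(f2) < tol:
            return theta2, v2
        # 2x2 numerical Jacobian of the mismatch equations
        j11 = (mismatch(theta2 + h, v2)[0] - f1) / h
        j12 = (mismatch(theta2, v2 + h)[0] - f1) / h
        j21 = (mismatch(theta2 + h, v2)[1] - f2) / h
        j22 = (mismatch(theta2, v2 + h)[1] - f2) / h
        det = j11 * j22 - j12 * j21
        theta2 -= ( j22 * f1 - j12 * f2) / det   # Newton update: -J^{-1} f
        v2     -= (-j21 * f1 + j11 * f2) / det
    return theta2, v2

theta2, v2 = newton_power_flow()
print(theta2, v2)  # load-bus angle (rad, slightly negative) and voltage (p.u.)
```

A production engine like ANDES solves the same kind of mismatch equations, just with thousands of buses, sparse Jacobians, and generated rather than hand-written code.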
Available functions include power flow for AC-DC hybrid systems, time-domain integration, eigenvalue analysis, and some plotting and data-streaming interfaces. We also have unique features like symbolic modeling. For available models, we have 49 models included, such as generator classical (GENCLS) models, PV models, and wind turbine models. We support several data formats, like our own ANDES data format as well as PSS/E and MATLAB formats. Supported test systems include NPCC 39- to 140-bus systems and also the WECC, EI, and ERCOT systems with 50 percent wind and 30 percent PV. The open-source distribution can be found here. In the past couple of years, we developed a hybrid symbolic-numeric framework to enhance and improve the previous dynamic simulation engine of ANDES; this is basically the new generation of ANDES. Symbolic programming basically means that writing code for complex mathematical equations is just like typing an equation in Word. Once you follow the symbolic modeling language, we have code generation, which generates the code for solving the power flow, evaluating the Jacobian, and performing numerical integration for the system dynamics. In other words, once you type the equations, the symbolic library generates the Python code for the programmer or the user of this tool. This significantly reduces the effort of researchers, so they can focus more on their own research rather than spending a lot of time coding and debugging. Certainly, we also need traditional numerical programming in the loop for some customized functions; this is shown in the blue blocks here. The yellowish blocks represent the symbolic programming part. That's why this approach is called a hybrid symbolic-numeric framework. As the engine is really the critical part of this platform, we did a lot of verification of our simulation engine. Here it shows our benchmark with the NPCC 140-bus system using TSAT, which is a commercial tool for power system dynamics.
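The "type the equations, get the code" idea can be illustrated with a tiny stdlib-only code generator. This is a toy stand-in for ANDES's symbolic framework, not its actual implementation: the model is the classical swing equation written as plain strings, and all names (`swing_model`, `generate_rhs`) are made up for this sketch.

```python
# Toy code generator: turn equation strings into a compiled Python function.
# Illustrative swing-equation model: rotor angle and speed derivatives.
swing_model = {
    "delta_dot": "omega",
    "omega_dot": "(p_m - p_e - D * omega) / M",
}

def generate_rhs(model, variables):
    """Emit Python source for the model equations and compile it."""
    args = ", ".join(variables)
    body = "\n".join(f"    {name} = {expr}" for name, expr in model.items())
    ret = "    return (" + ", ".join(model) + ")"
    src = f"def rhs({args}):\n{body}\n{ret}\n"
    namespace = {}
    exec(src, namespace)  # compile the generated source
    return namespace["rhs"], src

rhs, source = generate_rhs(swing_model, ["delta", "omega", "p_m", "p_e", "M", "D"])
print(source)                                 # the generated function, readable
print(rhs(0.1, 0.02, 0.8, 0.7, 5.0, 1.0))    # (delta_dot, omega_dot)
```

The user writes only the equations; the generator handles the boilerplate. A real symbolic framework goes further, also deriving the Jacobian from the same equations automatically.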
As shown in these two diagrams, the dynamic simulation results match very well with the results from commercial tools like TSAT. The left diagram shows the frequency at buses 22 and 25. We're supposed to have four curves, but we seemingly have only two curves here. The reason is that the curves from ANDES and TSAT match each other so well that they fully overlap; that's why it looks like only two curves, but actually there are four. The same can be observed in the right diagram for the terminal voltage at buses 21 and 25. Next, I want to talk about the LTB-PMU module. This slide shows the measurement model built into the LTB, called LTB-PMU. After the simulation engine performs the dynamic simulation, the ideal results are processed in the following steps. First, we add a delay model. Then we add some measurement noise to the data, and then we may consider loss of data at some probability. The result is realistic, non-ideal measurement data. We can then send this non-ideal, realistic measurement data to other modules, like the communication module, for further processing. This slide shows LTB Web, which is a web-based visualization. For power system researchers, the visualization tool helps us test our research code and identify problems, or even motivates us to find new research ideas. Later on, I will show a few short videos which will give you a better idea of the visualization tool. We also developed several system models and various wind scenarios. We started with NREL's (National Renewable Energy Laboratory's) wind speed data for future wind farm locations and wind dynamic modeling guidelines. Eventually, we built 50% wind penetration in WECC, 50% in ERCOT, and 50% in EI, as well as 30% solar penetration on top of that, as the future scenarios. Certainly, we built many other scenarios in between, from 0% to 80% renewable penetration.
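The delay-noise-loss pipeline of LTB-PMU can be sketched as a small measurement channel. This is a minimal illustration under assumed numbers (3-sample delay, 0.005 p.u. noise, 1% loss); the function names and parameters are hypothetical, not the LTB-PMU API.

```python
import random
from collections import deque

def make_pmu_channel(delay_steps=3, noise_std=0.005, loss_prob=0.01, seed=0):
    """Turn ideal simulator output into non-ideal 'PMU' data:
    fixed latency, additive Gaussian noise, and occasional dropped samples."""
    rng = random.Random(seed)
    buffer = deque([None] * delay_steps)  # models the communication delay

    def measure(ideal_value):
        buffer.append(ideal_value)
        delayed = buffer.popleft()
        if delayed is None or rng.random() < loss_prob:
            return None  # sample lost, or not yet arrived
        return delayed + rng.gauss(0.0, noise_std)

    return measure

measure = make_pmu_channel()
stream = [measure(60.0) for _ in range(100)]  # ideal 60 Hz frequency readings
received = [s for s in stream if s is not None]
print(len(received), round(sum(received) / len(received), 3))
```

Downstream modules then consume `stream` exactly as they would consume real PMU data, including having to cope with the `None` gaps.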
All three systems are integrated together to form what we call the thousand-bus CURENT North American system. We also use MISO (Midcontinent ISO, formerly Midwest ISO) study results for the multi-terminal HVDC topology, as shown in the figure here. Next, I want to talk about the data streaming part of the LTB. We basically have two approaches. The first is DiME. DiME stands for Distributed Messaging Environment, for passing data between asynchronous, heterogeneous modules. It is a point-to-point data streaming approach for faster prototyping. For example, if one module wants to send something to another module, say LTB-PMU wants to send something to the visualization, this can be done with the DiME module. We also have LTBnet. In LTBnet, we employ a tool called Mininet to develop the communication network details for the LTB; it is based on standard IP-based streaming for a detailed communication network. Next, I want to discuss DiME a little. It is a Python-based transparent streaming server, and it supports unlimited MATLAB, Python, or C++ clients. Developers can import a DiME API and then gain streaming capability. In this diagram, we show LTB Web on the left, OpenPDC on the right, and the other modules, such as ANDES, here. As long as you follow the streaming protocol, the streaming server will help deliver data from one module to another. LTBnet is the emulation of the communication network. In the last few years, we completed a software-defined communication network emulation, which is now integrated in the loop of the LTB. From the viewpoint of module dependence, we have a four-layer architecture, with the physical power system layer at the bottom, and then the measurement layer, the communication layer, and the application layer with EMS functions or control functions. For the communication topology, we constructed a nationwide communication network.
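The point-to-point send/receive pattern of a DiME-style server can be sketched with an in-process stand-in. This is not the real DiME (which runs over sockets and serves MATLAB, Python, and C++ clients); `MiniDime` and its methods are invented here purely to show the named-endpoint, named-variable delivery model.

```python
class MiniDime:
    """In-process stand-in for a DiME-like streaming server."""
    def __init__(self):
        self.mailboxes = {}

    def register(self, name):
        """A module joins the server under a unique endpoint name."""
        self.mailboxes[name] = []

    def send_var(self, recipient, var_name, value):
        """Deliver one named variable to one module (point-to-point)."""
        self.mailboxes[recipient].append((var_name, value))

    def sync(self, name):
        """A module picks up everything queued for it since its last sync."""
        msgs, self.mailboxes[name] = self.mailboxes[name], []
        return dict(msgs)

server = MiniDime()
for module in ("ANDES", "LTB_PMU", "Visualization"):
    server.register(module)

# The simulator streams output to the PMU module ...
server.send_var("LTB_PMU", "bus_freq", [59.98, 60.01])
# ... which forwards processed measurements to the visualization.
data = server.sync("LTB_PMU")
server.send_var("Visualization", "bus_freq_meas", data["bus_freq"])
print(server.sync("Visualization"))  # {'bus_freq_meas': [59.98, 60.01]}
```

Because every module only talks to the server by name, modules written in different languages can be swapped in and out without touching each other, which is the interoperability goal mentioned earlier.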
I want to mention that there's really no publicly available model of the nationwide backbone communication system, so we used this ACM report here to develop our US communication network model and used it to emulate the communication system in the CURENT grid. Now we can bring all the pieces together to form the data flow diagram of the LTB. We start with the large-scale system model and high-penetration scenarios. Both are fed to ANDES, and the dynamic simulation output is then fed to LTB-PMU through the DiME streaming server. At LTB-PMU, we create realistic, non-ideal measurement data from the ideal data, basically by embedding noise, communication delays, or missing signals. What we get is realistic measurement data, and that data is forwarded to different modules, such as OpenPDC for storing the data, the LTB visualization for visualizing the system, and the new control algorithm, which is basically the researcher's own work. In this new control algorithm, based on the research, some parameters may be modified, and those modified parameters are sent back through the DiME server to ANDES. ANDES then makes the adjustment in its dynamic simulation, just as in reality a control signal is received and actions are taken, which makes the system perform differently. The new, updated dynamic simulation results are sent to LTB-PMU, processed to generate non-ideal, realistic measurement data, and then we repeat this loop. Everything happens within this loop, and it happens in real time. That's why we call this a closed-loop simulation. This is really the fundamental difference between this platform and a typical simulation tool: a typical simulation tool is open loop, but here it is closed loop. Also, I want to mention that this new control algorithm is based on the researcher's own results.
We basically provide this platform as an environment for researchers so they can focus on their research work rather than dealing with all the simulations and mock measurement creation. Now let's play a quick demo to illustrate what the LTB can show you, with the LTB visualization as the front end. The left side shows the uncontrolled case and the right side shows the controlled case for WECC system damping control with wind generators. This case study is a good example showing that the LTB platform can demonstrate the advantage of new control algorithms; it also helps researchers identify and view the simulation results. After some disturbance, there is a system oscillation, and the controlled case brings the system back to steady state a lot quicker than the uncontrolled case. We also have a plotting window to show the time-domain curves, where we can see that for the controlled case the dynamics die out a lot quicker. That's why we call this a research demonstration platform. The actual algorithm for this research can be found in the paper below. I also want to mention that it helps researchers view and identify problems during research. Without this front-end visualization, it is very difficult for someone to find out what's going on, or what's going wrong, among the enormously large volume of data at hundreds of buses at a data rate of 20 or 30 snapshots per second. Next, I want to show another quick video, of a project where a deliberate cyber attack may cause the system to separate into two islands unnecessarily. The blinking dots are the PMUs with malicious bad data injection. The right diagram, this small window here, shows the maliciously injected data causing the operator to think the system frequency has deviated past the threshold, triggering a special protection scheme of separation.
This means the system is actually fine, but because of the false data injection attack, the operator thinks something is wrong and starts the system separation, which is not necessary. All right, let's move on. The third and last short video demonstrates the impact of communication delay on control. The left part shows the case with no delay, and the right part shows a 300-millisecond delay, for example under a denial-of-service attack. Let me play the video. After some disturbance happens, the algorithm that worked under the no-delay assumption no longer works, as you see here. So without delay, when the communication is ideal, it works, but with a 300-millisecond delay, the algorithm doesn't work. This is an example of what this platform can show, and how it can help researchers in their research work. Here's a summary of the LTB achievements. We've finished a number of modules, including the ANDES simulation engine, the DiME server, the LTB-PMU module, a number of CURENT test systems, LTBnet for communication system emulation, and the LTB visualization. We also finished a number of research integrations for demonstration, and I'm pleased to report that the CURENT LTB won a 2020 R&D 100 Award last year. Here I want to briefly mention the LTB as a driver for research: in the platform development, researchers are the main developers, so we have research-driven development, and in turn the platform enhances the understanding of the models, methods, and systems. We also have large-scale systems and scenarios, which are used in research cases and inspire research works that use data to corroborate our models. Finally, I want to summarize the differences between traditional simulation studies and the LTB-based approach. In terms of prototyping, the traditional approach is more open loop, based on MATLAB or Simulink; the LTB approach can integrate many different programming languages. Right now we have MATLAB, Python, and C++ for different modules.
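Why a 300 ms delay can break a previously working controller can be seen in a toy model. This sketch is not the actual WECC damping-control study: it is a generic second-order oscillator (about 1 Hz, roughly the frequency range of power oscillations) with damping feedback on the speed signal, and all gains and numbers are illustrative assumptions.

```python
from collections import deque

def simulate(delay_steps, gain=0.8, d=0.05, w2=39.48, steps=3000, dt=0.01):
    """Oscillator x'' = -w2*x - d*x' + u, with u = -gain * (delayed speed)."""
    x, v = 1.0, 0.0                       # initial disturbance
    history = deque([0.0] * (delay_steps + 1))
    for _ in range(steps):
        history.append(v)
        u = -gain * history.popleft()     # control acts on a delayed measurement
        a = -w2 * x - d * v + u
        v += dt * a                       # semi-implicit Euler integration
        x += dt * v
    return abs(x)                         # residual amplitude after 30 s

no_delay = simulate(delay_steps=0)        # essentially instant feedback
delayed = simulate(delay_steps=30)        # 300 ms at a 10 ms time step
print(no_delay, delayed)
```

With instant feedback the added damping kills the oscillation; with the same gain delayed by 300 ms the feedback arrives out of phase, pumps energy into the mode, and the oscillation grows instead of decaying, which is qualitatively what the video shows.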
For the data interface, we have online data streaming between heterogeneous modules, as I mentioned previously, possibly developed in different programming languages, while the traditional approach is usually offline, with no automated interface; you have to do it manually. A communication network is typically not present in traditional simulators, while we have the built-in LTBnet. For closed-loop and systematic testing, the traditional approach is usually a manual simulator-controller loop, whereas in the LTB we have a real-time closed-loop testing environment and ready-to-use modules. This slide shows our virtual control room at the CURENT research center, located on the first floor of the Min Kao Building in the Department of EECS, University of Tennessee. Here are some ongoing users of the LTB. We had five software maintenance and support agreements signed in the last academic year, with users from universities and national labs. The LTB also played a critical role in funding, such as in six externally funded projects totaling 4.5 million dollars; it is being used in one of LLNL's projects, and it also supports a recent NSF CAREER project by one of our faculty members, Dr. Pulgar, at UT. Here's the roadmap for the LTB. As you may see, we have made consistent progress towards the Gen-3 goal (that's the NSF terminology). Since this is our year 10, we're basically here. You can see this progressive effort: we started several years ago from 20 percent renewable penetration, to 50 percent, to 80 percent in the scenario development, and in terms of LTB platform functionality, we developed a lot of functions over time. Now the LTB is quite a mature tool, and right now it's in maintenance mode. As for future directions, certainly we need some minor enhancements, such as enhancing the load models for large-scale systems.
Right now we have the ZIP and motor load models, and we may want to enhance them with some demand response models. We also want to refine the visualization for various use cases and improve the computational efficiency. Some possible larger enhancements may include integration with our hardware testbed, which is a hardware-based test platform; linking with economic and market tools; and linking with distribution system simulation models to make a co-simulation environment. We are seeking external grants, larger grants, to develop major modules like the bottom three shown here. The LTB is available on GitHub at this location; it is open source and well documented, and the majority of this content can be found in this paper published in IEEE Power and Energy Magazine. We also have several related technical papers listed here. In the acknowledgements, we want to thank our funding support from the US NSF and DOE, as well as our project's main team members and our industrial members shown here. That's the presentation; now we're open for Q&A.

Thank you for the presentation, Professor Li. I think there is one question in the Q&A: how is the false data injected? I think you're talking about a cyber attack demonstration; how realistic is this attack? Oh, okay. Actually, in this research it is assumed that the false data injection attack is already there, so we didn't really discuss how it can be implemented. Because this is really a platform we presented, the platform can do such things; it's up to the researchers in cyber security to figure out the details of false data injection attacks. But I want to mention that cyber attacks did happen in several grids. The most well-known one is the Ukrainian system collapse, which was very likely a cyber attack, and it also happened in the Brazilian grid. So it is getting more realistic, and it is happening. All right, I think we can look at this question in a different way: how did the researchers simulate the data? Is it based on some realistic data they obtained? Oh yes. For example, an attacker can hijack data from previous events, like the one shown here. If the attacker somehow obtains the system variables from a previous event where you really had to separate the system, and then injects that data, the operator may think something bad is happening and separate the system, although you don't really need to do that. Or things could be even worse: the attacker hijacks the data stream and injects very misleading bad data, making the operator think something really severe is happening, so they may trip loads or trip generators. I see, okay. A second question: could you please clarify how you validated the generation of synthetic data? Okay, I'm not 100% sure about the question, so I guess you mean how we verify that the results are good. We basically verified and benchmarked the dynamic simulation tool against several commercial tools like PSS/E and TSAT, and some of the test systems are well benchmarked against real system operation. You mentioned at the end of your presentation there are currently five users; so let's say Stanford wants to be a user, what kind of procedure is there? Okay, it's actually open source, so you can just go there, download it, and follow the open-source agreement. Those five users basically want to get support, so the agreements are really maintenance and support agreements. Is there real data involved, you know, archived in the system? What do you mean by real data? Real data from actual grid operation. No, not really, because many times that data is very sensitive. But we do have the North American system, and different versions of it, which we benchmarked as much as we can against commercial tools, and for some typical, well-known events where we know the consequences, we perform the study and make sure it produces results aligned with what happened in the real system. That's probably the way for this industry. I see. Any questions from the panelists? Liang, do you have any questions? Oh, there is one more: can we get some economic application? Economic application, so I assume you mean economic dispatch or some economic study for the power grid. If that's the case, I can talk a little bit about our ongoing project: we are developing economic dispatch at up to a five-minute interval, and we have done a preliminary study of the economic dispatch, so we can show the dispatch results in the visualization platform. But in the demonstration we can't let everybody wait five minutes to see the updated results, so we have a sped-up display function: in the demonstration, we update the five-minute dispatch results every two or three seconds, so the users will see a continuously changing control map every two or three seconds. That's being used in one of the ongoing projects, and we should have that module integrated, hopefully, by this time next year. Thank you. If there are no more questions, let's thank the speaker for the wonderful presentation. Thank you, Chen Wu and Vahila, and thank you all. Okay, have a nice day. Thank you, bye-bye.