Hi, I'm Lucien. I'm with RTE, the power transmission system operator in France. I'm also involved in LF Energy as chair of the board, and at RTE my main role is leading the OSPO, the Open Source Program Office. Today I will do the presentation together with Aurélien. Aurélien, if you want to introduce yourself.

Hi, yes, I'm Aurélien. I'm working at RTE as well, as a project manager in charge of the virtualization of the substation control system.

So the purpose of our presentation today is to talk about a recent project that we are launching under the LF Energy umbrella and that deals with the virtualization of real-time power substation automation. Before stepping into the topic, a few words about RTE. We operate, maintain and develop the French power transmission network. There are about 8,000 people working at RTE. We are not an IT company: most of our people are doing what is shown in the picture, working in the field. However, two years ago RTE decided to reinforce its internal development capabilities, and we also decided to embrace open source as one pillar of our software development strategy. This is motivated by a changing context that we need to adapt to: the energy transition. We have to face the growth of renewable energy sources such as wind generation and photovoltaics. We also have to prepare for new, growing uses such as electric mobility. And we expect more responsiveness from the consumer side to market or scarcity signals, so there will potentially be more smart services from a growing number of third parties scattered over the network. In this context, we have to adapt our grid control architecture, with a multiplication of distributed controls. These automation systems will have to adapt to more diverse situations in a more dynamic manner. Also, data will play an increasing role.
For instance, we are facing an aging infrastructure, and in order to better optimize our asset management we want to rely on more data, applying analytics and new technologies such as AI. In this context, we are thinking about a new generation of grid automation systems, and basically there are four challenges.

The first one relates to innovation: we have to integrate new functions and technologies, most of them IT technologies, in order to fulfill the needs of the future.

Time and cost efficiency will be a key challenge as well. This is due to the fact that renewable energy sources are developing much faster than the conventional coal or gas power plants we had in the past. And of course the bill for the end consumer is very important to us, so we want to keep it low, and we have strong budget constraints.

Another challenge is cross-industry collaboration. In an automation system, we want to integrate products from various vendors. Those products already rely on standards whose purpose is to ensure interoperability. However, we know by experience that standards are not sufficient to ensure plug-and-play interoperability, and that there are significant integration and interoperability costs, which are also time consuming. One way to solve this is to specify our own interpretation of the standards, but that is not a solution either: if each customer does the same, vendors are confronted with a variety of specifications and need to adapt their products, which is not efficient.

And the last challenge is scalability and flexibility. On our grid in the future, there will be more diverse situations between areas: for instance, areas where a lot of wind energy is developing, and other areas where there will be electric mobility. The automation systems will need to adapt to these diverse situations.
At the same time, we still want to benefit from a scale effect, so as not to pay excessive investment and maintenance costs for specialized systems. That's what we mean by an industrialized, tailor-made solution.

To meet these four challenges, we have identified three mainstays. The first one is to build a modular and interoperable architecture based on standards; the relevant standard in our case is IEC 61850. But as I said, this is not sufficient, so we are also turning to open source and virtualization, inspired by the experience and lessons learned from other industries, such as the telecommunications industry, which moved to virtualization and open source some years ago already.

This context led us to the SEAPATH project. SEAPATH stands for Software Enabled Automation Platform and Artifacts. It's a very recent project of LF Energy. Maybe to start with a bit of history of this project: we started in Q3 2019 with a call for collaboration under the LF Energy umbrella. After this call for collaboration, made by RTE, it was decided to form a design team. The aim of this design team was to build a common roadmap for an open source project. Several customers and technology vendors joined this design team, with the principle that there was no commitment to participate in the future project, only to spend some time thinking about what a joint roadmap could be, based on each party's vision and needs. This design team worked between January and June 2020, and it delivered the initial roadmap of an open source project. This work involved, as I mentioned, both end users and technology vendors: Adventek, Alliander, GE, National Grid, RTE and Schneider Electric participated. The resulting roadmap has been released under a Creative Commons Attribution license. After this work the design team stopped, and some parties decided to continue with an open source project.
It was submitted to LF Energy for approval, and I'm pleased to inform you that last week LF Energy's board approved the launch of the SEAPATH project at incubation stage.

The mission of the SEAPATH project is to develop a reference design and an industrial-grade, open source platform that will run real-time virtualized automation and protection applications. By protection applications, we mean applications intended to protect the grid, the assets, but also people, in case of incidents such as short circuits on the grid or lightning striking power lines; these are applications with strong time constraints. By reference design, we mean a platform that can be used by various technology vendors to build their automation applications and to check the fulfillment of requirements, interoperability, et cetera. By industrial-grade, we intend this platform to be a very good basis for commercial products that will operate on real power grids.

The first use case that we foresee is related to the power grid industry. However, we also see potential applications beyond power grids, still related to the power industry: it could be, for instance, power plant automation systems or automation systems on the consumer side.

The project will encompass several activities, starting with the specification of the functional and technical requirements to be fulfilled by the platform, and of test procedures to assess the fulfillment of those requirements. Then comes building the appropriate system architecture, and the aim here is to reuse as much as possible existing technologies from other industries. Of course, there may be a need to develop specific functions for our industry based on our special needs, and then we will also have to define and implement APIs for external applications.
As I mentioned, the scope of the project is the platform, not the automation applications. However, we want to ease the integration of such applications on the platform. And although automation applications will not be in the scope of the project, we foresee the need for some realistic proxies of such applications, to be able to test the fulfillment of the requirements within the community and to facilitate the identification of improvements. Last but not least, the project will also define guidelines and best practices to help technology vendors integrate, test, deploy and maintain systems based on the platform. So I will stop here and leave the floor to Aurélien.

Thank you, Lucien. So now let's move to the technical vision and concepts that we have. First, I would like to start with the state of the art. Let's take a deep dive into an actual digital substation to see where we start from; then it will be easier to understand what we want to do.

So what is a digital substation? It is a power grid substation. This is where all the data from the high-voltage equipment, such as lines and transformers, is collected. It is also the place from which you can control that equipment, and the place where you have all the intelligence, like the protections that protect both the equipment and people. The way it is done today is that for each high-voltage device, such as a transformer or a line, you have a dedicated bay. In this bay you will find a SCADA feature to control it and to get data, as well as protections and automations. Then you have LAN and substation-level features such as administration, SCADA and monitoring. All those data are then sent to the control room, which could be a national or a regional one, from where you can monitor the grid.
If you look at the picture on the right side, you will see that there are several bays and that they take a lot of space. Let's focus on bay equipment. In a bay, you have three types of equipment.

The first one is related to SCADA. In a digital substation, you get voltage and current as inputs, with a sample rate of around 4 kHz. You then convert those measurements to a lower sample rate, like one value every second, and provide that information to the upper level. You also get binary and analog values that tell you the state of the high-voltage equipment; for example, you know whether a circuit breaker is open or closed. The operating time expected from this kind of system, when you send a control such as "open the circuit breaker", is from 10 milliseconds to several seconds depending on the situation.

Secondly, you have automations. Automations basically take the same kind of input, and the output is binary: for example, if I see on a transformer that the load is above a given level, I should act this way.

The most critical ones are protections. As Lucien said before, protections are there to protect the equipment and also people, so they are very critical. These are real-time devices, which means the operating time must be respected. As input, you take voltage and current, which reflect the state of the power grid, and as output you decide whether to trip, that is, to open a circuit breaker to protect people and equipment.

So we've talked about the equipment you will find. The question now is: how does it communicate? Lucien talked about IEC 61850. In the power grid, IEC 61850 is the standard used for two things. First, it specifies the data model: the objects and services through which it is possible to communicate with the equipment. Second, it specifies how those data are communicated.
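As a quick sanity check on the figures above, here is a small Python sketch (purely illustrative, not code from the project) that computes the per-sample time budget implied by a 4 kHz sampled-value stream, together with a deliberately simplified overcurrent trip decision of the kind a protection function makes:

```python
# Per-sample time budget implied by a 4 kHz voltage/current acquisition rate.
SAMPLE_RATE_HZ = 4_000
budget_us = 1_000_000 / SAMPLE_RATE_HZ
print(budget_us)  # 250.0 -> one sample every 250 microseconds

def overcurrent_trip(current_a: float, threshold_a: float) -> bool:
    """Toy protection decision: trip (open the circuit breaker) when the
    measured current exceeds a threshold. Real protection functions are far
    more elaborate (time/current curves, timers, directionality)."""
    return current_a > threshold_a

# A fault current well above threshold should trip; nominal load should not.
assert overcurrent_trip(1_250.0, 1_000.0)
assert not overcurrent_trip(800.0, 1_000.0)
```

The 250 µs figure is what makes protections "real-time": every processing stage, including the virtualization layer discussed later, has to fit inside that budget.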
There are three ways to do it. The first one is client/server using MMS over TCP/IP. Basically, you use this kind of communication for non-critical operations; that's what we call reports. If you want to get information about the equipment, you can use that. The second one is GOOSE. GOOSE messages are faster messages, used for example to open a circuit breaker: if I am a protection and I want to open a circuit breaker, I send a GOOSE, and the transmission rate is higher. The last one is Sampled Values, the most critical to handle, because these are real-time messages: if you lose one, there is no way to retrieve it later. Sampled Values are used to carry information about voltage and current.

So far we have seen how it is done today. Now let's talk about the approach we took in the SEAPATH project design team, as we aim for a more flexible and cost-effective solution based on a common platform. What's the idea? Within the substation, the idea is to keep only one platform that hosts all the features. Instead of having several bays, substation-level devices and so on, you have one common platform where you can deploy apps: monitoring apps, protection apps, control apps. Since you want high availability, you need redundancy, so you could have one, two or three platforms depending on the level of availability you want. What is interesting is that once you have this platform at this level, you can reuse it at other levels. For example, for integration, you can more easily build a digital twin that reflects exactly what you have in the substation and can be used to perform tests, so integration and deployment become easier. Also, sometimes you have functionality in the power grid that is centralized, for example automation that monitors not just one substation but several; you can use the same platform for this as well.
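The three IEC 61850 communication services described above can be summarized as follows. This is a sketch for orientation only: the service names are standard, but the transport notes reflect the common mappings (MMS over TCP/IP per IEC 61850-8-1; GOOSE and Sampled Values as layer-2 Ethernet multicast), and the loss-recovery flags paraphrase what was said in the talk:

```python
# Summary of the three IEC 61850 communication services mentioned above.
IEC61850_SERVICES = {
    "MMS reports": {
        "transport": "client/server over TCP/IP",
        "use": "non-critical monitoring, reports",
        "recoverable_on_loss": True,   # TCP retransmits
    },
    "GOOSE": {
        "transport": "layer-2 Ethernet multicast",
        "use": "fast commands, e.g. tripping a circuit breaker",
        "recoverable_on_loss": True,   # messages are repeated
    },
    "Sampled Values": {
        "transport": "layer-2 Ethernet multicast",
        "use": "streaming voltage/current samples at ~4 kHz",
        "recoverable_on_loss": False,  # a lost sample is gone for good
    },
}

for name, props in IEC61850_SERVICES.items():
    print(f"{name}: {props['use']}")
```

The `recoverable_on_loss` flag is why Sampled Values drive the platform's latency requirements: unlike reports or repeated GOOSE frames, a dropped sample cannot be retrieved later.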
Okay, so let's dive into the platform. What we have in mind is basically this: in green are the applications that you want to host on the platform, which can be developed by vendors, industrial partners, universities and so on. What is interesting is that you can share common services on this platform, such as administration and the deployment of your functionality, and shared services like an IEC 61850 stack: if you don't have an IEC 61850 stack in your application, the platform provides one, and it also takes charge of redundancy, network administration and so on.

Now let's talk about the hardware. The idea is to use non-specific hardware, like a classical x86 architecture. The only thing that is very important is to have a network card that is supported by DPDK. Why is that? Because we realized that to achieve good performance with the network services defined by IEC 61850, where there is a lot of data, we need low latency to ensure that the protections behave correctly. So you need direct access to the network card; that's why we use DPDK.

Now let's take a look at the operating system. It's a critical system, so you want it to be as small as possible, and we found that building it with Yocto was a good way to ensure a minimal attack surface. We use KVM as the hypervisor. For the network, as I said before, DPDK is important to achieve the needed performance, and we use it with OVS, so OVS-DPDK. And since we have real-time applications like protections, we need real-time behavior; that's why we use Linux, but with the PREEMPT_RT patch. For high availability and redundancy, we use Pacemaker, Corosync and Ceph.

So what we've done so far is a proof of concept, to see whether the technological choices we made were a good fit for our needs.
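One small, practical way to verify part of this stack on a given host: kernels built with the real-time (PREEMPT_RT) patch conventionally carry an `-rt` suffix in their release string. The sketch below is a heuristic only (inspecting the kernel config for `CONFIG_PREEMPT_RT` is the reliable check), and is not taken from the project's code:

```python
# Heuristic check for a PREEMPT_RT kernel: RT-patched kernels conventionally
# include "-rt" in their release string, e.g. "5.10.0-rt17".
import platform

def looks_preempt_rt(release: str) -> bool:
    """Return True if the kernel release string looks like a PREEMPT_RT build.
    This is a naming convention, not a guarantee."""
    return "-rt" in release

# Check the kernel of the machine this runs on.
print(looks_preempt_rt(platform.release()))
```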
The first thing we wanted to check is whether this kind of architecture, with this kind of operating system, was good enough for the real-time applications that we have, and we wanted to compare it with a system without the real-time patch. As for the hardware, we used a 14-core Xeon system with 32 GB of RAM. On the right, we ran cyclictest on a non-preemptible kernel, and we see that the maximum latency on the host is 500 microseconds. Earlier we talked about voltage and current acquisition and said that the sample rate was around 4 kHz, so one value every 250 microseconds. If we compare that with the 500 microseconds, we can say directly that it's not good enough.

What about the same case with a preemptible kernel? With the PREEMPT_RT kernel, we achieve a latency of 21 microseconds, which is compliant for the host and our applications. But what we've tested so far is the host, and as I told you before, what we want is to be able to run third-party applications on the platform, possibly in a virtual machine. So basically we ran the same test in a guest: we kept the host I just presented, and on top of it we put a virtual machine; OVS and DPDK are not needed in the guest. The guest is also built with Yocto, and its kernel is patched with PREEMPT_RT. We see that we achieve a latency of 149 microseconds within the guest, which is still compliant with our use case.

Next, for the proof of concept, we decided to test the architecture and the redundancy. As I told you before, we had checked the latency of the host; now we wanted to see whether we were able to ensure the right level of redundancy. So we built a cluster of three hosts. The three hosts have the same Linux distribution based on Yocto, with PREEMPT_RT, OVS, DPDK and so on. Two of them were used to host a virtual machine.
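To make the comparison concrete, here is a small sketch checking each maximum latency quoted above against the 250 µs period of a 4 kHz sampled-value stream (figures taken from the talk; the pass/fail criterion is the simple one used in the reasoning above, not a formal requirement):

```python
# Compare the quoted cyclictest maximum latencies with the 250 us
# sampled-value period (one voltage/current sample every 1/4000 s).
SAMPLE_PERIOD_US = 1_000_000 / 4_000   # 250 us between samples

measured_max_latency_us = {
    "host, non-preemptible kernel": 500,   # exceeds the budget
    "host, PREEMPT_RT kernel": 21,
    "KVM guest, PREEMPT_RT kernel": 149,
}

for setup, latency in measured_max_latency_us.items():
    verdict = "OK" if latency < SAMPLE_PERIOD_US else "too slow"
    print(f"{setup}: {latency} us -> {verdict}")
```

Only the non-RT host fails this check, which is exactly the conclusion drawn in the talk: the PREEMPT_RT patch is what makes both the host and the KVM guest fit within the sample period.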
The third one, the observer that you see, is just used to have a quorum in the cluster. On top of that, the systems communicating with the digital substation shouldn't have to deal with a separate IP address for each virtual machine, the active one and the passive one. So we use Corosync to have a virtual IP and to expose only the IP address of the active virtual machine, and that address remains the same. Secondly, we also have distributed storage based on Ceph, to ensure that in case of failure of either the virtual machine or the host, we can still move to the passive one without losing data. What we've done in this case is very simple so far: we shut down one host and check whether the failover to the other host works, and we shut down one virtual machine and measure how long it takes to switch to the other one. For now, the results are okay for us.

So, for the proof of concept, we've checked that the real-time performance is compliant with what we need, and we've checked that the architecture we foresee is a good guess; it needs to be refined, of course, but the different components seem to be okay. But something is missing. As Lucien said before, how are you going to test the different applications that are hosted in the cluster? We said that we are working with IEC 61850, so we need at least to provide the platform with some tools that will help integration. On top of this platform, we are developing tools based on an IEC 61850 stack and Python, to be able to test different things, such as: is my application, hosted in a virtual machine, able to communicate with the system? These testing tools should also be able to simulate other applications' inputs and outputs in order to run tests.

Thank you for your attention. Lucien, do you want to add something? Yes, thank you for your attention. We have written here the GitHub URL of the SEAPATH project.
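To illustrate the kind of integration-test tooling described above, here is a hypothetical sketch: a simulated source of current measurements drives a stand-in for a protection application hosted on the platform, and the test checks its reaction. Every name in this snippet is invented for illustration; the actual tools are built on a real IEC 61850 stack, not these toy classes:

```python
# Hypothetical integration-test sketch: simulate an application's inputs
# and check the reaction of an application under test. Illustrative only.

class SimulatedMergingUnit:
    """Pretends to publish current measurements to the device under test."""
    def __init__(self, nominal_current_a: float):
        self.current_a = nominal_current_a

    def inject_fault(self, fault_current_a: float):
        """Simulate a short circuit by raising the published current."""
        self.current_a = fault_current_a

class ProtectionUnderTest:
    """Stand-in for a protection app hosted in a VM on the platform."""
    def __init__(self, threshold_a: float):
        self.threshold_a = threshold_a
        self.tripped = False

    def on_sample(self, current_a: float):
        if current_a > self.threshold_a:
            self.tripped = True  # in reality: send a GOOSE trip message

# Scenario: nominal load must not trip, an injected fault must.
mu = SimulatedMergingUnit(nominal_current_a=400.0)
prot = ProtectionUnderTest(threshold_a=1_000.0)
prot.on_sample(mu.current_a)
assert not prot.tripped
mu.inject_fault(5_000.0)
prot.on_sample(mu.current_a)
assert prot.tripped
```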
We are in the process of transferring some code to this project, as the launch is very recent. So if you are interested in learning more, please look at this URL and keep an eye out for the code that will be posted there very soon. Thank you.