So, good morning, good afternoon, everybody. There are obviously some people remote, which is why I say both. I'm Eloi Bail, and welcome to the LF Energy Embedded Summit. I will be the MC today, so that's why I'm here. I hope we're going to have a lot of interesting talks on the energy transition, on EV charging, on how we manage substations, and so on. I want, of course, to thank the whole committee of the LF Energy Embedded Summit. And I think we're going to start with Aurélien Watare from RTE, whom I invite to join me now. Thank you.

So, welcome. Hello. As I already said, I'm Eloi Bail from Savoir-faire Linux, and this is Aurélien Watare from RTE. Today we're going to present how we ensure requirements on SEAPATH, an LF Energy project, and in particular how we test and do continuous integration in the project.

Hello, everybody. First, just a few words about RTE. RTE is the French TSO, which means our job is to bring power from generation up to our clients, and we operate in France.

Some words about Savoir-faire Linux. We are a French and Canadian company, in business for almost 25 years now. We are experts in open source technologies, and we do product engineering in different areas: medical, robotics, avionics, and of course energy. We are a general member of LF Energy, a member of the Yocto Project, and of course a member of the Linux Foundation.

Aurélien and I actually met each other at the Embedded Linux Conference in Lyon in 2019. Aurélien had a project at RTE to use virtualization for substations, and we had really worked on that in other areas like avionics. We started a proof of concept together, did more and more work, and in the end we contributed that to the SEAPATH project of LF Energy. So let me give you a bit of context on why we are doing this.
As you all know, renewable energy sources, wind generation and solar generation, are growing, and as a TSO our goal is to integrate those new sources into the network. We do that through our substations. However, the substations were not made for that: they were made to integrate nuclear or other big generation sources, so they are built for stability and longevity. With renewable energy sources we need more and more flexibility, so we need to think about new solutions that can help us integrate this production more easily. And that's where virtualization arrives.

In the old systems, the hardware and the software are bound together, so it's not very flexible. As was done in the telecom sector, we wanted to split the two, to have the hardware on one side and the applications on the other. We want to mix traditional control-command systems with the new use cases that we have, and for that we need virtualization. However, it's not just any kind of virtualization: it's virtualization for real-time, critical applications. We need to ensure that the applications hosted on SEAPATH can run in good conditions. For me, it's an IT/OT convergence project: we want to benefit from what was learned in the IT world, but at the same time we need to keep good performance for the OT applications that are critical for the system.

So what is SEAPATH? SEAPATH is an integration project; we do not reinvent the wheel. Basically, we integrate what already exists: KVM, Open vSwitch, Ceph, real-time Linux, Pacemaker. And since, as I said, these are critical applications that cannot be stopped, SEAPATH lets you build a cluster for high availability. SEAPATH is made to run virtual machines that host those critical applications. One question you could have is: why do we use virtual machines, and not containers, for example?
Because there is a transition. Today, the different vendors will provide you with their own applications, in their own customized virtual machines with their own customized real-time kernels, and they can use different kernel versions. So using containers was not, at least for this first step, a good way to ensure performance and stability.

So what do we need to build the SEAPATH virtualization? We talked about that: real-time performance; high availability; of course, time synchronization, because the data these applications consume is a stream of time-stamped data, so we need to be sure the time is broadcast everywhere; good network performance; and, since these are critical installations, cybersecurity. In the end, with SEAPATH, you get a Linux distribution plus all the configuration tools to make it the way you want. And the most important thing for our sector: the tests. Basically, SEAPATH is about testing, testing, testing, to be sure that you can install this kind of new device in your substation.

Thank you. So, as Aurélien already explained, SEAPATH is about Linux distributions, and actually we have two of them, each with pros and cons. The first version of SEAPATH was based on Yocto. I have personally been doing Yocto for years, and it was very suitable because it allows you to create your own Linux distribution; regarding all the performance and cybersecurity requirements we need to address, it was definitely a good choice, because you can customize every piece of software the way you want and achieve the performance you want. Something else that is quite important, regarding the regulations, is that it provides clear information for the software bill of materials, as SPDX. The drawback of Yocto is of course that it's not easy to play with.
You are creating and compiling everything from scratch. That's also the reason we made a Debian version, which follows more of an IT philosophy: you don't compile anything, you just use pre-compiled packages. And we actually managed to get performance results with Debian, with pre-compiled packages, similar to what we have with Yocto. The philosophy there is more infrastructure as code. We maintain both versions; some customers use Debian more, others use Yocto. But we can talk about that later.

As I already said, I think it's quite important to first focus a bit on what a Linux distribution is. In any case, to me, a Linux distribution is a set of packages split between user space and kernel space. And for each package you do the same thing, in Debian, in Yocto, whatever: you fetch the source code, because you are going to compile it; you apply the patches, which in Debian is up to the maintainer, while in Yocto we can also do it ourselves; you configure it, you compile it with your configuration, and you install it. In the end that gives you artifacts: binaries, libraries, and configuration files. The sum of all that is your Linux distribution. So the philosophy is a bit different: in Yocto, everything can be modified; in Debian, it's delegated to the maintainer. For cybersecurity customizations, there are some steps you can do yourself and some you cannot interact with, because it's up to the maintainer.

Now, regarding the architecture of SEAPATH itself: as explained, we are targeting critical applications in virtual machines, and SEAPATH has a cluster mode. The goal is to ensure that we can keep running virtual machines even if there is a failure of, for instance, a hypervisor. Just imagine one hypervisor shutting down.
An electricity failure can happen too, and you want to make sure the VM can be run on the other hypervisor. To do that, we have a cluster architecture and, again not reinventing the wheel, we use open source technologies: Corosync, Pacemaker, and a quorum device, technologies developed by Red Hat and SUSE. They set up a heartbeat between machines, so you know when one shuts down and you can move the applications to the other hypervisor. For that you also need an observer, which ensures that you have quorum in your cluster.

For the virtualization itself, we use the de facto virtualization technologies in Linux, which are KVM and QEMU, and because we need to match real-time capabilities, we use real-time Linux, of course. We also use Ceph, which is developed by Red Hat as well. Ceph provides networked, distributed data storage. This is really efficient for us because it allows us to sync the data between hypervisors, so that if a VM shuts down on hypervisor one, you can restart it on hypervisor two with exactly the same data context.

Once we have the Linux distribution, we need to configure it, and in both SEAPATH flavors we use Ansible a lot. As I explained before, with the Yocto project we can do some steps ourselves, so most of the cybersecurity hardening is done at build time. On Debian, we use pre-compiled packages, so we do the cybersecurity hardening with Ansible at runtime. We also have a complex network to configure, which is why we use Ansible to do the network and cluster configuration at runtime, for instance to set up the substation network and make all the connections between the hypervisors; for both flavors we use Ansible. A short word about Ansible, in case you don't know it.
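The quorum rule described above (two hypervisors plus an observer, with resources allowed to run only while a strict majority of voting members is reachable) can be sketched in a few lines. This is an illustration of the idea only, not Corosync or Pacemaker code:

```python
# Illustrative sketch, not Corosync/Pacemaker code: a cluster has quorum
# only when a strict majority of its voting members is reachable.
def has_quorum(total_votes: int, votes_present: int) -> bool:
    """Strict majority: floor(total/2) + 1 votes are required."""
    return votes_present >= total_votes // 2 + 1

# Two hypervisors plus one observer give three votes: losing one
# hypervisor still leaves quorum, so its VMs can restart elsewhere,
# but a single surviving node must not start resources on its own.
print(has_quorum(3, 2))  # True
print(has_quorum(3, 1))  # False
```

This is also why the observer matters: with only two voters, losing either one drops the cluster below a majority, and no VM could safely be restarted.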
Ansible is also a technology developed by Red Hat. I'm not sponsored by Red Hat, they just make good open source software. It's a very simple SSH-based technology that lets you remotely configure a server, a VM, whatever. It's also very efficient because you can repeat the actions and make sure that all the configuration has been applied correctly in production.

So, something Aurélien mentioned a lot: for us, for critical applications, it's very important to have an efficient testing process. And as an open source project, of course, we need to provide this to the community, so that we ensure everything coming into the project actually fulfills all the critical requirements we have.

A bit of insight into SEAPATH: we have been an LF Energy early-adoption project since April 2023, which imposes a set of requirements, especially on testing itself. We have the OpenSSF Best Practices silver badge, which also pushes a lot on testing. We have about 20 repositories, around 2,000 commits, and about 700 tests.

So what do we test? We have different categories of tests. We do system-level testing, to ensure that the overall system works well. We run a lot of tests regarding security, because we are running critical applications. And we need, of course, to ensure that we can run applications dealing with electricity protocols, to keep it short, and that we match the required latency; so we have application testing.

What can we test? I think that also matters, depending on the distribution we're dealing with. On Yocto, because we do everything ourselves and compile everything from source, we can interact with all the steps: we can check that we can fetch, patch, configure, build, and install any package, and of course we can check the artifacts. On Debian it's a bit different: we use pre-compiled packages.
Most of the build process is done by Debian itself, and it's up to the Debian maintainer to check that each package builds correctly. What we can actually test is the configuration itself, because most of the time we can override it, and we can also check that the binaries and libraries of each package are there and correctly compiled. But we cannot modify them. So let's talk a bit about the testing implementation. Thank you very much.

As Eloi said, there are several parts that need to be tested. The first is the whole platform, and then, what is interesting for us in the end, the integration of the full system. We need to be sure that the platform is safe, and then that when we put all the applications together on it, everything works well and is stable. For the first part we use an open source framework called Cukinia, a firmware validation framework, which allows us to launch tests each time there is a pull request; we'll go into detail on that later. And of course we use the usual tools, like SonarQube and ansible-lint for Ansible.

Let's take a look at the CI workflow. As we said before, there are two parts to the tests: the platform on one side, and the applications and integration on the other. This CI is mostly about the platform. We have a laboratory with real hardware, and each time there is a pull request on GitHub we redeploy the Linux distribution on the cluster. Once it's redeployed, we use Ansible to redo the whole configuration. So each time there is a pull request, we flash the machines and start from scratch, and then we deploy and launch all the tests. As Eloi said before, there are today around 700 tests, so the coverage is not perfect, but the idea is that each time there is a new use case we add it to this bank of tests, and the system gets more and more reliable.
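The pull-request cycle just described (reflash, reconfigure with Ansible, run the test suites, publish a report) can be sketched as a simple fail-fast pipeline. The stage names below are made up for illustration; this is not the actual SEAPATH CI code:

```python
# Illustrative sketch of the CI described above (stage names are made up,
# this is not the SEAPATH CI): run stages in order, stop at the first
# failure, and keep a human-readable record of what happened.
def run_pipeline(stages) -> list[str]:
    report = []
    for name, step in stages:
        ok = step()
        report.append(f"{name}: {'PASS' if ok else 'FAIL'}")
        if not ok:
            break  # a failed flash or configuration makes later stages pointless
    return report

stages = [
    ("flash", lambda: True),      # redeploy the distribution from scratch
    ("configure", lambda: True),  # reapply the whole Ansible configuration
    ("test", lambda: True),       # launch the Cukinia test suites
    ("publish", lambda: True),    # push the report to GitHub
]
print(run_pipeline(stages))
```

Starting every run from a freshly flashed machine is what makes the ~20-minute cycle reproducible: no state survives from the previous pull request.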
At the end we generate a test report, and this report is accessible on GitHub. The whole cycle takes around 20 minutes. Just to give you a quick view of the test report: the idea is to have it human readable, and it's split into different parts, so you will find the tests regarding high availability, the tests regarding security, the tests regarding the network, and so on. Eloi, over to you.

So let's talk about the cybersecurity tests. Again, it's important to understand that, regarding cybersecurity, there are some steps you can interact with for hardening and some that you cannot change. On Yocto, you have the whole set of build steps. For instance, if you have a CVE (Common Vulnerabilities and Exposures entry), quite often you need to apply a patch. In Yocto those patches can actually be applied: you have a way to list all the CVEs in your distribution and make sure you can apply the upstream patches. You can change the configuration, for instance the way a package is compiled, to reduce the number of features but also to add compilation hardening. You can build, you can install, and at the end you can verify your configuration. On Debian, it's up to the maintainer to do the build process, and of course they do it for generic purposes: they make a default version of the package with a default hardening policy, and so on. That may not be exactly what you expect, and that's a discussion. As a user, you can only interact with the configuration, to make sure that the configuration you use follows your cybersecurity policy.

There is a big discussion right now about the regulations, from the US and also in Europe, regarding the software bill of materials (SBOM), supply-chain security, and so on. It's an open question; I think the two models achieve this policy differently.
Some people tend to say that with Yocto, for instance, because you build everything from source code, you can precisely check the SBOM and generate a solid SBOM document, while in Debian you delegate that to the maintainers, and I think there is an open question with the Debian community about how we can get SBOM information and make sure it's reliable. Actually, we proposed a talk with my colleague Mathieu Dupré for the Open Source Summit in Bilbao in September, and it was accepted two days ago, so we're going to talk about that in Bilbao.

Regarding cybersecurity, you may know that, depending on your country, you have different national cybersecurity offices, and critical infrastructure operators like RTE, and all the others, are very tied to those agencies. In France the agency is called ANSSI, and this French cybersecurity office publishes a Linux recommendation document that lists a strong set of requirements we need to follow. You have similar documents in the US, and in the Czech Republic, I believe. Since the goal of SEAPATH is of course to be an international project, we designed generic SEAPATH test IDs, and we build compliance matrices so that we can state, for instance, that requirement 11 of the ANSSI document is implemented because this test, this test, and this test pass. That is something we can, by design, provide for other countries as well, and we think it's very important to streamline this process in the CI, so that we ensure at all times that the cybersecurity requirements are fulfilled.

Let's talk about the application tests. OK, so we spoke about the tests for the platform; in the end, what we need is to integrate applications and to be sure that those applications run well. Here I will just focus on the energy sector; it could be different in another sector. What we use in the energy sector is the IEC 61850 standard.
Basically, it's the standard that describes the data model used to exchange data between your applications, and also the protocols you use. To be sure that our platform can safely host this kind of application, we need to validate at least two things. The first is a cyclic test, to ensure that for the critical applications, basically our protections, which protect the network, the cycle is respected. The other, one of the most important things, is to be sure that the network performance and the latency are good. To give you an idea, the applications exchange data and get a new data set roughly every 200 microseconds. We cannot lose packets, and the latency cannot exceed, I would say, 500 microseconds, because beyond that the data is no longer relevant.

So the test we have today is that we deploy a VM; we have a packet sender in our VM tools, and this packet sender sends packets to the VM, so we can check that the stream is OK, to put it simply. We also have a cyclic test inside the VM. What we achieved is that we are now confident the performance is sufficient for this kind of application.

Once we had ensured that, we deployed real applications provided by our vendors, who normally sell hardware and software together. They were kind enough to provide us with virtual machines: what we call virtual IEDs. An IED is an intelligent electronic device, the kind of device we used to have in substations, and they provided us with virtualized versions. We put those on the SEAPATH cluster, and in our laboratory, which we actually want to open to others who want to run tests, we have what we call hardware in the loop: basically a big simulator that can simulate the network. You see the picture with the tree: we can simulate a fault.
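The network acceptance criterion mentioned above (a new data set roughly every 200 microseconds, no packet loss, latency below roughly 500 microseconds) boils down to a simple check. The logic below is only an illustration of that idea, not the SEAPATH test code:

```python
# Illustrative sketch (not SEAPATH test code): a stream is acceptable only
# if every packet sent was received and none exceeded the latency budget.
def stream_ok(latencies_us, sent, max_latency_us=500.0):
    received = len(latencies_us)
    no_loss = received == sent
    on_time = all(lat <= max_latency_us for lat in latencies_us)
    return no_loss and on_time

print(stream_ok([120.0, 180.0, 240.0], sent=3))  # True
print(stream_ok([120.0, 700.0, 240.0], sent=3))  # False: one packet too late
print(stream_ok([120.0, 180.0], sent=3))         # False: one packet lost
```

Note that a late packet and a lost packet are equally disqualifying: data older than the latency budget is no longer usable by the protection.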
For example, in this case, a tree touches the line and we have to cut the power. For the virtual IED, it is just like a real environment, in real time; to give you an idea, the hardware-in-the-loop cycle is around 10 microseconds. On the right you see what we did: we generated a fault, and at the top you see the trip of the protection. So we validated that when there is a fault, the virtual IED behaves well: it detects the fault, it trips, and the tripping time is good. To go to production with a protection, instead of playing just one type of fault, you need to play dozens of types of faults, and then run that continuously for a couple of months and see whether there is any jitter or whether you drop anything.

So, what's next? The first thing is moving to production. Right now we are deploying SEAPATH in one of our substations, and it will go into production by October. Because it's a critical system, we decided for now to host everything in SEAPATH except the protections: the protections are still physical devices, but the automation, the gateway, everything that is critical but not as critical as the protections, runs inside SEAPATH.

The second point is that, as a TSO, our goal is to push the market, not to provide a virtualization platform ourselves. That's why we are discussing with third parties who will build a support offer on top of this open source project, because that's what you need as a client or as a vendor.

The third part is enhancing the factory acceptance tests. Like I said, everybody is working on these technologies, but to bring them in you need confidence for critical applications. So our goal, with our laboratory, is to work towards a kind of certification, and to discuss with people to gather all the tests that give you the confidence to install this for critical applications.
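The qualification idea described above, replaying many fault types continuously for months and checking trip times and jitter, reduces to a check like the following. The trip-time data and limits here are hypothetical, for illustration only:

```python
# Illustrative sketch (hypothetical limits, not RTE's real criteria): a
# protection qualifies only if every replayed fault produced a trip, all
# trip times stayed within the limit, and the spread (jitter) is bounded.
def qualifies(trip_times_ms, limit_ms, max_jitter_ms):
    if not trip_times_ms:          # a missed trip disqualifies outright
        return False
    within_limit = max(trip_times_ms) <= limit_ms
    jitter = max(trip_times_ms) - min(trip_times_ms)
    return within_limit and jitter <= max_jitter_ms

runs = [18.2, 18.9, 18.5, 19.1]    # trip times over repeated fault replays
print(qualifies(runs, limit_ms=20.0, max_jitter_ms=2.0))  # True
print(qualifies(runs, limit_ms=18.0, max_jitter_ms=2.0))  # False: too slow
```

Bounding the jitter, not just the worst case, is what long continuous runs are for: a protection that trips fast on average but occasionally drifts would still fail qualification.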
And for that, of course, we need more use cases, more and more testing, and so on. Yeah, thank you.

A last point that is quite important to me: as I already explained, the regulations and SBOM and supply-chain support. It's definitely one of the hot topics right now, and I'm sure you have already heard about it. We are an open source project, so we want to follow the rules: we want to make sure we use the right technologies and interact with the right standards to provide SBOM and supply-chain support.

So, thank you. This was a kind of short introduction to SEAPATH. You're going to have a more technical and practical demonstration this afternoon at two o'clock with Florent from RTE and Mathieu, my colleague from Savoir-faire Linux. There is also another talk, not mentioned here, at the end of the day, by Angéron and Mathieu, about how we actually implement the cybersecurity tests in both Debian and Yocto; it's quite interesting to see what we can do and what we cannot do.

SEAPATH is an open source project with a governance structure. And, it's quite important to mention, it's not only RTE and Savoir-faire Linux: we have a lot of people involved in the project. We have GE, we have Schneider Electric coming as well, we have Alliander, who are here. That's the power of open source: having people around. We have a website with the description of the project, we have the wiki, and of course we have the GitHub, and that's where all the power is: in the code, in the issues, and in the discussions you can have. We also respond quite actively on the Slack channel, so feel free to come, don't be shy.
And what is interesting to me is that SEAPATH is an application project for the energy sector, which clearly matters, but it's also very much an IT project, with a lot of technologies around Linux, networking, real time, and so on. So it's quite interesting to work on. That's all; we have time for questions now. Don't be shy, thank you.

Any questions? I'm just curious about how you perform the tests: if you have many HIL rigs, how have you set it up? Is it just one, or do you use LAVA or something?

I know LAVA; obviously, we are not using LAVA. It depends: on Debian we use LVM to roll back the systems, and on Yocto we use SWUpdate to deploy and roll back to the previous version. Most of the test orchestration is done with Ansible: we trigger Cukinia through Ansible to run the tests, we collect the artifacts, and then we use GitHub to display all the test results. Is that clear? Thank you.

Thank you for the nice talk. I have one question. There also exists an IEC 61850 model for the infrastructure itself, so normally you can also model the infrastructure. Is this also planned or not?

You're talking about the applications, or...?

No, the integration platform. If I run virtual server infrastructure, you can also have an IEC 61850 model for the server or for the switch, for example.

OK, yeah, I understand. I would say it depends why. I hadn't thought about that, but internally we have a big project for a new digital substation, and of course the question was: what do you model in IEC 61850? We found that, regarding the IT part, we keep it to the minimum, because we prefer to use existing technologies like SNMP, to have two different pipelines, and not to reinvent the wheel. Because what we were doing was: OK, I'm going to model that in IEC 61850. Oh, but it already exists in IT.
So it's always about providing the operator with the right information without reinventing the wheel. It's a trade-off. I think we have time for one last question. No? OK, that's fine, we are on time. Thank you, everybody.

So, I'm the MC as well, so I will move on to my next job. The next talk is at 9:50: Kai, from the company Pionix, will discuss open hardware and software for EV chargers, a very exciting talk. We have a short break, and then we will welcome Kai. Thank you.