All right, the microphone is working, great, so we can start. Thanks for coming to this presentation. I'm Eloi Bail from Savoir-faire Linux, and this is Mathieu Dupré, also from Savoir-faire Linux. We are going to present SEAPATH, an LF Energy project used for critical infrastructure, mainly in the energy sector, and how we deal with software supply chain security. A few words about our company, Savoir-faire Linux: we are a Canadian and European company based in Montreal and in Rennes, France. We have been doing industrial products for almost 23 years now, in many different domains: medical devices, robotics, avionics, entertainment, and of course the energy sector. We have been a member of different groups under the Linux Foundation for more than a decade, such as the Yocto Project, and we are also a member of LF Energy. I personally met RTE, the French TSO (to keep it short, they transmit electricity, and they are the largest TSO in Europe) at the Open Source Summit in Lyon in 2019. They were creating the LF Energy group and wanted to start a project about substations, and that is the purpose of SEAPATH. SEAPATH is an open source solution for substations. To explain substations in a few words: a substation is a system that deals with electricity transmission and distribution in the grid. Without it you cannot manage power distribution, and it is critical in the sense that it manages the grid and hosts the protection systems. For instance, imagine a tree falling on a high-voltage line: the substation will detect that very quickly, we are talking about milliseconds, and will reorganize the grid to keep it working. SEAPATH is an open source project, and we actually aggregate tons of existing open source software.
We are not reinventing the wheel, and we use virtualization because we want a multi-vendor approach. The idea is to have different companies' products working together on the same hypervisor, and we believe virtualization can help integrate third-party solutions, which could be Linux systems but also non-Linux systems, for logging, monitoring and so on. Of course, virtualization has some weaknesses regarding performance, real time and network latency, but we ran some analyses and it works: we can achieve the performance expectations, even for protection systems, which are very critical. In terms of cybersecurity, the use of VMs also allows us to clearly separate the software of different vendors, and it makes the overall management much easier.

To say a bit more about the project: I am the TSC chair and Mathieu is the first contributor, but there are other companies involved, like RTE and GE, Schneider is also coming, and there are a lot of different partners involved, which is very nice to see. SEAPATH is a Linux distribution. As I said, we are not reinventing the wheel; we are using open source software that is compatible with the performance and cybersecurity expectations of the energy sector. SEAPATH comes in two flavours: the first one is based on Yocto and the second one on Debian. With Yocto you create your own Linux distribution from the source code and you can customize everything yourself, which is good for the cybersecurity and performance approach. With Debian you follow more of an IT philosophy: you only configure, you do not compile, you use pre-built packages.
Inside SEAPATH, like every Linux Foundation project, there are different phases, and we are one of the early-adoption projects. We have an intensive testing process that matches industrial expectations, we have the OpenSSF Best Practices silver badge, we deal with about 20 repositories and roughly 2,000 commits, and we have two CIs running, one in RTE's offices and the other in Savoir-faire Linux's offices. We run a lot of different tests, around 700. We test on target, meaning that we flash the generated distribution on real hardware and run tests on it for each pull request. We run functional tests to ensure that no systemd service is failing, check that our logging system is running, and check cybersecurity requirements: there are many tests covering permissions, checking that we have a read-only root FS, that the SSH keys have the right permissions, and so on. There is a lot of value there, and it is very sensitive for the energy sector.

We also want to keep SEAPATH secure over time, which means tracking security issues and watching upstream fixes. Even with all these tests, new CVEs keep coming; CVE, Common Vulnerabilities and Exposures, is the standard for that. A well-known example is the Log4j CVE, which was very critical. The idea for us is to make sure we keep up with all the CVEs. How do we do that? On the Yocto side we have a built-in tool, the cve-check class, which connects to a vulnerability database and, for a given package and version, tells us whether there is a CVE and lets us monitor its status: patched, unpatched or ignored. The CVE fix itself can be done by the community, or you can do it yourself: since you are dealing with the source code, you can create or take an upstream patch and apply it.
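The cve-check class can also write its findings as a JSON report that is easy to post-process in CI. Below is a minimal Python sketch of such post-processing; the field names (`package`, `issue`, `id`, `status`) follow the JSON layout recent cve-check versions emit, but treat the exact schema, and the sample data, as assumptions to verify against your build.

```python
from collections import Counter

def summarize_cve_report(report: dict) -> Counter:
    """Count CVE statuses (Patched/Unpatched/Ignored) across all packages.

    Assumes the JSON layout produced by Yocto's cve-check class:
    {"package": [{"name": ..., "issue": [{"id": "CVE-...", "status": ...}]}]}
    """
    counts = Counter()
    for pkg in report.get("package", []):
        for issue in pkg.get("issue", []):
            counts[issue.get("status", "Unknown")] += 1
    return counts

# Hand-written example report in the assumed format:
report = {
    "package": [
        {"name": "openssl", "issue": [
            {"id": "CVE-2022-0778", "status": "Patched"},
            {"id": "CVE-2023-0464", "status": "Unpatched"},
        ]},
        {"name": "busybox", "issue": [
            {"id": "CVE-2021-42374", "status": "Ignored"},
        ]},
    ]
}
print(summarize_cve_report(report))
```

A summary like this is what lets a CI job fail a pull request when unpatched critical CVEs appear.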
The Debian approach is totally different. You check CVE notifications through a mailing list, so it is a manual operation: there is no automatic way to get all the CVEs, and you have a large amount of CVEs to deal with, so it is a bit painful to be honest. You can also check the CVE status via the Debian security tracker. The CVE fix itself is delegated to the Debian maintainers, which can be good, but it also means that if you want to resolve it yourself, because it is critical in your context, you can face variable response times, and you have no contract with the Debian maintainers unless you set up a specific one, so it can be an issue. And of course you cannot just fix it yourself; that would break the whole Debian approach.

So our current implementation is to run cve-check on SEAPATH Yocto at each build, so we can get all the CVEs and their status. But we have some limitations: we do not have an equivalent on Debian, and the reports are not very interoperable. We have to collect the CVEs at each build, there are no notifications, and we cannot manage the CVEs properly. When I say management, I mean getting the status, deciding how to fix the critical ones, and making sure we can mitigate them. So we want a standard way to do that, and the standard way is the SBOM, the Software Bill of Materials, plus a tool to manage all the CVEs per package, to track them and make sure we can mitigate them on our system. This is our expected workflow: we already have a CI running, we can generate SBOMs, then we need a vulnerability tracker; we want to check the upstream vulnerability status and the SEAPATH vulnerability status against vulnerability databases like the NVD, and we want to reduce the vulnerabilities on our systems as much as possible. It is a workflow that will run every day, for as long as possible.
The first thing we had to choose is which SBOM format to use. Basically there are two major standards. The first is SPDX, Software Package Data Exchange. It is a Linux Foundation project, and the 2.2 version of SPDX is an ISO standard. SPDX 2.2 also fits the U.S. SBOM requirements, from the NTIA for instance, and their later revisions. The other main format is CycloneDX, created by the OWASP Foundation. It does not only deal with software: it is a full family of bills of materials, so you can for instance also have a hardware bill of materials.

Here we summarize the advantages and drawbacks of each standard. We also considered the future SPDX 3 version, which is currently in draft. The main advantage of CycloneDX is its full BOM support, not only software. It also provides an easy, built-in way to express vulnerability and exploitability information: for example, whether a CVE affects you, whether you have patched it or mitigated it, you can describe all of that in your SBOM. The drawbacks of CycloneDX are that it is not an ISO standard yet and, the main one for us, that it is not supported by the Yocto Project: you cannot generate CycloneDX with Yocto. For SPDX 2.2 and 2.3, there is a lot of support and a lot of tools that can generate SPDX 2.2 or 2.3 SBOMs, and the Yocto Project can generate SPDX 2.2. As with CycloneDX, it is possible to have vulnerability information in the SBOM, but you have to reference the vulnerability as an external reference. The drawback of SPDX 2.2 is that only software bills of materials are supported. The next version, SPDX 3, fixes that and adds other kinds of bills of materials. There is also a profile feature: for instance, if you only need license support, you can implement just the core SBOM profile plus the licensing profile. They also reworked how vulnerability information is expressed and linked inside the SBOM.
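To make the "vulnerability as external reference" point concrete, here is a sketch of an SPDX 2.3-style package entry, built as a Python dict. The `SECURITY` reference category with the `cpe23Type` and `advisory` reference types comes from the SPDX 2.x specification; the package and CVE shown are illustrative only, and a real document would also need creation info, relationships and so on.

```python
# Minimal SPDX 2.3-style package entry showing how security information is
# attached as external references, as discussed above. Sketch only.
package = {
    "SPDXID": "SPDXRef-Package-openssl",
    "name": "openssl",
    "versionInfo": "3.0.8",
    "externalRefs": [
        {   # CPE identifier, so trackers can match CVEs precisely
            "referenceCategory": "SECURITY",
            "referenceType": "cpe23Type",
            "referenceLocator": "cpe:2.3:a:openssl:openssl:3.0.8:*:*:*:*:*:*:*",
        },
        {   # Link to an advisory describing a known vulnerability
            "referenceCategory": "SECURITY",
            "referenceType": "advisory",
            "referenceLocator": "https://nvd.nist.gov/vuln/detail/CVE-2023-0464",
        },
    ],
}

# A consumer can pull the CPEs back out to feed a vulnerability tracker:
cpes = [r["referenceLocator"] for r in package["externalRefs"]
        if r["referenceType"] == "cpe23Type"]
print(cpes)
```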
There are no built-in keywords for that yet. Also, SPDX 3 is not released; it is still in draft, so no tool supports SPDX 3 for the moment. Then again, there are tools we can use to convert SPDX to CycloneDX and CycloneDX to SPDX, but when you use them you can lose some information, which can be an issue; for us it was not.

Now that we have chosen a format, let us see how to generate SBOMs on both SEAPATH flavours, Yocto and Debian. On Yocto it is fully integrated: Yocto can generate an SPDX 2.2 SBOM at each build. There is nothing to install on the target because it is done on the build side. The SBOM is very complete: it contains not only the package SBOMs but also all the build SBOMs, for instance for all the software used in the toolchain. It generates one file per package, which can be an issue because you have to import files one by one into third-party tools, so the best way is to merge all the files into a single SBOM file. One gap we found is that Yocto does not insert the Common Platform Enumeration (CPE) into the SBOM; I will detail why that is an issue later. It does add some vulnerability status, but not in a standard way: it is added in the annotations part, as a comment, so it is unusable with third-party tools.

On the Debian side, there is no standard way to do it, but there are some tools, basically all relying on the package manager, which here is APT. We considered apt2sbom for that; it can generate SPDX 2.3 or CycloneDX SBOMs. The main drawback is that it runs while the target is online, so we have to install something specific on the target: apt2sbom and all its dependencies. And as with Yocto, the result contains no CPE and no vulnerability status, because the tool cannot insert data that is not in the APT database. Here we compare the two generated SBOMs.
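Since Yocto emits one SPDX file per package, a small pre-processing step that merges them into a single document makes imports into third-party tools much easier. A naive merge can be sketched as follows; note that a production merge would also have to rewrite SPDXIDs and document namespaces to avoid collisions, which this sketch deliberately skips, and the document name used here is just a placeholder.

```python
import json
from pathlib import Path

def merge_spdx_files(paths, name="seapath-merged"):
    """Naively merge several SPDX 2.x JSON documents into one.

    Concatenates packages/files/relationships and keeps the first
    document's creationInfo. Sketch only: SPDXID and namespace
    de-duplication is intentionally left out.
    """
    merged = {"spdxVersion": "SPDX-2.3", "SPDXID": "SPDXRef-DOCUMENT",
              "name": name, "packages": [], "files": [], "relationships": []}
    for path in paths:
        doc = json.loads(Path(path).read_text())
        merged.setdefault("creationInfo", doc.get("creationInfo"))
        for key in ("packages", "files", "relationships"):
            merged[key].extend(doc.get(key, []))
    return merged
```

The merged document can then be uploaded to a tracker in one operation instead of hundreds.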
The Debian one is basically an "analyzed" SBOM, because it is produced by apt2sbom after the fact, while the Yocto one is a "build" SBOM, produced right after the build. Both SBOMs contain the creation information, for instance the timestamp, and both contain the package names, versions and suppliers. On Yocto we also have the licenses; that is out of scope for this presentation, but it can be useful too, for instance you can import it into FOSSology. On Debian we have the .deb file download location; we do not have this on Yocto, because it makes no sense there, since we build everything. On both, we lack the CPE. On Debian we have all the files inside the package, but only those. On Yocto we also have all the headers and all the files used to compile, in addition to the files that will be installed. About dependencies: on Debian we only have the dependencies between packages as APT sees them, but on Yocto we also have all the build dependencies, for instance the GCC version and all of GCC's dependencies.

Now let us see how we can use these SBOMs with vulnerability trackers. We considered only open source vulnerability trackers here. Before that, let me introduce how a CVE is found. There are basically two ways. The first is to use the CPE I mentioned: all CVE databases can use this Common Platform Enumeration to identify software in a very accurate way, and when you use a CPE you get no false results. The other way is to do a search based on the package name, the package version and whatever other information you have; the more information you use, the more accurate your results will be. The first vulnerability tracker we analyzed is DaggerBoard. It was initiated by NewYork-Presbyterian. It is an open source project under the MIT license, and a very new one: the first release was in 2022. Here you can see a screenshot of DaggerBoard, with our two SEAPATH versions.
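Once both SBOMs are in SPDX JSON form, comparing their package inventories, as done on the slide, is straightforward. The hypothetical helper below extracts (name, version) pairs and diffs two documents; the sample documents are invented for illustration.

```python
def package_set(spdx_doc: dict) -> set:
    """Extract (name, version) pairs from an SPDX 2.x JSON document."""
    return {(p.get("name"), p.get("versionInfo", ""))
            for p in spdx_doc.get("packages", [])}

def diff_sboms(doc_a: dict, doc_b: dict):
    """Return packages only in A, only in B, and common to both."""
    a, b = package_set(doc_a), package_set(doc_b)
    return a - b, b - a, a & b

# Invented sample documents standing in for the Debian and Yocto SBOMs:
debian = {"packages": [{"name": "vim", "versionInfo": "9.0"},
                       {"name": "openssh", "versionInfo": "9.2"}]}
yocto = {"packages": [{"name": "vim", "versionInfo": "9.0"},
                      {"name": "gcc", "versionInfo": "12.2"}]}
only_debian, only_yocto, common = diff_sboms(debian, yocto)
```

On real data, `only_yocto` would mostly contain build-time dependencies such as the toolchain, which the Debian SBOM does not see.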
Both versions get an F grade, because DaggerBoard reports a lot of false positives. DaggerBoard can also use the CVSS score inside each CVE to compute a grade. The next tool is Dependency-Track. It is an OWASP Foundation project, the same foundation that made CycloneDX. It is also open source, under the Apache 2.0 license, and a more mature project: the first release was in 2013. Here you can see a screenshot of Dependency-Track; it also provides grades and scores, and you can get graphs.

To summarize and compare the two tools: the main advantage of DaggerBoard is that it supports all kinds of SBOMs, both our Yocto SPDX SBOMs and CycloneDX SBOMs, and it can use the name and version to track CVEs, which is very important for us because we do not have CPEs in our SBOMs. The drawback is that we get many false positives. The other main drawback of DaggerBoard, because it is a young project, is that it does not yet implement the vulnerability status: you cannot tell DaggerBoard that you have fixed a CVE, so it will keep reporting it even after it has been fixed. Dependency-Track, on the other hand, implements this feature. It also supports multiple vulnerability sources and external analyzers, and you can import VEX files, a format the community can use to describe vulnerability fixes. The main drawback of Dependency-Track is that it only accepts CycloneDX SBOMs: they removed SPDX support, and there seems to be no willingness to add it back. As for its vulnerability tracking, it only uses CPE and package URL to find the CVEs, so it does not work at all with our Yocto SBOM, and only partially with the one we generate on Debian. So, do we have our expected workflow? Not yet. On the SBOM side, it would be great to have the CPE in it, and also the vulnerability status, in a way that Dependency-Track and other vulnerability tools can use.
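For automation, Dependency-Track exposes a REST API for SBOM upload: a PUT on `/api/v1/bom` carrying a base64-encoded CycloneDX document, authenticated with an `X-Api-Key` header. The sketch below only builds such a payload; the endpoint, field names and the project UUID shown are assumptions to check against the Dependency-Track version you deploy.

```python
import base64
import json

def build_bom_upload(project_uuid: str, cyclonedx_json: dict) -> dict:
    """Build the JSON body Dependency-Track expects on PUT /api/v1/bom.

    The 'bom' field is the base64-encoded CycloneDX document; the HTTP
    request must also carry an 'X-Api-Key' header (not shown here).
    """
    raw = json.dumps(cyclonedx_json).encode()
    return {"project": project_uuid, "bom": base64.b64encode(raw).decode()}

# Minimal CycloneDX document and a placeholder project UUID:
bom = {"bomFormat": "CycloneDX", "specVersion": "1.4", "components": []}
payload = build_bom_upload("11111111-2222-3333-4444-555555555555", bom)

# Sending it would look roughly like (not executed here):
# requests.put(f"{base_url}/api/v1/bom", json=payload,
#              headers={"X-Api-Key": api_key})
```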
The second issue is on the vulnerability tracker side: we have not found an open source tool that can analyze an SBOM without CPEs and report an accurate status, and where we can manage our vulnerability status. But we are optimistic, because SPDX 3 is coming and reworks the vulnerability status, and we know the Yocto Project has completely reworked its SPDX generation for SPDX 3, which will include the vulnerability status and maybe the CPE. On the Debian side, unfortunately, we do not see any change coming. So if you know other tools, have any news, or have any questions, don't hesitate. Thank you. We have time for questions, so please feel free.

As I understand it, SEAPATH has the goal of being a platform for other software. How do you see the users of SEAPATH putting this implementation into practice, getting a sort of total SBOM of not just the SEAPATH software but also the software running on top of it?

Okay, so the other software needs to provide an SBOM as well, and that can be done in several ways. There are companies that provide this kind of SBOM generation, and if you use containers, there are also some interesting tools to generate SBOMs for them.

To add to that, I think the best approach is to use standards all the time. If you want to deal with security, you need to use standards; otherwise it becomes too messy, and there is too much information to handle. That is also the purpose of this presentation: to make sure we are using standards and to see how we can interoperate. That is the main problem we have: individually, we each have a piece of the puzzle, but we do not have everything working together. So for applications, if you follow the open source way of working, there are, as you say, a lot of open source ways to get the status.
SPDX is the standard open source way, and for instance all the LF Energy projects actually deal with SPDX, custom fields and so on, so you should also be able to get all the cybersecurity information from LF Energy projects. That is the way we need to work. Thank you.

I wanted to try to answer your question as well. I think one way you could do it is that SPDX has a notion of external references. So you could generate a new SPDX document which includes a reference to the original document that the open source tooling produced, with your additional material on top; you would augment or enhance the base SPDX document from the upstream tool. The alternative would be this: the Yocto Project is effectively a build tool for custom Linux distributions, so you could just build your custom applications with the existing Yocto tooling and generate your SBOM as part of your build process, which I would argue is the better way to do it. But opinions are free, so take that with a grain of salt.

We have time for a question.

I don't want to be too eager if somebody else wants to ask, but I have another question. I have had a lot of conversations these days in an energy context. With this being critical infrastructure, what components need to be in the SBOMs? You showed the various checks you do to be compliant; do you have insight into how you carry that over into the SBOM? What does the SBOM have to incorporate, and what is less of a requirement and more of a nice-to-have?

I'm not sure I can answer your question exactly, but the problem we had was a lot of false-positive information. If you want to check vulnerabilities, you need to respect some rules: first of all you need to deal with CVEs, but you also need to deal with CPEs to clearly identify your packages.
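The "augmenting document" idea suggested here can be sketched with SPDX 2.x external document references: a new document points at the upstream-generated one via `externalDocumentRefs` and layers additional material on top. The structure below follows the SPDX 2.3 JSON schema, but all names, namespaces and the checksum value are placeholders.

```python
# Sketch of an augmenting SPDX document that references the
# upstream-generated one instead of duplicating it.
augmented = {
    "spdxVersion": "SPDX-2.3",
    "SPDXID": "SPDXRef-DOCUMENT",
    "name": "platform-augmented",                       # placeholder name
    "documentNamespace": "https://example.com/spdx/platform-augmented",
    "externalDocumentRefs": [{
        "externalDocumentId": "DocumentRef-yocto-build",
        "spdxDocument": "https://example.com/spdx/yocto-build",  # placeholder
        "checksum": {"algorithm": "SHA1",
                     "checksumValue": "d6a770ba38583ed4bb4525bd96e50461655d2758"},
    }],
    # Packages for the applications layered on top would be listed here,
    # with relationships pointing into the referenced document.
    "packages": [],
}
ref_id = augmented["externalDocumentRefs"][0]["externalDocumentId"]
```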
Because if you just say, for instance, that you have this Vim version 6.5 in your context, on your embedded device, you will get different vulnerabilities. The purpose of the CPE is to identify it precisely, as written there: not only the package and its version, but the vendor, the product, the language, the target, and with all that you can get the right information. If you have that, you can reliably check your product against CVEs, because CVEs are also filed against specific targets, specific versions and so on, and then you can have a good view of the vulnerability status of your target. Do you want to add something? No? Did I answer your question?

Well, I was looking for, besides CVEs and identifying those, are there additional compliance requirements that you see can be addressed with SBOMs?

The other way we deal with cybersecurity is not related to CVEs. Quite often you have national agencies that give you recommendations. For instance, for RTE, there is ANSSI, and they publish many recommendations to implement, and we implement them in SEAPATH: specific kernel configurations, for instance, and you can also tune your toolchain, the root FS and so on. So it is a different issue, but of course, if your system is wide open, you can have a lot of problems even without many CVEs. It is both of these together that make your system suitable for critical infrastructure.

So my conclusion is that right now they live side by side: the SBOM to know what the components are, and the compliance requirements.

And I just want to add something, because it is a big topic. Why are we also talking about SBOMs? Because of regulation.
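A CPE 2.3 name encodes exactly these fields in a fixed order. The hypothetical helper below builds such a string, defaulting unspecified components to `*` (ANY); it illustrates why two deployments of the same package version can still be distinguished, for instance by target hardware.

```python
def make_cpe23(vendor: str, product: str, version: str,
               part: str = "a", **fields) -> str:
    """Build a CPE 2.3 formatted string.

    The 13 components are: cpe:2.3:part:vendor:product:version:update:
    edition:language:sw_edition:target_sw:target_hw:other.
    Unspecified fields default to '*' (ANY).
    """
    order = ("update", "edition", "language", "sw_edition",
             "target_sw", "target_hw", "other")
    tail = ":".join(fields.get(k, "*") for k in order)
    return f"cpe:2.3:{part}:{vendor}:{product}:{version}:{tail}"

# The same Vim version, generic vs pinned to a specific target hardware:
generic = make_cpe23("vim", "vim", "6.5")
embedded = make_cpe23("vim", "vim", "6.5", target_hw="arm64")
```

A tracker matching on the full CPE rather than just name and version is what eliminates the false positives discussed earlier.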
You may know that the US government, for instance, said that all critical infrastructures must have a full SBOM, and they also mention the hardware BOM itself, to clearly identify the hardware you are running on. The European authorities are also considering this, of course. So it is a must-have for every critical infrastructure project. And it is a big subject because, as I said, you have a lot of information to deal with, so if you want to be able to mitigate it, you need to follow the standards. That is why open source is important, to stay compatible with third parties and so on.

Okay. I wanted to ask you about CPE, because I'm not very familiar with it, but I'm looking at the one on the slide and it says Adobe is the vendor. If I'm looking at an open source distribution, getting Vim from Debian is very different from getting Vim from Red Hat. So is CPE a useful identifier for open source software? Does it handle that kind of ambiguity about what constitutes a vendor?

I'm not sure I can answer your question. On our side we used apt2sbom, but there are also more generic tools; I don't remember the name, but there is one that can deal with APT, with DNF, so Red Hat, and also with package managers like NPM.

Okay, maybe this is not a question we can answer here, but ultimately I'm concerned that CPE is not a useful identifier for open source, and I'm not aware of a better identifier for open source components. So I was wondering what your experience with that was, but it sounds like maybe you're trying to figure that out as well. Is my question making sense, or my train of thought?

I think that's a no. It's okay, thank you.

Okay, thank you. We still have time for a last question. Don't be shy. I'm just curious; here we have already heard about SBOMs, and that's a good start.
Okay, so if you have questions, you can also join the project. We are an open source project, so we are definitely very open for discussion. Cybersecurity is a difficult topic, and I tend to say that nobody has all the answers on it, so it is good to collaborate with each other. You know how LF Energy governance works: it is very open and vendor neutral. So come to our TSC, or check the Slack channel, and don't be shy; we can collaborate with each other and make the world better. Thank you.