So, what you've heard so far is the report on research. Next on the agenda is a brief update on the policy interactions that we drive at Linux Foundation Europe. This will, of course, include our favourite topic, the Cyber Resilience Act. So if you're interested in that, please stay, and I promise to be relatively brief so that you don't have to wait too long for that cup of coffee. Let me see, we've got 30 minutes. Okay, so this will be an update on some European policy topics that we're interacting with, and before I even dive into it, I would like to point to the back of the room and say: thank you, Kiran and Open Forum Europe, for the coordination work that you're doing here. It's invaluable. Without you, we couldn't do this. Okay, so first we look at the Cyber Resilience Act. The first part of this presentation will be an overview. Can I ask for a show of hands: who of you has read the whole Cyber Resilience Act and knows exactly what's in it? Okay, a couple of people in the room. I would like to first give an overview of what the Cyber Resilience Act actually is, so that we all know what we're dealing with, and then give an assessment of what we think it means. It's a horizontal regulation, that is, a regulation about the functioning of the EU internal market. Essentially, it means you have to comply with this act to be able to bring products into the European market, and it doesn't matter whether your business is located in Europe: as long as you're offering something to European consumers, you have to comply with this act. It has three policy goals: reduce the vulnerabilities in digital products, ensure cybersecurity is maintained through the product's life cycle, and enable users to make informed decisions when selecting and operating these devices. There are some key provisions in this act that are important to know.
So, as I said, everybody who places a digital product on the EU market is responsible for the reporting and compliance obligations. These responsibilities entail fixing discovered vulnerabilities, providing software updates, and auditing and certifying the products. Now here comes the first part that is maybe a bit surprising: these responsibilities, the three listed in the bullets, are borne by those who develop the software, not necessarily by the downstream integrators. That means an upstream making a component available is already responsible under the Cyber Resilience Act to, for example, audit these products. Who is affected by that? There are three groups, essentially. Individual developers who develop open source in a non-commercial way are excluded. This is, however, a blurry exclusion, because it only applies as long as you occasionally receive funding through donations; as soon as you make a real income out of it, it's basically considered commercial activity. So if you are, for example, a sponsored developer who receives a regular monthly payment, that exclusion does not cover you anymore. It speaks explicitly of donations. If you're a not-for-profit organization or foundation developing open source, you'll likely need to comply with the CRA, because the description the CRA contains of how commerciality is defined basically does not exclude open source foundations. If you're a private company, you definitely have to comply with the CRA. Another important point is that the CRA does not distinguish between open source and closed source software. It speaks of software, and the only mention of open source is this exclusion for non-commercial open source development for the purpose of research and development, so as not to hinder scientific research, meaning development that is not market oriented. The obligations in the CRA are staggered in three groups based on risk. First, there are non-critical products.
The assumption is that 90% of all digital products are in this non-critical group. By the way, digital products means software and hardware. That's another distinction the CRA does not make: it speaks of digital products and includes software, hardware, and combinations of the two. For non-critical products, the expected procedure is self-assessment of compliance with the CRA, so you can basically self-declare. For low-risk critical products, for example web browsers, you can either apply an existing standard or go for a third-party assessment. Now, the problem with the existing standards is that they don't exist. But that's okay, because legislation like this usually comes with a standardization request to the European standards development organizations, which are not known to be super knowledgeable about regulating the software ecosystem. In any case, these standards don't exist yet, so this means you probably have to go for third-party assessment. And then we get to high-risk critical products, where third-party assessment is mandatory. Currently, at least in the current draft, the list of high-risk critical products includes operating systems for servers, desktops and mobile devices, hypervisors, and container runtimes. This is clearly open-source critical software infrastructure; it is not something primarily released by industry. So very clearly, the obligations under the CRA here target large-scale open-source projects. How should vulnerabilities be handled? First, you need to provide a risk assessment. You need to ensure that the product is delivered without any known exploitable vulnerabilities; at the time of shipping, you cannot have any known exploitable vulnerabilities. I'm looking at Greg over there, who will say, "then I can never ship a product." And I'm thinking of the Linux kernel, because we know there are vulnerabilities in there that we cannot fix yet.
Anyway, this is one of the quirks. We need to ship software, so digital products, secure by default, minimize data processing to limit the attack surface, and provide security updates along the life cycle of the product. Many of these things we would clearly support. If you find a vulnerability, you need to fix it without delay, perform regular updates and security reviews, disclose exploitable vulnerabilities, and provide vulnerability patches to users. Now, keep in mind that this all sounds great if you're a business, but as I said already, this can be the responsibility of an upstream foundation releasing a container runtime, and that's just one example. And even for a business, it's vague. As for vulnerability reporting: actively exploited vulnerabilities need to be reported to the European Union Agency for Cybersecurity within 24 hours. So, one day. You have documentation obligations: a description of the design, development and vulnerability handling processes, and an assessment of cybersecurity risks. You need to say which harmonized EU cybersecurity standards the product meets; mind you, they don't exist yet, but that will come. And a signed declaration of conformity that these things have been met, and a software bill of materials documenting vulnerabilities and components in the product. The last bullet is a bit unclear, because it's required for some of the products, but not for all. But let's expect that longer-term, three years from now, software bills of materials will be required. So everything I gave you so far, besides my snarky undertone, was just a summary of what is written in the text. I would now like to give a brief assessment, to try to explain why you should be worried about this, if it's not clear yet. There are many concerns from all sorts of stakeholders.
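[Editor's note for readers of this transcript: a software bill of materials is typically expressed in a machine-readable format such as SPDX or CycloneDX. As a rough illustration only, a minimal CycloneDX-style fragment listing a single component might look like the sketch below; the component name, version, and license are made up for the example and are not taken from the talk.]

```json
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.4",
  "version": 1,
  "components": [
    {
      "type": "library",
      "name": "example-crypto-lib",
      "version": "2.3.1",
      "purl": "pkg:generic/example-crypto-lib@2.3.1",
      "licenses": [ { "license": { "id": "Apache-2.0" } } ]
    }
  ]
}
```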
There is this exclusion of non-commercial open source development from coverage in the CRA, and if there's one topic we've spent hours of group discussions on, on how to improve the wording, it's that. There are product life-cycle requirements. Since they don't distinguish between open source and proprietary software, there could be a requirement that if we make a stable software release, we have to maintain it for, I think the assumed period is, five years. Now, the open source development model is more: we make the next stable version, which automatically deprecates the previous one. So we wouldn't maintain the old version for five years if we have better new software. Record-keeping requirements might be difficult for open source communities, and the vulnerability disclosure requirements throw up a lot of worries. For example, what if we need to disclose something before a fix is ready? Or what if that disclosure gets into the wrong hands, so that the security vulnerability can be exploited even more widely, and we can't fix it? But in the end, for me personally, the key problem with the CRA is that it does not distinguish between developing software as open source and contributing it upstream, on the one hand, and bringing a product into the market commercially along a normal supply chain, on the other. If you want more details on this, they're in the long blog post that I wrote last week, but that is really the key issue. Because of this deviation from well-established norms, upstream open source communities could end up responsible for security vulnerabilities introduced by commercial products downstream, which is weird. There are two essential misconceptions in the CRA that we were not able to clarify with policymakers and lawmakers, even though we tried to explain them. One is the assumption that the developers who know the software best, its vulnerabilities and how to fix them, are sitting in the upstream projects.
And we tried to explain that upstream is where people contribute, but that the developers are sitting downstream somewhere, at all these commercial companies building products. They fix things as they build products and contribute back upstream, but that doesn't mean that upstream has developers on staff. And the other misunderstanding, I would say, is that open source foundations are really large, well-funded fronts for big tech. That might be true in individual cases, but I gave this example of the Linux Foundation: if you divide the budget of the Linux Foundation, which is admittedly not small, by the number of projects we have, we can pay for not even two FTEs per project. So we certainly cannot pay for security staff doing regular code reviews and security reviews on all the projects. So, besides the well-developed amendment proposals that the group organized by Open Forum Europe has developed with us, we are suggesting that two things should happen with the CRA to make it work. One is that the responsibilities and obligations it imposes must be clearly aligned with the structure of the supply chain, which means anybody who brings a product into the market is responsible for that product's attributes, not for less, but also not for more. And you cannot point to upstream and say, "I consumed insecure upstream software, it's their problem," if you introduce a product into the market. That's the first demand. And the second one is that the commercial entity placing a product must bear all the corresponding responsibilities. These are the demands. They're heard in Brussels, but they haven't yet led to changes in the text. Where does it currently stand? The red arrow points to the trilogues. This is the next step.
So trilogues mean that the European Commission, the Parliament and the European Council, the member state representation, get together and hash out the next version of the text in a three-way process. We have briefed all three groups intensively, repeatedly and in detail, and we hope for good results. By the way, the trilogues are supposed to begin late this month or early next month, and then it goes to a plenary vote if there's a successful outcome. What do we at LF Europe do? We continue the ongoing collaboration on the CRA that Open Forum Europe coordinates; again, I can't express enough how grateful I am for that work. We have a call to action on the Linux Foundation Europe web page. It was updated last week, so if you read it three months ago, you can now re-read it. There are two blog posts linked on it: one explains in detail what the CRA contains that open source developers need to know, and the other the potential policy implications. We will continue submitting our opinions and offering our advice through the trilogue phase. And on Thursday this week, as part of the Open Source Leadership Summit, there will be a panel discussion on the CRA where we have representatives from the kernel community, from somebody who provides a development platform, GitHub, and others, to discuss how this is going to work. So, we will continue to talk about this. [Audience question.] I can't elaborate on that much; Kiran could maybe do it better, but essentially, what I know is that the trilogues result in a new version of the text that all three branches agree on, and that will be submitted to a plenary vote. When this happens, I don't know; how exactly the process goes, I don't know. Okay. If there are any questions, there's a coffee break coming up after this; we can meet in the hallway and discuss there.
I would now like to give a brief rundown of the EU AI Act, which is the other one that is raising a lot of interest. The AI Act is a proposal for a regulation of the European market. The idea here is that AI systems made available in the European market are, again, safe to use by consumers and respect European law. It also aims to facilitate investment and innovation in AI, however with strong governmental oversight. And this is a bit of a tightrope walk, because you're trying to make innovation possible while at the same time protecting consumers from potential harm; I think it's a bit of a challenge. The AI Act applies a risk-based approach based on application. So it doesn't look at what the technology is; it asks what it is going to be used for. So the AI Act is very much grounded in the protection of human rights. Limited risk brings transparency obligations, which means you need to disclose details about what data you collect, et cetera; an example would be a chatbot. High-risk applications require an ex-ante conformity assessment, that is, before the system is made available. The more potentially harmful the use, the stronger the obligations get. And then there's unacceptable risk: the use of these systems will simply be restricted and not allowed. Examples: systems that exploit vulnerable groups, or systems that can be used to facilitate the violation of human rights, et cetera. It will not be allowed to offer these systems in the market or to operate them. Enforcement here is done by the member states. The EU will set up an oversight board, the Artificial Intelligence Board. And for high-risk systems, the responsibility for overseeing compliance will sit with the member states of the European Union, and this may require giving access to an oversight authority if you are in the high-risk group.
And this may actually result in corrective measures, which may include fines, but also restrictions of use or the obligation to withdraw the system from the market. So, what are the stakeholder concerns in this case? One is that the definitions are too broad; somebody argued that the way the European Union describes what artificial intelligence is already covers sorting algorithms. Some groups say there is regulatory overreach, that it's too broad. Others, however, say that there is underreach and that it's not covering everything it should cover. So this is a good political debate to have. Civil rights groups call for wider prohibitions, and it's more the business groups that say you're overreaching, pointing to technical flaws, for example that emotion recognition is not very well defined, and so on. So you see that there are plenty of concerns. One is, again similar to the CRA, that by referring to existing standards you are actually delegating regulatory power to standards development organizations, because once they promulgate a standard and people comply with it, it's difficult to argue that maybe that wasn't the right way to do it. And there's a clear lack of individual enforcement, so as a citizen it would be difficult for you to enforce your rights. The legislative process: the trilogues here are also supposed to start in September. There's input from the Council that would like to narrow the AI definition, but there's also a suggestion, and this worries many people in the open source community, to also cover general-purpose and foundational models. Now, general-purpose models are models that are trained generally, for example on image recognition, but don't yet have a specific purpose, and those are the typical playing grounds of research and development, academia, and open source.
You develop something that is useful for a potential class of use cases, but it's not market-ready yet, and with this proposal the AI Act would already cover such systems that are still experimental or in development. So, that was my overview of the AI Act. We are before the trilogues, and these are the responsible people. I picked these two, the CRA and the AI Act, to tell you that in the European Union we're developing strong regulation that will affect the tech sector, and the European Union market represents one third of the global market. The assumption is that all this regulation will have a global effect, similar to the GDPR a couple of years ago. We talked about the CRA, we talked about the AI Act; there's also the Product Liability Directive that will be updated, the regulation on standard-essential patents, and the Data Act, and they all affect what we do. So that's to explain to you why I spend time in Brussels and why we have to deal with all this. We're starting a campaign tomorrow; you can actually get stickers, hopefully, once Susan is back from the printers. They look like this: "Fix the CRA". There's a Twitter hashtag, and if you are an individual developer, this will create a tweet for you where you tag the European Parliament and say, please fix this. You can use this one. There are some responses, and I deliberately picked European industry responses here, not the open source community. So, for example, NLnet Labs says the CRA could create a series of unintended adverse consequences for the security and stability of open source systems. The VDA, the German automotive association, clearly a European industry association not known to be primarily a developer of software source code (they are, but it's not their primary business), interestingly says very clearly that the cybersecurity obligations should apply to the companies that bring FOSS to market and use it commercially, and not to the developers who make FOSS source code available free of charge.
So what we say is very much supported by European industry. Okay, I'll leave it here. Q&A will be in the break over coffee today. Thank you very much. Oh, and here are the links to the blog posts. Thank you.