This discussion is important and timely. You've heard about the status of the CRA and the concerns that we have, and what we've tried to achieve with this panel today is to put together a really broad representation of industry and community actors from very different perspectives, and you will hear that in the introductions. The idea that I would like to communicate to you is that the concerns we are raising about the CRA are not the concern of one small constituency. They are concerns shared by a broad coalition in the industry and in the community, and so far I can say I have not heard anybody really disagree with the concerns we're raising. So with that, I'll start from the left: please could you introduce yourself and also say how your work relates to the...

I'm Cher, I'm a director and fellow of the Python Software Foundation. We have about 400,000 Python packages on our Python Package Index, so it will deeply affect us if anything happens.

Hi, I'm Justin Colomino. I'm on the board of OSI, I work at Microsoft, and I work in policy at GitHub, and GitHub obviously also has many...

I'm on a supply chain operations team, and my team is specifically focused on auditing for compliance and doing security analysis as well as security engineering work. So I'm very interested in the CRA and how it might impact how we secure our supply chain and understand compliance better as a global company.

Thank you, Justin. Hi, I'm Phil Rob. I work for Ericsson Software Technology. Ericsson is a Swedish manufacturer of lots of different types of telco hardware and software. I represent the open source arm of Ericsson and the active work we do upstream. Many if not all of the products that Ericsson sells have some level of reliance on either Linux or the Kubernetes ecosystem in CNCF, as well as many other projects. So what ends up happening with the CRA is very impactful to Ericsson and our customers, and quite frankly to the vast majority of companies inside Europe. Thank you.

So what I will do now is give a very brief overview of the status of the CRA and what we're thinking about it, so that everybody's on the same page. This is really just to set the scene for the panel discussion. The Cyber Resilience Act has three policy objectives that I think we all share, and this is one of the key points: nobody disagrees with what the Cyber Resilience Act is trying to achieve. We would like to reduce vulnerabilities in digital products, ensure cybersecurity is maintained throughout the product's lifecycle, and enable users to make informed decisions when they're choosing products. It's a horizontal regulation, which means it regulates access to the European Union market, and that means it doesn't matter whether your business is located inside or outside the EU: if you're offering products, or "making digital products available" as the CRA puts it, it affects you. The key provisions of the CRA are that everybody who places digital products on the EU market will be responsible for the obligations around reporting and compliance. This includes fixing discovered vulnerabilities, providing software updates, and auditing and certifying the products. And one interesting quirk of the CRA, which really has us worried, is that these responsibilities will be borne by those who make software available, which in the open source case means the developers who make open source releases, because you are making software available when you do that. Who is affected by the CRA?
Here we have to give a disclaimer: this is based on the draft from September 2022, which is still the draft that's officially being discussed, and amendments have been filed by the Council and the European Parliament, so this might change. But as it stands now, individual developers who contribute to open source projects directly as individuals are not affected if they do this in a non-commercial setting. Occasional donations are fine, so a tip jar is okay, but being regularly sponsored, for example through recurring payments, can already constitute commercial activity. Not-for-profit organizations developing open source are not excluded from the CRA, so even open source foundations, even if they're charities, are obliged to fulfill the requirements. And private companies releasing products are covered, no exception. One particular issue is that the CRA, in the text of the articles, does not distinguish between open source and proprietary software. It mentions open source in an exception at the beginning, in a recital, which says that in order not to inhibit scientific research, basically, non-commercial open source development should not be covered by the CRA. But this is such a qualified exclusion that it doesn't cover 90% of the activity of what you all are doing, because we assume that most developers contribute in their day job, and that means commercial activity.

The obligations under the CRA are staggered based on risk, which is also something we would fully support (the tiers are summarized in the short sketch below). Non-critical products are assumed to make up 90% of the market, and for those it's enough to provide a self-assessment of the product's compliance. An example here would be a smart home lamp: something that may fail, but that won't pose a security risk to the rest of the world if it does. Then there are critical products, and they are separated into low-risk and high-risk critical products. A low-risk critical product could be a web browser. Here you can either provide a standards-based assessment or an external assessment by a third party; the problem with the standards-based assessment is that the standards still need to be developed. High-risk critical products require mandatory third-party assessments. Now, here it gets interesting, because key open source foundational technologies like operating systems, hypervisors or container runtimes are explicitly listed as high-risk critical products. One caveat again: there are amendments proposed to fix this already, but they need to be adopted in the trilogue. And it's clear that for not-for-profit organizations the required third-party assessment would be a major burden.

On vulnerability handling: everybody covered by the CRA will be required to perform a cybersecurity risk assessment and ensure that the product is delivered without any known vulnerabilities. You will hear about that later. A reasonable request is security by default: secure settings should be the default, data processing should be minimized, and security updates need to be provided. You need to be able to fix vulnerabilities without delay, perform regular tests and security reviews, disclose exploited vulnerabilities, and provide vulnerability patches to users. Reporting is supposed to go to ENISA, the European Union Agency for Cybersecurity, within 24 hours in case of active exploitation.
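To keep that tiering straight, here is a minimal sketch in Python. It is purely illustrative, based on our reading of the September 2022 draft; the class labels and the helper function are our own, not terms defined in the regulation.

```python
# Purely illustrative sketch of the draft CRA risk tiers as described above;
# the labels and this helper are our own, not terms from the regulation.

def assessment_route(risk_class: str) -> str:
    """Map a draft-CRA product risk class to the conformity assessment route."""
    routes = {
        # Roughly 90% of products: the manufacturer self-assesses conformity.
        "non-critical": "self-assessment (e.g. a smart home lamp)",
        # e.g. a web browser: assessment against harmonised standards
        # (which still need to be written) or assessment by a third party.
        "critical-low-risk": "standards-based or third-party assessment",
        # e.g. operating systems, hypervisors, container runtimes:
        # mandatory third-party assessment.
        "critical-high-risk": "mandatory third-party assessment",
    }
    if risk_class not in routes:
        raise ValueError(f"unknown risk class: {risk_class!r}")
    return routes[risk_class]


if __name__ == "__main__":
    for cls in ("non-critical", "critical-low-risk", "critical-high-risk"):
        print(f"{cls}: {assessment_route(cls)}")
```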
There are documentation obligations: documentation must include a description of the design, development and vulnerability handling processes, which in the case of open source may be difficult, because you might not control the design process. Our design is sometimes trial and error, or release early, release often; the outcome is good, but the process might be chaotic. You need to assess cybersecurity risk and state which security standards, which don't yet exist, the product meets. You need to provide a signed declaration of conformity and, in most cases, a software bill of materials (a minimal sketch of what such a bill of materials can look like follows at the end of this overview). I'm making jokes about the standards not existing, but there will be a 36-month implementation period, and in that time we assume that the standards will be there.

So what does this all mean? Everything I gave you until now was mostly facts; I tried to just describe the CRA, with a few snarky jokes on the side. Many stakeholder concerns have been raised. Recital 10, one of the forward sections, a recital before the real articles of the law begin, contains this exclusion of non-commercial open source development, and we really believe it is not hitting the mark; there are probably a thousand different proposed amendments that try to fix it. However, even the product lifecycle requirements, the record keeping requirements and the vulnerability disclosure requirements raise concerns. The lifecycle requirements say that for the minimum lifecycle of a product, it needs to be maintained and secure. Now, the open source development process mainly says: we release an update to the software, that's the new stable version, and we ditch all the other ones. We don't maintain them for five years. Record keeping is difficult, maybe, if you're a small community that's not even incorporated. On the disclosure requirements, there are concerns like: what if we're required to report a vulnerability when no fix is available yet? We have 24 hours. This is also a matter of trust: do we trust ENISA to handle this information responsibly? I think that can be discussed.

But the most important concern that everybody has is that there's really no distinction between open source development and bringing products to the market. We all, I think, here in this room, have an understanding that there's a bidirectional development process in software. We contribute upstream, that's one direction, and from there we build products for consumers, that's the other direction. These two processes follow different rules, and the CRA is basically treating them the same. There are two underlying misconceptions, we think. One is that the developers who know the code best and are best suited to fix these vulnerabilities are located upstream, where we would say that they contribute upstream but are employed downstream, in the companies. So the upstream communities are not in the best position to fix issues. And then of course that open source foundations are well funded by big tech. So how do we think this can be fixed? We think that the obligations need to be aligned with the commercial activity along the supply chain, which excludes not-for-profit open source communities, and that the entity that places the product on the market must bear the responsibilities under the CRA.
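Since the CRA asks for a software bill of materials in most cases, here is a minimal, purely illustrative sketch of what a CycloneDX-style SBOM document can look like. The component names, versions and licenses are examples chosen for illustration, and in practice an SBOM would normally be generated by tooling rather than written by hand.

```python
# Minimal, hand-written illustration of a CycloneDX-style SBOM document.
# Component names/versions are examples only; real SBOMs come from tooling.
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "requests",                    # example dependency
            "version": "2.31.0",
            "purl": "pkg:pypi/requests@2.31.0",     # package URL identifier
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        },
        {
            "type": "library",
            "name": "cryptography",                 # example dependency
            "version": "41.0.3",
            "purl": "pkg:pypi/cryptography@41.0.3",
            "licenses": [{"expression": "Apache-2.0 OR BSD-3-Clause"}],
        },
    ],
}

print(json.dumps(sbom, indent=2))
```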
Where do we stand with this? The red arrow points to the trilogues; that's the next phase. Trilogue means a three-way negotiation, if you will, between the European Parliament, the Council, and the Commission, and that's supposed to produce the final text of the CRA. Then it goes to the plenary eventually. What are we doing about this at LF Europe? We're doing this: panel discussions. We've published blog posts. We work with the CRA task force that OpenForum Europe coordinates to submit really well thought out proposals to Brussels for fixing the CRA. And we have a hashtag, #fixtheCRA, on Twitter, which we would like you all to support and share, and we will continue to do that. Here's the link to our campaign; I'll give you a second to take pictures if you like. So thank you. This was just an overview. We have a couple of questions prepared for our panelists, and if we have time at the end we may open the floor to questions from you. If there's no time at the end, then I really encourage you to pester the panelists afterwards at lunch. Now, to the panel discussion. Thank you, panelists, for joining. Let's start in your own domains: how do you think the CRA will improve cybersecurity, and where do you expect problems?

So first off, this whole goal is great; we're very happy about this from a kernel point of view. One thing it will help with is that we know companies sit on known security bugs for months, if not years. So forcing people who have devices with reported issues to tell the upstream community about them in a timely manner is great. I would love to see that. That being said, the rules behind it, like 24 hours to report and all this other stuff, are pretty crazy, because it doesn't work that way. Sometimes bugs take months to fix. And the biggest issue is that with open source software, we don't dictate use. We provide source code, and then you decide how you're going to use it and implement it in your product. So for me, for the Linux kernel: Linux is in a keyboard. It's also in a satellite. It's in a wind turbine. It's in a router. I don't know what your use case is, so I don't know whether a bug fix fixes a vulnerability for you or not. Because of that, the Linux kernel security team's goal is to fix all bugs, push them out, and go on from there. We don't determine whether something is a vulnerability or not; that's up to you as a device manufacturer or implementer to determine. So pushing this to force me to know what your use case is is going to be just an impossible task. It's not going to work out at all. So some of it is good, some of it is hard.

Yeah, so for the CRA, I think the main problem is that it will put the liability onto the wrong person or organization, because, as I assume all of you already know, a lot of open source projects are actually maintained by a few individuals who may not be maintaining the project full time. That's why I think all these very strict restrictions can cause a problem. Of course, the Python ecosystem and Python community are doing the right thing, and thanks to OpenSSF and Alpha-Omega we can now hire some people to help with Python itself and PyPI.
But still, if you want the whole community and all the maintainers to be responsible, I think at this stage it's not a good thing to make them the ones who are liable.

I'd echo some of the previous comments about this being a good thing in terms of "let's improve security everywhere", right? That's critical to all of our work as technologists. That said, from the perspective of a platform, I think there are two main issues. One is: where do you put the liability? Do you put it on people who are writing code and putting it out for anybody to use and morph into whatever product they're making, or do you put it on the people who are making the actual products and making money off those products? I think the liability should be put on the product. And the second point is: if you have platforms like PyPI, npm, GitHub, NuGet, when those platforms are distributing software, should they be the ones verifying its security, or should it be the people who are taking that software, at no cost, and building it into their products? And again, I think it needs to be on the assembly and deployment of the software rather than on the development.

I also agree with the positives of looking at the end-user perspective, like making sure that customers are using secure products. One thing that jumped out when I read it was specifically the secure configurations by default, because misconfigurations are a nightmare to try to keep track of, and having secure configurations by default would solve a lot of problems. I also like that the CRA has been a catalyst for opening these discussions: open source discussions between foundations and commercial software producers, as well as bringing the public sector to the table to talk about getting the regulatory bar to a standard that meets everybody's needs, both on the producer side and the consumer side. That's a positive thing. I also identified a lot of need for education and training on what the open source ecosystem is and all the various roles that hand off to each other along the supply chain, from the producer all the way to the consumer, because there's a lot in between that is not considered; the public sector might have a limited view of that. And there are definitely some expected problems from a security standpoint; that's my background, and I focus on the vulnerability disclosure piece. If a vulnerability is disclosed while it's being actively exploited, that takes away from the scarce and valuable resources a company has to dedicate to incident response. There are contractual obligations as well; I don't know how it would impact those. And what happens to that information if it gets compromised or leaked prematurely, because it's reported to, I think you said, ENISA? It might not be on purpose, usually it's not, but those kinds of things haven't been considered, and I think they're definitely worth considering. And the second problem I noticed was the ambiguity and inconsistency in how they define roles and responsibilities. I'll maybe make this a little interactive: raise your hand if you have ever heard a software developer say to a security engineer, "I don't do security. That's your problem. I make the company money and you're just a cost center.
So I'm going to go focus on my functionality and you focus on the security." I saw one hand, but I think maybe there are more. So that's also a problem, because when they use terms like "developer", how is that being defined? If you want the developer to do cybersecurity risk assessments, what's the point if you're asking them to treat all vulnerabilities as actively exploited vulnerabilities to be reported, regardless of criticality or the risk they pose? Those are just a few points. I've got more, but I'll pass it on.

Yes, long conversation. I will make the panel unanimous in that I think there's lots of wonderful intent in this regulation. I think they've tried, in the time that they had and with the understanding that they've had, to do the best they can with regard to improving security. I think it starts with Log4Shell, the Log4j vulnerability; that kind of woke the world up both to the fact that those types of vulnerabilities can exist and, which is more important for me, to how ubiquitous open source is these days in our products. Across the industry, you hear numbers anywhere from 80 to 90 percent of any given set of lines of code in a given product actually being open source. When you think about Linux as the base operating system and all of the cloud native work that we're doing, I can certainly believe that holds relatively true in my industry for how we use this software. So improving things is important. Given the number of open source projects and how incredibly varied they are in how they are instituted and governed and in the number of people responsible for them, having some baseline is really important. So the intent behind this regulation is, again, very important. But it does come back to this: those that are actually receiving the benefit of income and revenue from the products where they're leveraging that open source should be the ones responsible for it, and certainly Ericsson is ready to take that responsibility with our products. Where this gets difficult is, again, either the lack of clarity in the terms or, in some cases, the outright assertion that the best people to fix this are the folks upstream, so they're the ones that should be responsible. And you've heard a variety of reasons why that doesn't really work, because the purpose and the place where this software is deployed really matter a lot for how exploitable such a vulnerability would be, and because many of the folks doing this work aren't paid to do it, and putting yet another expectation on them is pretty much going to be a non-starter. As we've gone through this cycle, we're now literally down to the 11th hour: the wording that's going to go into this is fundamentally in play in this trilogue phase, and it's pretty much a rubber-stamp vote once that wording is settled by the trilogue process. And if wording comes out where the open source ecosystem, or the companies that participate in that ecosystem, feel they are no longer able to continue with that activity because the lawyers tell them there's just too much risk, then we have a problem with what that consequence looks like. So for me, this panel really is the time for a call to arms; we're at the 11th hour here. Thank you.
I would actually like to dig a bit deeper into this issue of how the open source community and the companies involved would adapt to the CRA if it becomes law as currently drafted. For example, how would it influence the way people contribute upstream? How would it influence the ways people consume open source software? And how would it interact with established practices for managing cybersecurity?

Great. Well, the first thing is that people would stop contributing upstream. It's that simple. 80 percent of Linux contributors, by number of developers, for the past 20 years, have been paid to do this work. So any company that has any influence or any standing in Europe would just stop contributing, or Linux would just not be able to be used in Europe. It's one of those things: if you can't use this stuff in your products, you just won't use it. So no open source would be available even for them to use. Then you lose all the contributors from the European Union as well, because we have a huge contingent of contributors there. It would just stop development.

Yeah, I think overall we already don't have enough maintainers and contributors, and, for example, in Python a lot of users are not engineers. Even if they want to contribute, will they be able to fulfill the security requirements they would need to meet? That presents a challenge for us. And companies may be discouraged too: for example, for a library used by the data science team, do they want to be responsible for it if they contribute upstream? Will they become responsible for it? That may actually discourage contributions internally. Before, if someone contributed a fix for a bug they found, maybe that was okay, and now maybe there will be an internal memo saying please don't do that, because the legal team advises against it.

Right, and I think the goal was laudable: if we put the liability on upstream, then more people will contribute upstream. I think that was the idea the legislators probably had. But what's really going to happen, and I completely agree, is that the legal department is going to say: if we go upstream, that's risky. Instead, let's keep our own house in order, and whenever we ship something out, that's where we need to put all our energy into compliance. Going upstream is much more risky, because all of a sudden we're liable if we're contributing upstream and it's found to be a commercial activity. So instead of saying "let's go upstream", I think you're going to see a cut-off at the product level, the deployment level, which is where companies are going to focus, rather than at the development level, upstream in the open source project.
I'd say it definitely blurs the lines, especially for Red Hat, since we have a long-standing approach of working collaboratively in the upstream. We have our upstream-first development model, where we partner with customers, open source communities and our partners to fix bugs and patch security vulnerabilities in the upstream. So I can see that it definitely blurs the lines. We contribute to and source from over a million different projects as a multinational, so how do you make the distinction between an EU contribution and one from someone who's not in the EU? I think it will lessen contributions from EU countries. It will also lessen resiliency, because it won't be so easy to push fixes upstream without knowing what the implications of that are.

And for me, it has always been the case that, because open source comes with an "as is" clause across all of the different license types, it has really been left up to those that use the software how they're going to support it. That can range across the spectrum from "as is" to a full life-cycle, guaranteed SLA and so forth that, for example, Ericsson puts on the products where we're using open source. So that's been variable, and I can certainly understand an attempt to put regulation on it, to say there's a baseline, these are the things you need to do; that attempt makes sense to me. I also think that when you do that, the natural evolution, with companies put under that set of requirements, is that you'll naturally start to have more and better activity going upstream. Even without any regulation we have the OpenSSF, we have a variety of activities with SLSA and all of the different projects that have been going on to improve security upstream, so that those creating commercial products and services based on this software find it easier to consume more secure software for incorporation in their products. So we have that natural evolution, and I expect it would continue further with regulation focused on those that are making money from these products. But what scares me is when Greg, who I've known for more than 20 years and find to be a tremendously pragmatic and practical guy, as well as Justin, and the lawyers like him who spend their careers understanding open source obligations and requirements and fulfilling them, tell me that if this passes with the wording that's there, the likelihood is what you saw on Gabriel's slide yesterday morning: either it's geofenced, blocked from an IP address standpoint, so you can't get to this software at the Linux Foundation or wherever because you happen to be trying to access it from Europe, or there's a banner across the top of the repo that says "not fit for use in Europe". What does that do to anybody in this room who is working in software? What does that do to your software supply chain? That's where we are, and that, I find, is something we have to react to.

Yeah, thank you. This would also put us nicely on the list of countries excluded from downloads, like North Korea.
One aspect of the question I asked was how the CRA interacts with established practices for managing cybersecurity, and you covered the first part, how it would impact contributions. But could we look a little bit into how compatible, or how challenging, it is for the current ways of managing vulnerabilities? Maybe Greg or Laura could go into this a bit.

Well, the EU wants to try to duplicate what the US does with its centralized vulnerability reporting database. I've given many talks on how that does not work, so it will be interesting; hopefully they won't make the same mistakes the US has made there. That being said, there needs to be some form of assurance that devices are secure and safe over time, and this is a great thing, a very good step forward: requiring updates, requiring devices to be able to be updated. That's a huge step forward, and something many of us have been pushing for for many, many years. So those are good things, and if they are implemented and specified properly they will go a huge way toward making the EU safer and devices safer and more secure. If they implement this in a way that works properly at the product level, I'm all for it, and I think it would help immensely and be a good shining beacon for other parts of the world to emulate. That would be wonderful to see.

I was just thinking from an internal perspective, for security analysts and engineers who are working feverishly to manage the growing number of industry standards and government regulations across the board, trying to keep up with that while also implementing good security practices and coming up with solutions all at the same time. It becomes challenging to know whether these things duplicate standards we're already working on or have already attested to. I wouldn't say it's just a little disruptive: every time a quote-unquote new piece of legislation passes, we have to stop what we're doing, pump the brakes on any critical initiatives and ask, okay, how does this affect us? Is it based on the same core principles we're already focused on? Then do a gap analysis and possibly attest in a different format. How does this disrupt the flow of what we're currently doing? So I can see a huge benefit in a more unified international set of standards, because at the core a lot of these things are hoping to achieve the same goals, and a more unified approach would be helpful for all of our sanity and from a security standpoint.

I have one more question for our panelists, and then we're probably already reaching the end of the discussion, so we won't get to open Q&A afterwards, sorry about that. Panelists, are there topics in the CRA where you think the results, once it is implemented, will not match the intention, which we also support? Where will the CRA misfire, basically?

The easiest one to point at is that all the closed source companies are happy with this, let's put it that way. All the lobbying, and they're very happy about it, because it cuts open source out of everything. I just want a level playing field. Don't make it open versus closed. Let us compete on a level playing field; they're cutting us out so that we can't even compete.

Yeah, I think with the CRA the intention is good, but the implementation right now is not that good.
The problem is that, of course, we want more secure software for everybody, but you can't just use the stick on people, you also have to have the carrot. Give people more support, give them the environment; it takes time for all the good practices to be adopted as the default, as a standard, by all the maintainers and projects. It takes time. You can't just use the CRA to force everybody to comply tomorrow. So yeah, I think we should be concerned about it, and we have to do something about it.

I think open source has really won because we've reduced the barrier to sharing, right? By sharing software, which, with zero marginal cost once it's written, can be sent everywhere, and by reducing that barrier, we've really sparked innovation. And what's happening with the CRA, the unintended consequence, is that it's adding a new barrier to sharing: if you share, you might be liable. And Phil mentioned licensing; I'm a lawyer, so I like to talk about this a little bit. The MIT license, the one nobody knows who wrote: long ago, when the lawyer sat down, they had two things in mind. One was, how do I write a clause that means my folks at MIT can share this with anybody? And second, how do I make sure that when they share it, nobody comes back to them and says, oh my goodness, you shared this with me and I used it and it blew up my system, you need to pay me money? And that's why we have that "as is" text in bold, in all caps. And the CRA, all of a sudden, is imposing liability for sharing. So you need to put the liability not on the development of software but on the deployment, on the people who are actually putting it into use and making money off of it; they should be responsible for it. That's what I worry about with the implementation, across multiple pieces of legislation, but in this one in particular: it really needs to be about deployment, not development.

In the same vein, I'd say that when we're looking at contributions, whether from a security researcher, our good white hats, the ethical hackers who are giving us tips about where our vulnerabilities are, or even the dubious gray hats who are still sharing that information even if they have other plans in mind or are motivated in other ways, and also those who simply want to contribute, whatever the motivation to contribute is, it is going to be replaced by a fear of punitive damages. And it's hard to change that narrative once you go down that road, I think. And also, I know I talked about exploited vulnerabilities and things like that, but shifting over to how we communicate across borders: I think it's important to keep that conversation going and make sure that we do what's right for the consumers and keep the consumers in mind.

I would just echo what's been said before; I think Justin's comments were somewhat of a mic-drop moment. Outside of that, the other unintended consequence, as has also been mentioned: disclosing a vulnerability without a known fix is generally a bad idea, unless the fix is either ridiculously hard or, worse yet, being ignored. And we have disclosure policies that say that, right? You have so much time, and then we're going to disclose.
24 hours is not a realistic time. Okay, thank you. So we're reaching the end of the panel. I will ask you for a little more patience, because I would like to give the panelists one opportunity each to mention something that could be updated or changed in the CRA to make it successful. Maximum one minute each.

Just level the playing field. Don't carve us out; don't put the responsibility on the creators of the software, put it on the person who's actually using it.

Yeah, to put it short: let's make sure that an open source project can still be an open source project and not be punished for being an open source project.

Sorry, apparently I already had my mic drop, but I'll try one more time: no obligations on sharing and development; obligations and regulation on deployment and on making money from the software. That's where the line needs to be drawn.

I would say transparency and clear roles and responsibilities are really important to help manage expectations. All good things aside, when you have those in place, it helps security understand what it is we're trying to do, and that it's not just compliance, because, I've heard this a lot and I love it, compliance does not equal guaranteed security. I'll make that my mic-drop statement.

And in my perfect world, we adjust it so that there isn't any friction for continuing to contribute, the obligations go onto those that are making money, and any and all fines levied on commercial companies that don't comply with the regulation get funneled back into the upstream to do more security work there.

And with that, I would ask you to give the panelists a hand. Thank you.