Hi, this is Swapnil Bhartia. We are here at Open Source Summit in Bilbao, Spain, and today we have with us Omkhar Arasaratnam, General Manager of OpenSSF. Omkhar, it's great to have you on the show and, first of all, thanks for joining me today. Last time we met was at Open Source Summit in Vancouver, when you had just joined. Things change slowly in the security space, but they also move very fast, so I want to hear, from OpenSSF's perspective: how has the security landscape evolved over these six or seven months?

The general thesis behind security, ensuring that the right security principles and invariants are present, doesn't change. What has changed, especially from the perspective of government engagement within the US, is that there's a lot more focus on collaborating with the community and getting our input on what we believe the right areas of focus are for security. It's our belief that open source is a common good, a public good for everyone, and that the public sector, the private sector, and the community are all stewards of ensuring that open source remains secure. Having the support of the public sector, as well as the private sector through our members, as well as the community, allows us to deliver on this promise of secure open source software. Now, in terms of some of the areas that have been in the news since we spoke last: just last week we convened a meeting in Washington, D.C. with a number of senior leaders from the US public sector, as well as members of the OpenSSF community, to talk about how to secure open source software. We applaud the approach the US government is taking; it's speaking with one voice. The government has issued an RFI around open source security and memory safety.
CISA published a roadmap on open source security. So there are a number of initiatives that indicate to us that the US government understands the importance of its participation and is engaging the community in the right way. Conversely, some of what we've seen here in Europe around the Cyber Resilience Act (CRA) has highlighted the need for the European Commission to engage us in a similar way. Personally, when I look at the CRA, I don't think it was done with malicious intent; I think it was done with good intent. But it highlights the challenge when lawmakers with good intent don't fully understand the nuances of the community, and should really be in consultation with us, or other open source groups, to understand how they can get the outcomes they want without harming the community. And I say this as a wishful hope: when the European Union decides to revise the CRA, we'll be here to help. We'd love to help, and I think that's another lesson learned.

The other area where we've spent considerable time is AI. Quite recently, back in August at Black Hat and DEF CON, we announced our partnership with DARPA to run AIxCC, the AI Cyber Challenge. We believe there's a grand opportunity for artificial intelligence, like large language models, to be applied to solving entire classes of security problems that we see within open source software, and we're really proud that DARPA asked us to help, as the challenge is focused on securing open source software. What we're really excited about is that when the challenge concludes in two years, the solutions will be donated back to the community. We've also started up an AI/machine learning working group within OpenSSF, in partnership with LF AI & Data. So those are the areas where we've seen bursts of activity since we spoke last, Swapnil.

You mentioned the CRA and also the recent meeting at the White House.
The interesting thing is that we are here in Europe, and a lot of the open source movement happened here at the grassroots level. The kernel came from here; MySQL and MariaDB came from here; a lot of the large kernel community is also based in Europe. But we are not seeing the adoption of open source, even on the corporate side, the way we have seen it in the US. The public sector here has also done a lot of great work; there was a notable project where they started using Linux, which did not work out very well back then. But the difference I see between here and the US around open source is that a lot of the great work there came from approaching it by communicating with the community and getting them involved, versus here, where the decision was made by lawmakers, as you said, without actually understanding the nuances. Now we have Linux Foundation Europe here as well. What role do you see Linux Foundation Europe playing in building a bridge between the public sector and open source? Yes, there is grassroots development, but a lot of work is needed in the public sector and the private sector so that they understand how open source works before they get involved with it.

It's our belief that open source is a public good, and the maintenance of open source is a shared responsibility between the public sector, the private sector, and the community. One of the reasons the LF stood up LF Europe is principally to promote this much better in Europe. Also, as similar as we are, there are cultural differences between Europe and the US, and we see them manifest in how the governments approach things. As an example, the US was not always engaging the open source community in this way; this is the path the US has taken.
We've seen with many technology issues that the European legislative perspective seems to be that they want to get ahead of things before harm is done, and as a result, I suspect that overzealousness sometimes leads to them moving a little too quickly through the legislative process. So I think that may explain some of it. I think there's an opportunity, and we do work very closely with our colleagues within LF Europe to better engage with Brussels; we're starting that up right now. From our perspective, this isn't a geography-specific thing. From an LF perspective, the world uses open source. Open source appears everywhere, from mobile phones to satellites, and if we can make it better, whether through security or through our work in CNCF and all these other places, our intent is to make it better. But we do recognize that engaging on the ground can be nuanced, and that's why organizations like LF Europe exist: so that we can have the right ties to both European members and the European public sector.

You also mentioned the work in the AI space. This is, once again, a very complicated, very challenging field at this time, both from a security standpoint and even in understanding what open source means in the context of AI. So talk a bit about where we are: are we in a phase where we're still trying to understand, or can we say, hey, we understand everything, and now the job is to secure those workloads and environments?

I think there are certain parts of AI that we understand require securing, and there's a whole bunch of other things to do with AI. I'll focus on the things I know well and flag the things I don't. I don't have an opinion on intellectual property as it pertains to AI. I know that's something currently being discussed, in terms of intellectual property ownership and whether something generated by a large language model can actually be copyrighted.
I'll leave that for the lawyers to figure out. When it comes to AI, I think of it as a software engineering problem, and there are some principles we've learned are very valuable for open source security in general that I think we can apply to AI. Now, please understand this doesn't preclude any of the work the AI working group is about to take up; it was only instantiated in the last couple of weeks, so it is still forming its initial work products. But the areas I would look at as a security person are, one, traceability and provenance, and two, deterministic outputs. By deterministic outputs I mean that the same input should always lead to the same output. If you get into a scenario where that's inconsistent, it becomes really hard to debug: if you have a large language model that, for the sake of argument, is producing insecure code, and the output keeps changing every time you provide the same input, it's really difficult to determine how to fix it. Provenance matters because within artificial intelligence it's really important to understand not only how the model executes, but where the data that produced the model weights came from. There's what I believe is a fairly obvious computer science challenge behind how these models are generated, because from a security perspective, somebody nefarious could construct a model in such a way that it leads to intentionally malicious output. It could be that you're designing secure software with your LLM and somebody has tampered with the LLM so that it generates insecure software. That would be very bad. There are a number of infrastructure attacks possible, just as with any other computer. And there are prompt injection attacks, where specially crafted prompts can cause an LLM to respond in an undesirable way, like executing code locally on the machine.
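The determinism point above can be sketched in a few lines of Python. This is a toy stand-in for a model, not a real LLM: the vocabulary, scoring function, and prompt below are all made up for illustration. It shows why greedy decoding (always taking the highest-scoring token) is reproducible, while sampled decoding is only reproducible if the random seed is recorded alongside the input.

```python
import random

# Hypothetical vocabulary of tokens a code-generating model might emit.
VOCAB = ["strcpy", "strncpy", "memcpy", "snprintf"]

def scores(prompt: str) -> list[int]:
    # A pure function from prompt to token scores: no hidden state,
    # no clock, no RNG, so the scores themselves are deterministic.
    return [sum(ord(c) for c in prompt + tok) % 97 for tok in VOCAB]

def greedy_pick(prompt: str) -> str:
    # Greedy decoding: always take the argmax. Same input -> same output.
    s = scores(prompt)
    return VOCAB[s.index(max(s))]

def sampled_pick(prompt: str, rng: random.Random) -> str:
    # Temperature-style sampling: the output now depends on RNG state,
    # so it is only reproducible if the seed is pinned with the prompt.
    return rng.choices(VOCAB, weights=[w + 1 for w in scores(prompt)])[0]

prompt = "copy user input into a fixed buffer"
assert greedy_pick(prompt) == greedy_pick(prompt)      # deterministic
a = sampled_pick(prompt, random.Random(42))
b = sampled_pick(prompt, random.Random(42))
assert a == b  # reproducible only because the same seed was used
```

In practice this is why debugging guidance often says to pin the decoding parameters (greedy or temperature 0, plus a fixed seed) when trying to reproduce a model output that looks insecure.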
These are some of the areas I would focus on, but I also think there are a lot of benefits. As I mentioned earlier, the possibility of using LLMs to secure open source software as part of the DARPA challenge is amazing and a great use of the technology. Another opportunity is around some of the legacy languages we code in, like C. I'm an old C developer, and I will continue to enjoy writing C, but for safety-critical code, and especially areas where memory safety is paramount, like handling secrets, network access, or user input, it would really be wonderful to recode that in a memory-safe language like Rust. No, that doesn't resolve all security issues, and with any refactoring there's certainly the risk of introducing new bugs. But a great application of an LLM might be converting large swaths of old C into a memory-safe language like Rust.

Let's look at security from the perspective of AI. The AI stack, models plus platforms, is more complicated than a LAMP stack, and you cannot protect something when you don't even have access to the code. So does it also matter that those components be open source? What I'm trying to understand is: how important is it for AI itself to be open source, so that we have visibility into the whole supply chain?

I think the difference when it comes to open sourcing AI is that you need to consider more than just open sourcing the technology. You talk about the LAMP stack: I believe one of the reasons it was so widely used in the early days of open source is that Linux, Apache, MySQL, and Perl or PHP, depending on your flavor, were all very well understood, and all open source.
The nuance that comes in with AI is that you have to worry not only about the AI models being executed, but about how those weights were derived and what data was ingested to produce them, so that you can understand what the outputs are going to be. This is something the community will need to reason over, and I'm not sure where it's going to land, but as a software engineer it seems obvious to me that if you want to understand the way an AI model works to the same level that you understand Apache, MySQL, PHP, or Perl, you need to understand not only the algorithm behind it, but how the model weights were derived from the training data. It'll be interesting to see how the community approaches that going forward.

We've talked about an overview of security, the Secure Open Source Software Summit that you folks attended in D.C., and AI and open source. Is there anything else you think we should talk about in terms of OpenSSF, or have we touched on the key points today?

We're really excited about the CISA open source security roadmap. CISA, within the US, published a roadmap last week, actually while we were at the Secure Open Source Software Summit, the SOSS Summit, and it pledges three main things. CISA is accountable for security within critical infrastructure. So: first, to better understand how open source software is used within critical infrastructure, like power, sewage, banking, technology, et cetera. Second, to better understand how open source software is used within the federal government. And third, to better partner with the community. On all three pillars we strongly support CISA. We think this is the right way to engage, and we look forward to helping them on their journey as best we can. Of course, CISA themselves have hired a number of very smart technologists, and we think there's going to be some great work there.
But coming back to it, what excites us most is the idea that, since open source software is a public good involving the public sector, the private sector, and the community, we're really seeing the public sector step up, and we're hopeful about the outcomes.

Can you also talk about some upcoming Linux Foundation events, or other events you'll be participating in, and what the themes will be?

At the end of October we have the Linux Foundation Member Summit, where we'll have our on-site board meeting, and I'll be presenting at Member Summit on the progress we've made with OpenSSF so far. We also have OpenSSF Day Japan at the beginning of December, so I'm really looking forward to that as well. And of course, coming around in the new year, we'll have a number of events co-located with OSS North America and EU, as we always do. In partnership with some of our public sector partners, we'll also be hosting a number of summits to help with collaboration and coordination between the public sector and our communities.

Thank you so much for taking the time today to give us an update on where we are when it comes to security. Thanks for all those insights, and I would love to chat with you again. Thank you.

Thank you so much for having me, and I look forward to speaking again.