It's a great honor to be invited to talk on such an interesting issue as the intersection of open source and AI. Maybe people can call that open source AI. Here's Steph in the back. Come on up, Steph. Good to see you. Well, so like I mentioned, I'll invite people to introduce themselves in a second. I'm Justin Colannino. I'm here today representing GitHub and their developer policy team, which advocates for policy globally to advance software development, in particular open source software development. I'm also an associate general counsel at Microsoft, working on open source software law, and I'm on the board of the Open Source Initiative, so lots of different hats. I just wanted to make a few remarks. I was really impressed with the morning's panels so far, and I really view it as a testament to the success of open source. Really, and I say this kind of ducking a little bit from the Microsoft perspective, but open source won; it's ubiquitous across software development now. The world has come around to the idea that transparency, collaborative improvement, autonomy, and the freedom to use code for any purpose drive innovation and allow us all to learn and build upon what came before. We heard about that in so many different ways this morning. A guiding light of that has been the open source definition, and what that definition really embodies is removing the barriers to sharing. Forty years ago, the first proto-MIT license (I call it that; I'm a lawyer, so I'm going to devolve into that for just a second) was written. Some folks came to an unknown lawyer in the MIT tech transfer office and said, hey, we want to share this code. Any problem with that? How do we do that? And the lawyer wrote a very simple license that said you have broad rights under copyright law and intellectual property laws more generally.
And by the way, we need a limitation of liability, because we don't know where this code is going to go or how it's going to be used, and that's why we see that big bold text at the end of every open source license that says: you take this and do what you want with it, but don't come after me. And so the idea was, let's remove the barriers, the barriers of liability, to sharing. And removing that enables amazing innovation. It permits frictionless, zero-marginal-cost reproduction. Once written, it goes everywhere and can help anyone, and it drives innovation in Europe and throughout the world. But now that we've won, there's huge responsibility that comes alongside it. People have questions about security. People have questions about product liability: when these things go everywhere, what's the liability for the people building the products? And we've just had a cycle of regulation in Europe and in other places, thinking about what the right responsibilities are, what new barriers to sharing to put onto the software industry generally and open source in particular. And I think it's a testament to OpenForum Europe and the hard work of so many others in the room that we've come out with solutions that don't impose too many new barriers on that sharing and that promote that cycle of innovation. And the one thing I want to say is that the fact that people are understanding that open innovation cycle is, I think, a testament itself. It was a dream 10 years ago, when this conference was started, that people would be talking about open source and open source innovation everywhere, and in particular in areas like artificial intelligence, thinking about the benefits that open source brings. And we've heard a lot about the trillion-dollar demand-side number, 8.8 trillion. I just want to put that in perspective for one second. That's three times the market cap of Microsoft. And of course, Microsoft also is a zero-marginal-cost goods company.
We write platforms, and those platforms are deployed everywhere, and we sell them, and everybody derives value from them. But the fact that open source is valued at three times that amount, I think, is very interesting. And I think the difference might be the fact that open source has that permissionlessness to it. It's still zero-marginal-cost innovation. Once it's written, it can be sent everywhere. But it's permissionless, and anybody can pick it up and derive value from it. So sorry, we're here to talk about AI. What does that mean for AI? I just want to highlight two things about why open source has won in AI as well. At the very beginning, most AI developers immediately started putting random open source licenses on their stuff. It was great. Right away, and now you can go online and there are ML models available everywhere: about 500,000 on one site called Hugging Face, and that number has been doubling every six months. If we come back in a year, I'd expect there to be just shy of two million. And now we're considering how regulation can support and remove barriers to sharing, to allow open innovation in AI, while also thinking about the security and safety issues that we've seen in software, which I think people are thinking might be a little bit bigger in the AI space. So with those introductory remarks: we have some really interesting questions and a really experienced panel. If everybody could go down the line and introduce yourself (the mic's in front of you) very briefly, one minute, who you are, where you're from, and then we'll begin with some questions. I'll start from here. I'm Stefano Maffulli, the Executive Director of the Open Source Initiative. I'm Arnaud Le Hors, I'm part of the Open Technology Group at IBM. I'm an open standards and open source specialist. I've been doing open source since 1990. Hello, my name is Yordanka Ivanova.
I'm a legal and policy officer in the European Commission, in DG CNECT, working on artificial intelligence and in particular the Artificial Intelligence Act. And I'm very happy to be here and to replace my head of unit, who unfortunately had an accident and was not able to come, but we are very interested in the debate and want to contribute. I'm Brian Behlendorf. I'm Chief AI Strategist for the Linux Foundation, and I also wear a hat as CTO for the OpenWallet Foundation, another Linux Foundation project, and I have been in open source a long, long time. Hi, I am Miapetra Kumpula-Natri, a Member of the European Parliament. I am coming from Finland. My engineering diploma is from 1995, and since then I have been doing politics more than technology. So please, when you go into the technicalities in the questions, know that I know more about the legislative trials, putting the great rails in place for the future. Excellent. Thanks, everybody. So before we get to those really interesting policy questions, legislative questions, it might be helpful to break down AI a little bit and think about how it's similar to source code and how it might be a little bit different. Steph, you've been doing a deep dive at OSI, as you and I know because we work there. What have you found as you've gone through that process? We started thinking about AI because we immediately realized that it contains new artifacts. It looks similar to software, but it's deceptively similar. We noticed that the pipeline to build an AI system includes new elements, that what the software is made of is slightly different, and that it creates new artifacts for which it is not immediately visible what the legal frameworks are. And I'll be specific. On one hand you have machine learning especially, which requires data on the ingestion front.
And when you start to deal with data, all of a sudden you, as a developer, start getting dragged into conversations around privacy or terms of use and other new legislation and legal frameworks that are not harmonized, that are not similar across different parts of the world. And that's a big challenge. Once the data is fed into the machine for training, it generates new artifacts. These new artifacts, the model weights (you may have heard the term), the parameters, these new things are the results of this training elaboration. In the legal communities around the world, there is no immediate, natural understanding of whether those fall under copyright law or whether they require new laws. So my first impression, when I started playing with these new machines and went to look at Hugging Face, was that a lot of the licenses being used are licenses that were written with copyright in mind. And my question to my board and other legal experts I know was: how do you feel about that? So when I started sensing this lack of alignment, I asked the wider community; that's the role of the Open Source Initiative, to be a convener of conversation. So we started asking: what does it mean? And we're driving this conversation now, we've been doing it for over a year, to understand exactly what open source AI means. And we are getting towards a conclusion; hopefully, by the end of this year we're going to have an understanding. The main question that we need to find an answer for is: what is the preferred form to make modifications to an AI system? In other words, how do you change the outputs, given the input that you, as a user, want to give? And ultimately, we really want to drive a shared understanding, because we believe that AI systems deserve the same sort of permissionless, frictionless innovation that has driven this immense value that open source software has. Thanks, Steph.
And that idea of the preferred form for making modifications really has been a touchstone of open source in the past, so that people can pick things up and have that frictionless improvement and innovation cycle that we talked about at the beginning. But maybe zooming way out now to the other side of AI, perhaps. Since generative AI took the world by storm in the past two years or so, there's been an ongoing discussion and debate about perceived dangers in having machines that are able to synthesize and generate text and pictures and video and code and descriptions of new chemical agents and biological compounds. Yordanka, Miapetra, as policymakers, how do you consider this debate as you think about legislation around AI? Yes, first of all, we were prepared. In the Parliament, we also had this special committee on AI in the digital age that worked for over a year. And we heard hundreds of specialists to get a clue about what we should then later legislate. And the Commission was doing its own proposal at the same time. So when we had the proposal, we were already ready. And I was very happy when I had the invitation here; I don't remember everything by heart, but I found the old study requested by the AIDA committee, "Challenges and limits of an open source approach to AI". So I said, oh my God, I have to read this quickly. So the first criticism was that politicians do not understand, that it's not time to regulate something that we don't even have. And that has changed now. Now we are criticized that you are legislating too late; it's so late that today the Council should take the AI Act and approve it. And also, I think that the USA got a bit faster with the executive order, that yes, something has to be registered. The risk-based system is agreed globally to be the good way to look at what AI is used for. So it's not technology that we try to legislate.
It is the use cases, and the risk-based idea. So I think the appetite in political life, and more understanding, and the very much needed public debate, came along when it became more public that you can have AI systems on your mobile phone to test a little bit, or your 15-year-old is happy that the teacher doesn't know when they have not read the book but only used some language model to make their answer on the book they didn't read, and then kind of created their own text. So now it's everywhere. And I think the political response is going in the good direction. There is still some democratic control; there should be some possibility to see where and how new technologies are used. And then one specific point: of course, generative AI was not in the Commission proposal. Parliament required it. It was the last topic to be accepted also by the Council, because Parliament didn't want to accept the act without it. And I don't know, technically or even in the articles, how successful that is; that is for the Commission to implement. But at the same time, it cannot be that, if you are in charge of something you are doing on the markets, the big players, the generative systems behind many use cases, are not responsible at all, and then only the smaller players, inventors, companies have to be responsible if they use this, on the risk basis. So it's as simple as that; the political will was quite clear in the Parliament about where to go. And I'll just mention here very shortly, we can come back to it later: we also had the understanding at the beginning of this term that AI is only about data, so where to get data. I was happy to be the rapporteur of the data strategy for the Parliament in this term. So we wanted to break that pose. So when you are the owner of a connected equipment, it doesn't create data if you don't use it.
So if you use it, should the value only go to the owner or builder of the machine, or should that be the new economy for Europe, the data economy, where more data will be available for more users, for more innovations, and even for the equipment user? So these have been some of the motivations in this term, in the majority of the Parliament, to push forward innovative possibilities to use European data. Maybe I can also complement that. Indeed, Europe has been one of the leading forces both to promote the benefits of artificial intelligence and its use, because we do believe in the great potential it has to improve our lives and to promote the competitiveness of our companies, and at the same time to address the ethical issues and ensure that these systems are trustworthy and safe. And it's been on the basis of these two objectives that we proposed, as was said, quite some years ago, the EU communication strategy, but also the first legislative proposal for artificial intelligence, in 2021. And we are now very happy that globally there is already a big consensus that regulation is not only needed but is actually very important: both to support innovation, to provide legal certainty about how we deal with these challenges of artificial intelligence, and also to provide competitive markets and build trust, very important, in both users but also society, about the effects of artificial intelligence, including to manage quite some of these risks that were mentioned. And indeed, today is an important day for the AI Act, because we've reached, hopefully, one of the final key milestones, when the Council will review it, and then hopefully we wait also for the Parliament and its very soon adoption in spring. And maybe just some elements of how we tackled some of the issues that were mentioned: how to both promote innovation and give legal certainty, but also at the same time ensure proportionality and future-proofness of the legal framework, so it really addresses the risks but also contributes
to the competitiveness of the companies, and trust. The basis of our approach is indeed the risk-based approach. We tackle very specific use cases, with intended purposes, specific applications of AI systems where there are very important consequences for people's lives and safety, with specific requirements such as data quality, as we know that it's quite important to avoid bias and ensure accuracy of the systems, but also general good practices: risk management, documentation, things that are quite well recognized internationally. And we did also look into the basic models as well, because although we regulate mainly the applications, with generative AI and these large language models we've seen that they're so important for the value chain, and they are one of the essential components, especially those that are big and important and integrated in many applications. So indeed, one of the key components during the legislative process has been to add special rules for these general-purpose AI models, including generative AI. For the most part, they are very light touch and focus mainly on documentation and transparency, to facilitate the work of downstream providers of high-risk applications. But we do recognize also that there could be a very limited number of quite impactful and very powerful models that could pose systemic risks for the market, and for them we also propose very proportionate and targeted rules that will be implemented mainly through codes of practice, in a very collaborative manner with the scientific community and all providers of those models. All right, yes. And so what I'm hearing is kind of an understanding that, when we're doing this regulation, we need to be thinking about how to do it proportionately, to enhance open source AI and, at the same time, to mitigate these risks. Thinking about maybe an historical perspective from the
open source world: there was a lot of debate in the early days of open source around security and the proliferation of cryptography, that the technology could get into the wrong hands. Arnaud, Brian, looking backwards, what lessons might we learn, as we think about the regulation that's happening here and in the United States, from open source and those types of concerns that we've seen? Sure, I'll jump into this first and then hand to Arnaud. So, with respect to the moderator, I remember a moment when a certain technology executive called open source a cancer upon the technology industry, because it would eradicate the potential for profits from technology. I don't think that quite happened. This was 1999, 2000, I think, something around that time frame. Instead, it drove an incredible economy of people building on top. It was a boost; it was a way to build a common platform that was shared. Are any of you familiar with Cory Doctorow's term enshittification? Apologies for this being recorded, but it's the premise that sometimes platforms decay as the people who run the platforms extract value from them. Open source doesn't have that phenomenon. Instead, you get increasing returns from more people participating in the platform and building value on top. That's what we saw in the early days of open source, where the standardization of the Unix landscape on the Linux kernel allowed everybody to move up the stack and create more value in the cloud and other places. And then it repeated with the rise of Kubernetes, which meant that people could fight their competitive battles way up here, where there is a better value proposition, rather than in a Mac-versus-PC kind of battle down at the bottom. So open source has helped equalize and bring a lot more competition into the market for the benefit of everybody, and it has the chance to do that with AI as well, as long as we don't make a mistake. I think at the
grassroots, there are also concerns about security: if everybody can see the code, everybody can hack it, everybody can compromise it. Or if the development process was a wild west, anybody could throw back doors into the code and you couldn't trust it. There were also concerns that if you gave people too much open source power when it came to cryptography, well, then bad actors would be able to communicate securely without anybody being able to listen in on their conversations and keep bad things from happening. Some of you might even remember that in the United States you could not publish cryptography software more powerful than 128 bits, and ship it outside the United States, without getting a permit from the United States government; otherwise you were considered an arms dealer. And in '99, the Bernstein decision from the Ninth Circuit conclusively determined that open source software collaboration is an act of speech rather than an act of commerce, and put it firmly on the side of the cryptographers and the software developers, who then developed things like OpenSSL and secure sockets and security in the Apache web server and all this great stuff. So, you know, history doesn't necessarily repeat, but it sure does rhyme. And looking at the current AI debates through this lens, we need to think about how we allow for innovation and collaboration and freedom of expression and freedom of thought and freedom of development upstream, while holding folks downstream accountable for their actions as they deliver products and services to the end users. I think that's the right frame for us to understand the history of open source and technology development and frame it towards AI, and also to look at: can we use open source as an enabler of ways to address the kinds of harms and concerns that we all have? Could we invest in digital public goods, in the form of data
sets, in the form of software, that could address AI fairness concerns, in the form of data sets focused on addressing equity issues or equity concerns? Can we move away from using Reddit comments as training data? I don't know if any of you have read Reddit, but that's not how I want an AI to talk, necessarily. And move towards the kinds of things like the Allen Institute's really great data sets that derive from Wikipedia, from Project Gutenberg, and others. We can cultivate these things and build better AI by collaborating on the table stakes, on the stuff that is boring, the hard chop-wood-carry-water work of building better AI systems, and then allow interesting commercial models on top of that. So I think what's interesting, looking back at what happened with open source, is we learned over the years that there were clearly many different kinds of open source, and so you have a whole range. You can just put your code out there and say that's open source; well, if there's the right license, that might be true, but the reality is there is more to it than that. Because what we learned is that what's important is open governance, which means: who controls this code? One of the things that we do, IBM works a lot with the Linux Foundation, for instance, because we have these foundations, where there are strict rules that provide this kind of open governance, where there isn't a single company in control of the source code. The same is going to be true for AI. We see a lot of models being distributed, but you don't know how they were built; you don't know what data was used behind them. And so we really need to go beyond this and have a true open collaboration environment that allows us to work with transparency, so that we know what's behind those models and how they were built. And that's a very important thing we have learned from open source, which we need to adopt for AI. If I can just build on that: I see Anni Lai in the audience
here, and she is part of the Linux Foundation's AI and Data project, one of the products of which is something that's emerging called the Model Openness Framework. It's not an attempt to define open source AI or to change licensing terms. It's a way to say, for those 500,000 models on Hugging Face, how much can you categorize them in terms of degrees of openness, degrees of how much you can really fork, in the way that open source licenses allow us to fork? It's a way of recognizing that licensing around data is super complicated, but how might we bend the industry towards greater openness? So we're going to need new tools to help us understand this different landscape, and this is one that I just wanted to highlight. And one thing I also wanted to add, from a regulation point of view, and it also has to do with efforts like OSI's on defining what open source AI really is: one of the challenges we have is that if you're too high-level, it becomes moot; it doesn't have any meaning or usefulness. If you go too much into the detail, you quickly corner yourself, in a world where this technology evolves so fast. If you look at the AI Act, I think they have actually, so far, found the right balance of keeping it at a fairly high level, so they have principles-oriented regulation. Because if we look at GDPR, for instance, I'm a strong supporter of GDPR, I think it did some good, but at the same time, I worked in blockchain along with Brian for several years, and the problem with GDPR is that it went too far down into the detail of the architecture of the database system, based on a traditional architecture where you have control points. And when we worked on blockchain, we realized it didn't work anymore, because we don't have single control points. And so the danger with regulation is to go too far into the details. So far, again, I think the AI Act does a good job at finding the right balance, keeping it at a high level of principle. What we need to make sure now is, the next phase is
going to be to develop the standards that further define how this is going to be implemented, and we need to be on the lookout not to fall into that mechanism again of going too far into the details. Keep in mind that the AI Act has been in the works for several years already; it's going to take two years from publication to be fully enforceable, and it's going to take several years for the standards bodies to define the standards that will be used to implement it. Meanwhile, the industry is going full on, developing the technology in ways that we cannot predict, honestly. We already see it in ISO/IEC JTC 1: there are some standards that have been in the works for a while, and they are already a bit obsolete because they're missing major components that just weren't there when they started. It's nobody's fault; it just goes too fast. And so I think this is a true challenge for us as a society: to find the right balance. When is it too soon, when is it too late? That's all. Yeah, just one comment or thought. I think that it is not only industry, or big industry, going ahead. I see this, and this is why I'm enthusiastic to be here: there is a lot happening, small scale, big scale, everywhere. And then on the question of data and good-quality data, I wanted to bring the example of a small language group. Finnish, which we are 5.5 million speaking, is not that interesting for good training by the big players. So there is now a model of our own that uses the national library's digitized collections, which has been going on for 20 years, and then uses the supercomputers partly sponsored by the EU; I have to advertise LUMI, giving the possibility to compute. The universities have collected, the national broadcasting company has created data, and they do it perfectly as the AI Act wishes: these are the sources of data that we are using. This is the open possibility for everyone to do it, so maybe other languages in Europe might do
the same. So it's not only one way, where everything later will be dependent on a couple of players in the world; you can use the different systems and develop. So now Finnish science is involved in that one, and I'm very happy to see this happening. So we also have to see that there is much more happening than the most common systems and the biggest ones. I mean, I'm loving this discussion, because what I'm pulling out from it, as I'm standing here, is that the governance systems and collaboration of open source really matter, and participation in that innovation cycle is what's being enabled by wide-open governance and participation. As we think through that: we were just talking a little bit about control in these models, and maybe just to throw some concepts together on that. So, licensing: we're seeing a compression of the licensing innovation of the last 40 years into one moment for AI models, where we have licenses that are trying to restrict uses for ethical purposes on the one hand, and then we have other ones that are restricting competition if you use their model on the other, Falcon and Llama 2 being ones in that area. And at the same time, there's a question about what elements of the model need to be there, so you can have this open source participation that we were just talking about. So with that in mind, Steph, what is the importance of OSI's work driving an open definition, and what's at stake if we don't slice that definition right, when we have regulations referring to free and open source models? Right. So, well, one thing that I like to remember is how open source as a definition, and the principles behind it, have been evolving, and have been a guidance to the evolution of software itself, for many years. And I think that, my wishful thinking, my hope, my secret hope, is to have the same sort of principles
established quickly, so that the AI space can evolve together, having a framework of reference that carries the value of collaboration, of permissionless evolution; the fact that the communities of developers, creators of AI, and other stakeholders can have an immediate understanding of what they can do and what they cannot do, what they should do and what they should not do, when it comes to downloading an AI system, modifying it, and putting it back on the market. Together with regulators, of course, because I am quite sure, software has told us already, that we've gone beyond the times when we could just deploy something, put it on the market, and say: if it breaks, you keep the pieces. I think it's fair to think that we have responsibilities as humans, creators of anything; when we put it out into the world, we should think twice about what we're doing and why, and how we can enable abuse, and things like that. It's not the role of the Open Source Initiative to think about those things, but it's my role as a human to keep that in mind in general. And, you know, you asked what happens if we get it wrong. I don't think that there is much of a chance of getting it wrong, as much as there is a risk of getting it late, or not getting it at all. Because, like you mentioned, there are many licenses and there are many models, and new licenses are emerging, and everyone on the market is starting to advertise their models, their AI systems, as open source, without that shared understanding. And so the effort that we're doing with the OSI is to build that shared understanding among different stakeholders. That unfortunately takes time, but I'm really pushing everyone to come to a solid agreement, because I think that we all instinctively know what it means: we need to be able to use, to share, to study, and to modify these systems,
and now we need to very quickly come to the practicalities: what is it that we need to do? The free software definition says you need access to source code in order to study and modify a program; we need the equivalent of that little sentence for AI systems, and I think we're getting very close to understanding that. Excellent. Now, the EU AI Act, which I guess is being voted on today, very exciting, provides an exception to some of the regulation, accounting for open source development. And that exception happens, number one, when the power of the model is below some threshold, and number two, when the models are made accessible to the public, and I'm quoting here, "under a free and open source license that allows for the access, usage, modification, and distribution of the model, and whose parameters, including the weights, the information on the model architecture, and information on model usage, are made publicly available." Does this get it right? Does this promote innovation the way we might like? Anything missing? Arnaud, I'll start with you, then Steph and others. So again, I think they get it right in the sense that it's pretty high-level; they focus on use cases, as opposed to trying to regulate the technology itself. So from that point of view, it's good. Of course, these thresholds, measured in flops, seem pretty arbitrary, and they're bound to be broken, so they will have to be revised. But again, this is part of the challenge: you have to start somewhere, and when is the right time to regulate is very difficult. I think in general they are doing the right thing for now; we'll have to keep up and make sure of it. Well, maybe if I can complement: indeed, it's been quite challenging to regulate such fast developments, with also many unknowns. So for those very powerful general-purpose models with systemic risks, there has been one threshold
now set, but we have also tried to build a system that allows enough flexibility to take developments into account and adapt, with performance benchmarks that can evolve as we see how the technology develops and performs, in a collaborative process, to keep the regulation future-proof, because regulation always has to catch up with technological development. We thought the right way to do that was together with the industry and the community. Quite importantly, the AI Act is one of the first pieces of EU legislation that tries to at least set principles for when open source AI models can be exempted from obligations: it must be truly open source, and we need a shared understanding of what kind of transparency those models must provide to benefit from the exceptions, because then they have already ensured the necessary level. We think this promotes a great deal of transparency in the community as a whole, as well as collaborative development. In any case, I also wanted to say that the AI Act will not apply to models used solely for research, development, or prototyping, which is quite important: we want to make sure there is no burden at all, and no unintended consequences, on innovation, research, and the overall open source community. Finally, another important point I would like to make is that the AI Office, which will now be created, and whose founding decision was in fact adopted in anticipation of the AI Act's implementation, will have an important role as the body exclusively competent for the supervision of general-purpose AI models. But we are also trying to make sure that the codes of practice we will develop to operationalise these high-level principles are developed together with the whole community, all providers, including open source providers. And in the decision just adopted for the creation of the AI Office, quite importantly, we
also commit to creating a special forum for collaboration with the open source community, because we recognise its importance and want to promote a strong ecosystem and build innovation in AI. One of the most important elements we see is that some of the biggest AI players in the EU are now actually building open source models, and we think that, compared with bigger players outside the EU, this collaborative development, together with access to EU resources such as supercomputers, can help EU startups and SMEs build competitive models that also align with our approaches.

One cool thing about Europe is that I think it has many of the legislative elements necessary for good AI: there is the GDPR managing privacy and data, there is a right to data mining established in the Copyright Directive, and now the AI Act on top of it. I think all the elements are there. Of course everything can be improved and perfected, but it's a testament to the innovation that has been happening on the governance side as well.

Can I jump in? This is very encouraging to hear; I hadn't followed the last few iterations of the trilogue. There is a balancing act there in preserving the right kind of space for open source, but there is a different kind of balancing act that the Biden administration's AI policy in the United States tried to strike as well: between throttling, restricting, and containing on one side, and enabling, steering, and encouraging in the right directions on the other. The Biden AI policy, which has not yet been a funded, passed set of laws, and probably won't be in this current cycle, includes a proposal to put chief AI officers inside every government agency to drive AI adoption plans, so that the IT offices make maximal use of this and develop datasets from government data that can be used to drive the development of LLMs for domain
specific purposes in science and engineering, in the health administrations, the Veterans Administration, those sorts of things. This kind of capacity building is coming at a very challenging time: you can't hire enough AI talent to fill all the roles it calls for without completely bursting the budgets normally put towards government employees. But this is a really important angle: how might the public sector become a stakeholder in these technologies, develop an internal capacity for adopting them, and in doing so be part of steering them in directions that address many of the harms people are concerned about?

I have to add here that the Data Act was the final piece: it established that publicly created data should be available, even sensitive data for secondary use. The secondary use of health data from Finland was the model for that thinking: how to get access to public data in order to use it and train on it. So that is not news anymore.

I'm seeing the yellow light that we're almost at time, so for the last question I'd like to say this: there's a signal, from the EU AI Act and from our conversation today, that there is a perceived public benefit to open source AI. Thinking more broadly and looking forward, we've seen open source shape innovation globally over the last 20 to 30 years. Thinking about Europe, the United States, and the globe, what's the hope for how open source AI might shape the industry and the world moving forward? Steph, I'm going to start with you, and we'll go down to the end.

Well, thank you. My hope is that we find legal frameworks that work across the globe as soon as possible, covering two things specifically. One is the legal nature of weights, parameters, anything that is created
by the training. That is something I'd love to see defined, and defined at a global level, together with the clarification about data mining. The thing open source was built upon was the Berne Convention: the fact that there is copyright law applied more or less similarly in many parts of the world is what allowed open source to become a global phenomenon. In AI, the fact that we have data and these new artifacts, the model weights and parameters, requires new legal frameworks, and my wish is that we quickly reach a global understanding of what the permissions are, what is allowed and what is not, so that innovation can really happen in the same way it has for open source.

One thing I want to say quickly: I see a lot of effort in Europe, this reaction of "we need a European initiative to promote open source AI." What I find unfortunate is that this often translates into starting a European-focused effort that in fact comes into competition with other global efforts, and that is a misunderstanding of how you can actually influence those global efforts, which is by contributing: the influence lies with the people who actually work in those organisations. Most of them, like many of the Linux Foundation organisations, are completely open, with open governance; you just have to show up. So I really want to encourage European companies to do that.

I do hope that the AI Act contributes both to innovation, legal certainty, and development, including for the open source community, and to responsible, trustworthy AI. We've seen a lot of good practices, and we will continue to work together with the community, including on the early implementation of the AI Act through initiatives we have launched, like the AI Pact with companies and the AI Office. And we recognise
also that one of the key focuses now should be working together with international partners to ensure convergence and common understanding, and we do so in the OECD, the UN, trade and technology dialogues, and bilateral partnerships. That is a very important focus for us now.

My hope is that open source is recognised as a way to address two of the biggest sources of risk in the adoption of AI technologies. One of those is the concentration of market power. Open source has been about decentralising that power by making enabling technologies and platforms available to anybody willing to pick them up and adopt them, and a lot of the risks articulated out there come from unequal application of that power: what happens if one company moves too quickly, or if the rest of us are able to use AI only through an API, where the interesting stuff is on the other side of that API? The Linux Foundation recently published a survey on generative AI in which we talked to top IT leaders and asked them how they want to use it. Every single one of them wants to look behind the API: they want a model, the ability to build their models, correct their models, modify the data underneath, understand how it works, and adapt it to their organisation's needs, the kind of thing you can only do with open source.

The second part of this, addressing AI harms, is to some degree to fight fire with fire. Let's talk about misinformation: using content labelling techniques like C2PA to identify authentic photographs and authentic documents in order to fight misinformation is an incredibly important project. C2PA is a standards effort that's part of the Linux Foundation but very much integrated into many open source projects. Another angle to this: we've been fighting spam with AI technologies for 20 years, and anyone who's used Vipul's Razor or other anti-spam tools knows
that it builds its filters by studying databases of known spam. To fight misinformation, we will need AI tools very local to the end user to help them navigate the flood of false information, find the signal in that noise, and navigate it more intelligently. That's just one example of where AI could be used to fight many of the harms that many of us are concerned about arising from AI, and that is only possible if those tools are broadly available, thanks to open source.

Petra? Yes, I've been in the Parliament for ten years now, nine and a half. The first mandate was very much about critical infrastructure: how to reach everyone, no one left behind, how to get 5G and trusted networks. And this mandate has been so much about data, data availability, and content. Then at the end of this term we read the World Economic Forum saying that disinformation is the biggest threat. So the question becomes where, how, and for what use the technology is applied, and whether there is some room for democratic decision-making. And, I'm sorry, Europeans also need to look at European jobs and European well-being. I hope I'm the one speaking in favour of data movements globally. It takes two to tango, but good competition is always better for markets and solutions, and monopolistic structures are very seldom good. That's why the question is how to build on other possibilities. It's an open question; open source has been, so far, so good a challenger. I always welcome challengers, for making the broad possibility that is called the market economy; I haven't seen better, and that's why I keep working for these opportunities for everyone. Thank you.

Well, thanks everybody. This was a really engaging, wonderful panel with lots of strong participation. Thanks very much, and thanks everybody for having us.