And I'll start with the mission, which is quite easy to state, but before I do, one thing up front. We are the steward of the Open Source Definition, and the emphasis on steward means that we don't own the definition as a market asset: we maintain it for the community. So if the community wants to change it — and people yell at us, angrily, all the time — the community can change it. And we are a public charity, which is something I want to stress, especially because so many other organizations in this space are nonprofits that can still work and behave like trade associations. We, by our bylaws and by the fact that we are incorporated in the United States as a 501(c)(3), are working to make a difference for the general public, not only for the members of our community. That is a distinction worth highlighting. Because, again, people yell at me, saying: you take money from Google, you take money from Microsoft, therefore you represent Microsoft and Google. Which is not true — technically, legally, and, I would say, in an ultimate sense. And historically, I have been on the other side: I was protesting against corporations before I joined the organization. Our mission is to advocate for open source and the principles behind it, to help people build good practices around open source, and to share those practices and ideas, because we believe that is going to be to the benefit of society as a whole. The other part of the mission is that we want to build bridges: between businesses, research, academia, and civil society, all around the same thing. So it's not really only our mission.
I think it's a goal that everyone in the community shares. And despite being established in the United States, we do have a global reach. So what we do concretely: we maintain the Open Source Definition, and we review new licenses against its principles. This is something we have been doing for a very long time — some twenty-five years — and I'll talk about it a little bit more later. Then there are two other areas where we are active. We monitor and work with governments and institutions, reviewing and helping them with policies that affect the ecosystem, and we promote and facilitate collaboration across communities and geographical borders. We also help review standards: we are members of standards-setting organizations, in the sense of making sure that when standards are set and adopted, they don't include provisions that would prevent implementing the standard in open source software. It's a really crucial piece of our activity that is not very visible, because those conversations happen behind closed doors, under a lot of secrecy. But I want to make sure that we keep that activity going, because the damage from a bad standard would be great. Quick reminder: the Open Source Definition — I like to describe it as a more practical wording of the four freedoms of the Free Software Foundation. Fundamentally, they're the same. But the Open Source Definition is a checklist, a little bit more practical, and the community uses it to evaluate licenses as they come in. It's really just a checklist. A couple of interesting points.
Let me highlight here the two clauses on non-discrimination: no discrimination against persons or groups, and no discrimination against fields of endeavor. These have been coming under stress, under attack, in recent years. And the reason they matter is that the Open Source Definition has enabled a frictionless flow of software over the internet, because it gives you guarantees you can rely on. Let me give you an example, in a very, very simple way. You download an application, or a piece of code, from the internet. That code carries a license identifier, and that identifier corresponds to a license whose terms apply to what you downloaded. If it's an OSI-approved license, you know that the code you're downloading grants you those rights — all of them. Without that, you must go to your legal department and have them check the license first to see what you're allowed to do. This is crucial, because in the past few years, more and more groups — corporations, independent individuals — have been telling us that the term open source is not legally tied to the Open Source Definition. "Well, you have no trademark." We may not have the trademark, but the thing is, once you start polluting the definition, polluting the meaning of the term — a meaning that, for example, this conference carries and understands very easily: open source is software released under a license approved by the Open Source Initiative — then you're creating doubt. You're creating friction inside the ecosystem. And you're creating trouble down the line. This ambiguity, this trouble that these groups have been creating, is dangerous. And we need to take really good care of defending the common understanding of what open source means.
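The "license identifier in the code" workflow described above can be sketched in a few lines. This is a minimal illustration, not OSI tooling: the hard-coded set of approved licenses is a hypothetical subset for the example, and real scanners handle many more tag formats.

```python
import re

# Illustrative subset of OSI-approved licenses, keyed by SPDX id.
# (Hypothetical hard-coded list for this sketch; the real list is much longer.)
OSI_APPROVED = {"MIT", "Apache-2.0", "GPL-3.0-only", "BSD-3-Clause", "MPL-2.0"}

# The de-facto convention for declaring a license inside a source file.
SPDX_TAG = re.compile(r"SPDX-License-Identifier:\s*([\w.+-]+)")

def declared_license(source_text: str):
    """Return the SPDX id declared in a source file, or None."""
    match = SPDX_TAG.search(source_text)
    return match.group(1) if match else None

def is_osi_approved(source_text: str) -> bool:
    """True if the file declares a license from our known-approved set."""
    return declared_license(source_text) in OSI_APPROVED

header = "# SPDX-License-Identifier: Apache-2.0\nprint('hello')\n"
print(declared_license(header))   # Apache-2.0
print(is_osi_approved(header))    # True
```

This is exactly the guarantee the talk describes: the identifier alone tells a downstream user what rights they have, with no trip to the legal department.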
And the reason for this is that the recognition that open source means what has been approved by the Open Source Initiative comes from a very wide understanding, even among policymakers. Top-level government organizations understand this and assign authority to the Open Source Initiative and to the list of licenses we have approved: the Italian government, the UK government, even the European Union — without saying it explicitly, because they can't. But when they developed a new license to use for all the software developed and released by the European Commission and the European Union institutions — the European Union Public Licence — they wanted us to approve it. And they celebrated when we approved it, delegating to us that recognition and that trust in our authority. So how are we maintaining the definition, how are we responding to these threats, and how are we maintaining this authority? By doing what we do, in a very humble way. We want to keep enforcing the definition to maintain honesty in advertising: when we notice corporations or groups that are creating new licenses and then, in their Git repositories or in their publicity material, calling the result open source, we go back to them nicely and say: please, be nice. Open source is not what you mean by this; call it something else, but be truthful. And luckily, we're also starting to see courts noticing this and reinforcing the meaning of the term open source as software under a license that has been approved by the Open Source Initiative. So we're going around monitoring this, being very, very active. And the other way we're doing this is through a project called ClearlyDefined.
ClearlyDefined is a repository of metadata that enriches the information available about individual packages inside the largest package managers — npm and PyPI come to mind — because many of these package managers don't strongly enforce license information inside packages. Individual developers can pretty much put anything in there, in any way or form; nobody checks whether the licenses are correctly applied inside the repositories, and the packages get distributed anyway. What ClearlyDefined does is provide an additional layer of information — crowdsourced, vetted and validated by the maintainers of the metadata repository — so that your CI/CD pipeline, your SBOM (software bill of materials) tooling, your license scanners can rely on this information with more confidence. And we are reinforcing this project: it's four years old, it's in production at Microsoft and Bloomberg, and Siemens and SAP are also big users of it. We are hiring a community manager for it, so if anybody is interested in working with us, let me know, reach out. The other activity we're doing to reinforce the understanding of licensing is to look at new technologies and at how they affect the ten criteria of the Open Source Definition. AI is one of the new technologies that is deeply changing how the concepts of data, software, and source code are intertwined and mixed, in ways that the software we had in mind when we wrote the Open Source Definition doesn't capture. So we launched this event, which is in three parts; it's ongoing, it's still not over. The first part is a podcast series of five episodes, with interviews with experts — I highly recommend listening to it — that will set the stage for the second part, which is happening next month: four panel discussions, with three to four experts on each panel, where we will dive a little bit deeper into aspects like the legal status of a model.
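ClearlyDefined exposes its definitions over a public HTTP API that tooling can query by package "coordinates". As a rough sketch of what a consumer looks like — the URL layout and the JSON shape below are assumptions from memory, so check the project's own documentation before relying on them:

```python
API_BASE = "https://api.clearlydefined.io/definitions"

def coordinates_url(pkg_type: str, provider: str, namespace: str,
                    name: str, revision: str) -> str:
    """Build the definitions URL for a package's coordinates.
    ClearlyDefined addresses packages as type/provider/namespace/name/revision;
    '-' conventionally stands for an empty namespace."""
    return f"{API_BASE}/{pkg_type}/{provider}/{namespace}/{name}/{revision}"

def declared_license(definition: dict):
    """Pull the crowd-vetted declared license out of a definition document."""
    return definition.get("licensed", {}).get("declared")

url = coordinates_url("npm", "npmjs", "-", "lodash", "4.17.21")
print(url)

# A trimmed example of the JSON shape such a call might return:
sample = {"licensed": {"declared": "MIT"}}
print(declared_license(sample))  # MIT
```

A license scanner or SBOM tool would fetch that URL (e.g. with `urllib.request`) and merge the declared license into its report, which is the "additional layer of information" described above.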
Since a model is most likely not covered by copyright, what regime does it fall under? How do you understand whether you can share it or modify it? And how do you modify it? This will be addressed from the business perspective, from the perspective of researchers in academia, and finally from the perspective of civil society — the ones who will be judged by the AI, whether we are deserving of a loan or not — and what rights we should be exercising. This comes from my experience from when open source started to become a thing, and we had an answer for the politicians who were starting to deploy digital systems to interact with citizens. If you wanted to apply for, say, an ID card, you needed to go to a website, and the website was only available to people using Microsoft Windows with Internet Explorer and ActiveX plugins. For that use of proprietary software to interact with citizens, we had an answer. We said: look, you have to be able to interact with citizens using open source software. Now, if someone is denied a loan by an AI, what do we tell policymakers they have to give the citizens? That's why we're doing Deep Dive: AI. On the policy front, we've been working in Europe for multiple years now, but we're also expanding our reach. We announced this week that we hired a U.S. policy director, Deb Bryant. She's a very well-known leader of open source projects, and she will be working a lot with institutions in the United States — local agencies and also federal agencies — to teach them about the value of compliance, the value of open source collaboration, and how to do open source. Because if there is one thing I've noticed in the 25 years of existence of the Open Source Initiative, it's that, as in many other areas of software, there is a tip of the spear, a top of the pyramid: a set of people who really deeply understand what's happening and why, and how to deploy, how to collaborate, and how to use licenses.
And at the bottom of the pyramid there are lots of people who have just barely heard of this. They come out of college, they have played with Apache software, they have played with Linux, but they have not really understood what the whole thing means, and nobody taught them. So we have some catching up to do, and we're going to be doing it thanks to the contributions, donations, and goodwill of Deb Bryant and of Simon Phipps in Europe. I was hired one year ago, and I'm the first executive director of the Open Source Initiative. This means that a lot of the processes inside the organization have to be created, and a lot of the programs have to be reinforced. These are the three big buckets for me in 2022: improving our outreach and advocacy, reinforcing the projects, and fixing the operations inside — all at the same time as building everything else. It's like fixing the plane while you're flying it. I'm happy about a lot of the things we've done. We managed to launch a new blog, and I encourage you to look at it. It's called Voices of Open Source, because the intention here is to work with the community and make it available as a platform for the community. So affiliate members — the affiliate organizations that are part of our network of friends — have access to it, and in fact we've been publishing interviews and stories about affiliates. But it's also available to partners, sponsors, and individual contributors, within the framework and the limitations of being a charity organization. We cannot advertise your products, but we can definitely talk about stories of how you deployed open source, how you train your new staff — new engineers, for example — how you teach them to use open source, or how your customers are enjoying the fact that you work with open source software.
And one thing I'm really happy about: we removed Google Analytics from our systems and went with a European company called Plausible. A new website is also under development — and you may say, what's taking you so long? Well, I ask myself the same; we have limitations, but we have a good team working on this. These are the focus areas. On advocacy and outreach I already spoke a little, mentioning the AI work. On the licenses front, I mentioned ClearlyDefined, but there is another project that is brewing, and I will announce it by the end of the year. We want to streamline and simplify how the core activity of the OSI gets done. The licenses right now are published as static web pages, and a lot of tools out there rely on web scraping to understand whether a license has been approved or not, which is really, really so 1991 — we should move on. So we have a plan of action, and we're doing two things. One is to put the list of licenses behind an API, so that tools like SPDX or Software Heritage can automatically check and deliver the information about whether a license is approved or not. The other thing is to validate that database, to make sure we haven't missed anything — because, as you may imagine, with lots of volunteers maintaining that list over the years, there might be one or two licenses that have been missed. So we're going to hire some contractors with legal knowledge to go through the minutes of the meetings and build the database. It's going to be a little painful and boring, but we'll do it. And the other thing we want to do is to create a reliable archive of those decisions, because the discussions and the knowledge from when a license was reviewed are now in a very opaque, I must say, and loosely connected archive of Mailman.
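To make the "licenses behind an API" idea concrete: the endpoint did not exist yet at the time of this talk, so the payload and field names below are entirely hypothetical, just a sketch of why a machine-readable list beats scraping static pages.

```python
import json

# Hypothetical JSON payload such an API might return; the schema is
# invented for illustration -- the OSI API described in the talk was
# not yet published.
payload = json.loads("""
[
  {"spdx_id": "MIT", "name": "MIT License", "osi_approved": true},
  {"spdx_id": "Apache-2.0", "name": "Apache License 2.0", "osi_approved": true},
  {"spdx_id": "SSPL-1.0", "name": "Server Side Public License", "osi_approved": false}
]
""")

def approval_index(licenses):
    """Index the list by SPDX id so tools like SPDX or Software Heritage
    could do constant-time approval lookups instead of scraping HTML."""
    return {entry["spdx_id"]: entry["osi_approved"] for entry in licenses}

index = approval_index(payload)
print(index["MIT"])       # True
print(index["SSPL-1.0"])  # False
```

The point of the design is that approval status becomes structured data a CI job can query, rather than prose on a web page that breaks every time the page layout changes.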
All the Pipermail archives hold the information about why a license was approved or why it was denied approval, and we need to build that kind of information back into the license text itself. That's a little more complicated to achieve, but it's also something we have sketched out, and we will start the conversation with the community — I'm hoping as soon as we're done with the AI project. Then, going into the future, we want to change how we review licenses. Right now, as I said, reviews happen on Mailman mailing lists. That's not exactly the right way to do it — or at least, it was the right way at the time, when we knew no better and didn't have sophisticated tools for it. But we do have them now: we have ways to annotate text and keep those annotations archived forever. The other area we're working on is the expansion of our policy programs, because of the activity related to security and the fact that open source software is everywhere. Finally, policymakers in Europe and the United States are waking up to the dependency. They need to understand what's happening and why, and we need to avoid chasing them at the last minute to stop them from, for example, banning the export of software from one country to another — which would shut down the collaboration that has been creating this wealth of software. And we'll do more. I have a lot of other ideas about how we want to engage with the members and with the community in general. Right now, as you may have noticed on the website, there aren't many ways to interact between members and staff, or just to collaborate; lots of the conversations happen outside, on Twitter, on different Slack channels, different IRC channels or Matrix channels. We just need a better way to interact with each other.
So it's also on my radar and my to-do list. On the training and education side, we have a partnership with Brandeis University in the United States to create a training program for project managers and program managers to understand open source. That training program has wonderful reviews, but not as many subscriptions, so we're working with Brandeis to refine the way we reach out to potential trainees, or rather to the corporations that might want to offer it as a training package. If you work at an organization and you would like to see your company offer more training to product managers and project managers about how to collaborate in open source, it's a great program, and I welcome your contacts — I can put you in touch with the university. I'll stop here and take questions from you. I wanted this to be more of an interaction: what do you want us to do? Are you convinced this is a good idea, or are we missing something? [Audience question, partly inaudible.] So, the question is about who we partnered with for the survey — you're mentioning the one we did last year; I don't remember the name now. It was a survey, done with a company, and we helped them refine the questions and distribute the results. Yes, that's a good question; in fact, we are in the process of reviewing the questionnaire now. So what's your suggestion — what would you like to see, the raw data? Yeah, right — of course, there is interpretation behind the presentation of the results. That's a good comment. I'll take it to the team and we'll definitely discuss it, thanks. And you're right that some of these numbers just have to be tracked over time.
So the interest is very much in what exactly ClearlyDefined will be delivering and the process it will follow — a roadmap of what's actually going to be delivered, a bit of an idea of what's happening. So: ClearlyDefined is already in production. It has APIs, and it has all the information from the consumption perspective, so you can look it up now. I think it has integrations with a bunch of code scanners — FOSSology, and I don't know which others — that you can plug it into. Canonical names for the licenses? That's a good question. Yeah, and canonical names for the artifacts, so you can recognize that a package someone is shipping actually corresponds to this one. I'm not sure, honestly; I'm not in the weeds with the users of ClearlyDefined, but that's something I can definitely bring back to the team and discuss, because in the end this is all driven by adoption. Right now I think it's satisfying the needs of big users like Microsoft and Bloomberg, but more participation and more comments are definitely worth bringing up. Thank you. Yes? Right — so, if I understand correctly, your comment is still referring to ClearlyDefined also carrying the information about the license and the seal of approval. Yeah, okay — definitely something that would be nice to have. The community of ClearlyDefined meets on Discord a lot — they use it very, very heavily — but they also have a presence on GitHub, so lots of conversations can happen there. I will definitely relay that to Carol Smith and the other leaders. Anybody doing AI here? No? Yeah — I do have some ideas. I spoke about AI yesterday, but I can give you the gist.
I think what's happening is that there are huge push and pull forces. On one side, research and academia: they've been working on these problems, playing with small data sets and small models, doing so many interesting things. And what's happening now is that all of a sudden there is a huge amount of data available, coming from corporations or from groups that are able to assemble these huge data sets. Huge data sets bring larger models; larger models bring better results, or faster results, and they're very exciting. This means these AI models start to become really good, really appealing — oh my God, let's put them in production. So they go into production, and they start damaging things at a very fast pace. At the same time, the people responsible for these large models and these large data sets are scared, they're afraid. My interpretation, reading the room from the outside, is that the fear of these models getting out of control is pushing them to release the code, but under licenses that say: you cannot use it to do harm. I'm being very blunt — it's a little more sophisticated than that — but they understand that the AI can do damage, so they're limiting the uses of their AI. And with these new licenses coming up, we're going to have more and more of those, because as the temptation to deploy more AI, to deliver more AI, grows, there will be more temptation to write such restrictions into contracts, into norms, and to litigate them in court. And we, as the Open Source Initiative, want to help them.
We want to help them understand the space, because if we end up with a huge field of licenses that you cannot really use — licenses that don't enable collaboration, don't enable sharing of information, sharing of weights, or whatever the pieces and the source code are for AI — if we don't have those freedoms defined and clarified, the field may slow down instead of making fast progress. So it's a way for us to reach out to other communities — and we're going to do that in the panels — and convene a conversation, so that we can help them. See, traditionally we haven't covered data; that was the realm of the open data movement, the open knowledge movement, the open access movement. And in fact that's one of the challenges of AI: it brings together software, knowledge, and data, and we need to think about all of that holistically — including how to respect what's in the data. The gap is very visible to me on the images front. I put my picture on Flickr a long time ago. Now that picture is being used to identify me on the street when I go shopping, or to a protest, or something. Did I give Clearview the right to use my picture and sell it as a model to the NSA? I have not — not explicitly. But at the same time, the EU has pretty much given blanket permission by defining a right to data mining. It's only available to nonprofits, for research purposes, but the data set can be built from anything that is available on the internet, unless someone says no. So everything from the past is available for data mining in Europe. Yeah — it's a new right, like when the EU established the right to databases. So, I only have one minute left. I would continue the conversation with much pleasure outside as well — over lunch or something.
Because I think that when the free software definition — and later the open source definition — came out, back in the '80s, software was a brand-new primitive, a brand-new artifact; it did not exist before. And the community of researchers — funny enough, at the AI lab at MIT — created the social norms first. They saw the danger of the privatization of the knowledge that was coming out of the labs and getting into private enterprises. So they created the GNU Manifesto: Richard Stallman wrote it first, then he wrote code, and then he wrote the license five years after. I think we are at the same stage now, where AI is escaping the labs and getting into production, into banks and governments and so on. And now we need to establish those norms: what is acceptable behavior, why is it acceptable, and what is the purpose of enabling sharing. With that — I've got a red light. Thank you very much for coming. I have stickers here if you want.