Hi, everyone. Thank you for coming today. This presentation is about building ethical AI: the power of open source and education. I will share with you our activities at Linux Foundation AI and at the Generative AI Commons, which is an organization within LF AI & Data. I would like this to be interactive, so if you have questions, please ask the best questions — whoever asks the best question will get a t-shirt. So pay attention.

I would like to share a few words about myself so you understand why I'm qualified to give this talk. I started my professional career after completing a PhD in computer science at Ben-Gurion University in Israel. After that I joined Amdocs and worked in product management for several years. After Amdocs, my corporate days, I founded two startups, one in the patents domain and the other in privacy compliance. Currently I do mostly consultancy, which means that if you need help, or if you're hiring for a full-time role, I'm also on the market.

In between all of that, I also worked on Linux Foundation AI. I was one of the founding members of this organization — we started it in 2018 — and I've been there since, from redlining the bylaws of the foundation to leading it as the Technical Advisory Council chair for a couple of years. Back in those days I started the MLOps committee and led it for a couple of years. We also started what we call the Trusted AI committee, which is focused on trusted and responsible AI. I joined that committee when we started it, in late 2018 or maybe 2019, and I'm still an active member. Lately, two or three months ago, we started the Generative AI Commons, and there I lead the Education and Outreach committee. The talk today will touch on all of these.
I will share with you what LF AI & Data is and what the Generative AI Commons is, and we'll finish with a call for action and participation. So let's start with what LF AI & Data is. Initially it was LF DL, the Deep Learning Foundation; then we changed it to LF AI; and then, when we integrated the data side, to LF AI & Data. It is, of course, a nonprofit — a Linux Foundation organization that hosts critical components of the global AI and data technology infrastructure. It brings together the world's top developers, end users, and vendors to identify and contribute to projects and initiatives that address industry challenges for the benefit of all participants. That's a very long description, but I will break it into more specific things.

In LF AI & Data we have the projects — currently 58 AI and data projects, open source, of course — and we have the governance bodies and committees. On the governance side we have the governing board, and under the governing board we have the outreach, legal, strategy, and budget committees. On the other side, which I think is more interesting, there is the Technical Advisory Council (TAC). The TAC meets every other week, approves the projects, and basically does all the work. Under the TAC we have several other committees: the latest one is the Generative AI Commons; the Trusted AI committee (both of them are pretty active, as I mentioned earlier); MLOps; MLSecOps, which is fairly new, about two years old; BI & AI; DataOps; and ML Workflow & Interop. So there is plenty of activity, and a community that is part of LF AI & Data can either contribute to specific open source projects or be active in those working groups. That is where I am.
When we started Linux Foundation AI, I worked on one project — the first project we started the foundation with — but now I don't work on projects; I personally only do committees. Some key stats: founded in March 2018 (I've been there since); nine active committees; 650 contributing organizations, of which maybe 10%, about 66, are members, across different membership tiers; 22.6 million lines of code across 58 projects; 30,000 active contributors, out of something like 100,000 contributors overall; and many GitHub stars.

These are the current projects we run on LF AI & Data, and you can see mine — this one, Acumos — was the first project. That project is now in what I would call the archive tier, and we also have the graduated, incubation, and sandbox tiers; you can see plenty of projects in each. This list was derived from the LF AI & Data Landscape, if you're familiar with it — an interactive tool that we "stole" (it's open source) from CNCF. It lets you see all the projects, play with them, and use different filters to see whatever you want to see. This is the QR code to the landscape; if you're interested, I encourage you to try it and play with it.

Let's move on. So far I have explained Linux Foundation AI in general; now I will go to Trusted AI and then to the Commons. So what is the Trusted AI committee? It is focused on everything trusted and responsible in AI. We do several things to promote trusted and responsible AI: we run webinars, we run a Linux Foundation Trusted AI Day, we are about to start a podcast, we write blogs, and we run all the technical integration between the different projects.
We also host several open source projects in the trusted domain. The current projects are the Adversarial Robustness Toolbox, AI Explainability 360, AI Fairness 360, and Intersectional Fairness. We also work on the AI software bill of materials (SBOM) — an open source activity to create a bill of materials for trusted AI implementations of open source projects, models, and the like. And this is the technical integration I mentioned. In the past we also worked on the Principles for Trusted AI; I don't remember when we published them, probably about four years ago.

In general, when we started the Trusted AI committee it was pretty hard to schedule meetings, because no one cared. What is trusted AI? What is responsible AI? What do you want? But in the past few months there has been so much interest that we moved from sometimes meeting once a month to meeting every two weeks or more, with a lot of activity and participation coming in from different organizations and projects: "Okay, help us. What do we need to do in order to be responsible, to be a good player in this domain?" So there is a lot of traction in the Trusted AI committee.

Let's talk a little about the Generative AI Commons. By the way, any questions so far? Yeah, go ahead. Yes — but no one really noticed or cared about it too much, and now, with ChatGPT and all the hallucinations and problems that we see, it's much more visible. That's why we get more traction and more activity. Okay, so the Generative AI Commons: this is an initiative that we started, I think, three months ago — a brand new initiative, still trying to understand what we are and why we are doing this.
Basically, the mission is to promote trustworthy generative AI through a community-driven, open-membership initiative represented by nonprofits, academia, and industry, in a neutral forum: Linux Foundation AI. That is our mission statement, but we decided to run this activity in four different work streams, with a leader for each, and each work stream is focused on different things.

The first one is Frameworks. This is maybe the original work stream, focused on trusted and responsible AI, so it collaborates quite a lot with the Trusted AI committee as well. They are working on something really interesting, the Model Openness Framework, which I will present in a little more detail in a minute. Then we have Applications: everything on the application side is another work stream, and you can see open source AI applications, all the vector databases, agents, and so forth. Next is Models and Data: everything associated with open source models and the data associated with the models. Initially these started separately, and then we realized they are really one thing, so we combined them. The last work stream, which is the best because it is mine, is Education and Outreach. What we do there is, again, promote responsible and trustworthy AI, educate the developer community and the general public, and promote everything associated with the Generative AI Commons to whatever its target audience is. We also work with legislative organizations to promote open source and open science in different countries and organizations — the European Union, for example.

I said I would say something about the Model Openness Framework, so here it is.
The Model Openness Framework is basically meant to reduce the confusion about what is "open": open source versus just open, or open science. What we do is take into account all the different steps and all the different elements in building an LLM or a generative AI solution, score them, and assign different tiers or levels — bronze, silver, and gold — which should represent how open they are. This is something we started working on very recently, a few weeks ago, and I think it will be pretty important for the community.

Okay, let's move on to Education and Outreach. What you see here is half of the team of the Education and Outreach committee. In one of our sessions I decided I wanted to do introductions within the team, so I asked everyone to send me three sentences about themselves, generated this with their pictures, and everyone presented themselves. So you see here, how many, 12? We have something like 25 active members. We meet every other week on Wednesday morning; all of us are volunteers, and we promote the things we believe we want to promote. It's an open source organization.

This is the target audience — an ordered, prioritized target audience, even. First and foremost we target developers and users. Then the media, because we believe that if we can convince the media of the importance of open source and open science and so forth, they will help us promote it with all the rest of the audience. We want to, and will, create content targeting the general public, because we want to teach them, to promote our ideas, and to make them understand why we are here and why they need to understand — I'll touch on that in a minute. And then all the rest: we also communicate with government bodies, and we have already started to put out some documents there as well. These are our goals. Yes, go ahead. Good question.
We have submitted two documents — one with NIST, and the other one I don't remember, but I have it here — and we didn't get any response. So I don't know; it will probably take time, and it really depends: if someone is excited about something and wants to do it, it happens; if not, it doesn't. This is a volunteer community, so it really depends on the members and what they are focused on. Yes, of course. Thank you.

So, the goals. As I mentioned earlier: promote an open-source-first GenAI mindset among developers — developers are the first target. Then promote responsible use of generative AI, which is also aimed at developers, as well as policymakers and regulatory bodies. Then the general public, which is where I am most passionate: teach the general public about generative AI in order to democratize it, and to give them an idea of the impact generative AI can make on their lives. All of us here know what generative AI is — we use ChatGPT — but there is a huge part of the population that doesn't know what ChatGPT is, has never seen it, and maybe doesn't even care, because they don't know. I personally believe this is important.

Here are some activities we have already started. We are creating a taxonomy, a glossary, of generative AI and trusted and responsible AI; this is work in progress, and we have a few drafts already. We are working on a State of Open Source Generative AI report, which will be an annual report — or at least we think it will be, if someone takes it on. We have also started to think about the developer community and how to approach it; we are starting to do some research, and maybe some of you will get questionnaires to answer.

Early in the presentation I told you that I'm currently a consultant; I don't work for any company that sponsors my activity here.
So why am I spending time on this, coming here from New York to present it? I have two good answers. I have three kids, and I believe this technology, generative AI and what we are talking about here, can bring us to two very different futures: one, on the left, which is very nice and where everyone is happy, and on the right, something less exciting. I personally am pretty concerned about generative AI — I think the risks are high — and I want to do two things. First, I want to educate myself: to be at the forefront of the technology and the information, and to understand the risks myself. My son, on the right, is a junior in high school, going to college in two years. I want to be able to help him select the right thing to study, because he wants to be a software developer. So what should he study in college — computer science, or maybe English, because to be a very good prompt engineer maybe you need better English? I don't know, but I want to be able to help him. Second, as I mentioned earlier, I want to educate the public, so everyone has some knowledge about this; I believe that will help us promote safer use of AI in general. If only a few very large companies are promoting things that may be bad for the entire society, that is something I want to take part in addressing, and not leave to someone else to solve. So this is why I am part of this activity.

And just to finish: as I mentioned a few times in this talk, I think this is important. This is an open source organization, which means we are open to contributions from anyone, and we would love you to join us and help with our activities. I will finish with this slide. On the left is a QR code for the Generative AI Commons' new website that we just launched; all the information you need to connect, join, and contribute is there. On the right is my LinkedIn profile — I'm happy to connect
with everyone, and if you want to join us and contribute and you don't find the information there, I will work with you on everything that needs to be done. Questions? Wait — I said it at the beginning: I have shirts; the best question gets shirts and stickers. Go ahead.

I come from the banking industry, and I kind of understand the mindset of regulators: they want somebody they can tell what to do and hold accountable, and the idea of "open" is somewhat contradictory to that, because they can't hold anybody accountable for adhering to regulations. So why don't you give us a little of your thinking on why open is better, why it's important to educate at this time when there's this much attention on it, and why getting support behind opening these tools up — for access, not just use — is a better approach than saying we'd better leave it in the hands of a few bodies that are heavily regulated by the government?

So, good question, and we had a long discussion about it yesterday in one of the sessions. There are arguments for both sides. I personally believe, as with every other technology we've had in the past, that open is better; that is my opinion, and I think we can maintain responsible use of AI even if we go open. Of course it will also democratize it and not leave the power in a few hands. But I agree, it's an interesting question. You get a t-shirt.

Hi. So we have a situation right now in general where college and high school professors are using things like Turnitin to detect AI — not so much because students are using AI, but because they want the kids to learn the actual subject matter. It's not so much an ethical question as an "are they going to learn it" question. Looking forward, when they're in the job world they're going to be expected to use AI, because a manager is going to say, "Hey, why aren't you using AI? It's much more efficient and you can save cost." It seems like that's
out of balance. What's the answer, ethically, to bridging the gap between an educational setting where you have to — you know what I'm saying — attest that you have taught the kids this thing, whereas they're not going to use it the same way when they get out into the job world? Do you want to answer? Go ahead — nice lead-in.

So I work for a museum in Seattle, and this is something we're confronting right now, because this museum is in the aviation industry and we have junior high and high schoolers coming through and doing pilot training. Because we use Turnitin, and Turnitin detects the AI, we've had to write some policy around it, saying: we understand and recognize that it's there, but for the intent and purpose of this particular course of study, you can't use it. And I went out and did a search — there are no fewer than half a dozen universities out there, Stanford, MIT, some others, that have had to look at their various courses and allow for that kind of study to come in. So I think what we're going to see is that the gap is going to get a lot shorter as kiddos come up and say, "Well, it's part of this now," and it's industries that are going to have to catch up with it. In academia, at least from what I see on the museum side, it's already happening.

Yeah, my kids use ChatGPT all the time, but it's new — there are no real guidelines for how to do it. I personally use ChatGPT for a lot of the work I do, and I think I'm so much more productive that it doesn't make sense not to use it. And by the way, many people that are not using it are staying behind, right? Organizations that don't use it are staying behind.

We still have the situation — and maybe I didn't hear the answer the right way — where in the future it won't matter if it's authentic or not, as long as it gets done. What kind of ethical eddy does that create in the current, you know? How does that little knot resolve itself over time?
So we've got a major issue. If I want to hire an engineer, right — if I'm trying to get just the best talent from around the world — I am doing my best not to signal-process for anything but the work. So at the point where I'm trying to hire the best, if I've invented a training program for them to go through, at the point when I unblind myself, the people that get passed — and they use this for MLH and a lot of these other things, a fully automated process — cannot take the next step unless they can stand in front of a camera, explain everything to us as individuals, and answer questions. There's a huge difference, right? And this is something I wish people would really pay attention to: it's not for replacing learning. There's no rational human being who's going to ChatGPT their way past something they need to know to exist safely in the world. And I think the last thing: if anyone is ever giving you a preventative argument against technology, that's usually approaching moralism, and you can remind them that when the printing press came out, a bunch of people said that books came from the devil, because children would have their faces in them all day. This is no different.

I don't have an answer yet, so we will see — we will have to learn and adapt to the future, right? You get a t-shirt, and you too, Sam. Any other questions? Thank you very much for joining, and please — stickers and t-shirts.