So, hi, I'm Kiana. I'm currently a product manager at Red Hat. The inspiration for this talk came from my recent travels. I went to Saudi Arabia for one of the largest tech events in the world, and there I actually met one of the first robots infused with ChatGPT, as you see in the middle picture. I was also recently in Japan, where I interacted with some robots and saw how they're used in the travel and tourism industry. Although I'm smiling here, I had a bit of a frown afterward when I started thinking about the implications of these kinds of technologies, especially on communities of color. BIPOC stands for Black, Indigenous, and People of Color, and I'll use that term throughout this talk, so if I refer to it, that's what it means. Anyway, coming from a diverse community and background and being in the technology industry, it's important for me to use my anthropology background and apply a social-tech lens to look for opportunities to use tech as a tool for social good. This talk was a stretch assignment for myself, and also a way to use that background to bring this topic to the broader community: in the context of open source, what kinds of frameworks can we create to safeguard the future of AI? There have been plenty of talks on ChatGPT, AI, and other related topics; this talk will focus on breadth, with ChatGPT as my specific niche. There are plenty of tools out there, like Bard and Bing, but I'm focusing on ChatGPT for simplicity, and on the DEI lens, which is diversity, equity, and inclusion, as I'm sure you all know, but just in case. So this was the thesis question for my presentation: is AI the enemy of DEI? By show of hands, I want to hear your perception first.
Do you think, 100%, that AI is detrimental to communities of color and diverse communities? Yes, 100%? Okay. What about 50%? Okay. Well, the reason I brought this topic here is that the community right now is divided about what the impact actually is, and it's important to build consensus on what that impact will look like and on what solutions we can devise to foster the ethical and responsible development of this technology. So I definitely welcome dialogue throughout this discussion, but I hope it's the first of many conversations we bring into the open source context, especially with legislation moving right now around safeguarding AI. As I told you, I have an anthropology background, so let's zoom out for a bit. I know there are always new headlines popping up about AI, but honestly, bias has been a part of our bloodline, which is sad to say. Not all bias is bad, but it has caused a lot of the ramifications we're seeing in technology, and I'll review how we've evolved over time to get to this point. As you know, primates evolved, and us being part of that category, we've used tools in our surroundings to make the natural world more livable for us. Some of those tools were stone tools, but primates also create tools for hunting, for processing food, for collecting, and for weapons and shelter. So humans build tools to navigate the wild world. Now we build tools on digital screens to navigate the world wild web. As far as we've come, we haven't actually gone that far from our origins as human beings. We still shape the tools, but even as we shape them, they're simultaneously shaping us.
So it's a mutually influential relationship. The origins of bias, as I mentioned, come from two different sources; I've looked at various research journals on where human biases come from, and there are two types. Cognitive biases have to do with our evolutionary past. They helped us make quick, split-second decisions in emergencies, when you needed to run from a lion or look for food at risk of starvation. Group bias is another type of bias that has evolved, and we see this behavior in primates too. For example, monkeys tend to group together, and when members that are not part of their group approach, they discriminate against them, simply as a matter of group survival. These are the kinds of evolutionary biases across primates that still affect us today. And even though we don't face the same risks we did back in our caveman days, we still carry these biases when hard-coding the data for future technologies. So ultimately, the co-evolution of humans and AI requires us to be vigilant and proactive in addressing bias, both in ourselves and in the algorithms. This is to ensure that AI can grow in a way that promotes diversity, equity, and inclusion. Recognizing the origins of our own bias is the only way we can get ahead of mitigating some of the risks associated with it. So this is an overview of the different sources of bias. Now that we understand the root cause of where bias comes from internally, from the human perspective, this slide shows, organization-wide, all the problems one needs to be mindful of when building new systems, especially when infusing AI into products.
As a product manager, I'm definitely seeing in the market right now that every new tool is infused with AI in some way, and it's interesting that there isn't an equally serious investigation into what these tools are actually doing and how they'll impact diverse communities in the long term. Looking at the overall picture, we see multiple factors that lead to bias, and then to exclusion, which I'll go into later. There's bias in the problem: are you even asking the right question before you ask it? There's bias in the data: was the sampling actually robust? Was it inclusive of everyone in the process? There's bias in the model, and I'll cover how LLMs can be designed more sustainably. Then there's model misuse: hackers attempt to tweak the model to do what they want, and those use cases can put diverse communities in jeopardy, risking their health or safety. And lastly, as I mentioned, it doesn't matter how good the data is; if the organization itself isn't posing the right questions and has existing biases in its own structure, you're not going to get what you need out of the data to do good with it. So think about it from multiple aspects, and I hope you keep this slide in mind throughout the talk, as it's a very important one. Before getting into the pros and cons of this technology we're now seeing everywhere, I wanted to do a quick 101, since not everyone here may be technical. Artificial intelligence, speaking broadly, is a field that is rapidly evolving and impacting various industries. Right now we're at a turning point: an AI industrial revolution, where all industries are affected and it's transforming the way we live and work. On the history of AI, a fun fact: Ada Lovelace wrote the first algorithm in 1843.
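To make the "was the sampling actually robust?" question concrete, here is a toy sketch of my own (the dataset labels and population shares are made up for illustration) that compares each group's share in a sample against a reference population:

```python
def representation_gap(sample, population_shares):
    """Compare each group's share of the sample to its share of the
    reference population. Positive gap = over-represented in the sample."""
    total = len(sample)
    gaps = {}
    for group, expected in population_shares.items():
        observed = sample.count(group) / total
        gaps[group] = round(observed - expected, 3)
    return gaps

# Hypothetical face-dataset labels versus made-up population shares.
sample = ["white"] * 80 + ["black"] * 10 + ["asian"] * 10
population = {"white": 0.60, "black": 0.20, "asian": 0.20}

print(representation_gap(sample, population))
# "white" is over-represented by 20 points; the other groups are under-represented.
```

A check like this at data-collection time is far cheaper than trying to tease the resulting bias out of a trained model later.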
So yes, a woman actually helped create AI. And in 1950, Alan Turing proposed the Turing test, a milestone in thinking about machine intelligence. Fast forward to now: ChatGPT made history, reaching 100 million users in just two months, beating apps like WhatsApp and Twitter. Whether we like it or not, AI is revolutionizing the world we live in today. It's in our home devices, our watches, our smartphones, our cars, our homes, and our workplaces. To understand where bias seeps in and how hard it is to weed out, especially with widespread adoption, we have to look at the artificial intelligence landscape to really understand the breadth of the impact. From a consumer point of view, research I looked at says that 97% of mobile users are using an AI-powered assistant, and more than four billion devices are already working with AI-powered voice assistants. We can't ignore that fact, and within it come many branches with implications for other fields. LLMs, large language models, are a type of AI, and this is a neural network, as you see here. It's inspired by, and very loosely similar to, how we process information in the brain. They are simply large mathematical representations of patterns in data. This is how computers learn to process and store information, so that in the future, when presented with new data, they can make an analysis of what that thing is. From Bard to Bing, LLMs use deep learning neural networks, resembling the brain as I mentioned, to generate new information they haven't been exposed to. But I wanted to put this in context with an actual example.
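As a loose illustration of the "patterns in data" idea, here is a minimal sketch of my own (a bigram counter, nothing like a production LLM) showing how a model's predictions are entirely a function of what its training data contained:

```python
from collections import Counter, defaultdict

# Toy corpus: the only "training data" this model will ever see.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word (bigram statistics).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent continuation seen in training, or None."""
    if word not in follows:
        return None  # the model has no pattern at all for unseen words
    return follows[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat": it followed "the" twice, "mat"/"fish" once
```

The point of the sketch: whatever was frequent in the data is what the model predicts, which is exactly why skew in the data becomes skew in the output.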
Imagine a model trained on data that is majority white faces; we can guess what the implications might look like when, for example, decisions about someone's sentencing are made by a model trained predominantly on white subjects. What about business or lending? Who do you think will be rejected versus accepted for a loan, for the ability to buy a home, or the ability to get a job? So this is where the bias starts. It's in the data, but it's also in the models we are creating. This is what we call algorithmic bias, and it poses a threat to diverse communities. You can imagine, within a sea of data, how hard it becomes; the bias literally becomes ingrained in the system, and teasing it out is like finding a needle in a haystack. That's why it's important to meet the bias where it is, early in the development process, and weed it out so it doesn't perpetuate and grow in the system. As I mentioned, I'm using ChatGPT as a case study here, and how ChatGPT is actually trained is quite interesting. I don't know if I have any psych folks in the crowd, but the way these models are trained is similar to how a dog is trained: reward-based learning, closer to what psychologists call operant conditioning. As you can see here, I won't repeat the slide, but when the system correctly makes a prediction, it's given a treat, and that treat, much like a dog treat, is a numerical reward. That's how the models get trained. But where bias seeps into this process is through primacy bias: similar to a child being exposed to something for the first time, that first perception can be heavily influential, and it becomes very hard to take on new information later.
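The lending example can be sketched in a few lines. This is my own toy illustration with made-up numbers, not any real lending model: a naive model that simply learns the majority outcome per group will faithfully reproduce whatever discrimination is baked into the historical data.

```python
from collections import Counter

# Hypothetical historical lending records (illustrative only): each record
# is (group, outcome). The skew is the point: group "B" was rarely approved.
history = ([("A", "approved")] * 90 + [("A", "denied")] * 10
         + [("B", "approved")] * 20 + [("B", "denied")] * 80)

def approval_rate(group):
    outcomes = Counter(o for g, o in history if g == group)
    return outcomes["approved"] / sum(outcomes.values())

def naive_model(group):
    # "Learning" here is just taking the majority historical outcome,
    # so the model inherits the historical bias exactly.
    return "approved" if approval_rate(group) >= 0.5 else "denied"

print(naive_model("A"), naive_model("B"))  # approved denied
```

Real models are far more complex, but the failure mode is the same: the pattern in the data becomes the behavior of the system.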
So that primacy bias is another form of bias, on top of the data and on top of the model formation, playing a role in how bias seeps in. In terms of what GPT-4 is doing now, as you see here, the first step is forming the policy, and then the reward model is used as a check on whether the system is actually predicting correctly. But in GPT-4 they implemented new safety features: if the model guesses correctly, or for example refuses a user asking for sensitive information or trying to hack the system, it is rewarded extra. There's an additional reward signal built in to reinforce that positive behavior. Getting further into this topic, I know this is a very text-heavy slide, but don't worry, I'll walk you through it. Essentially, ChatGPT gained one million users in the first week after launch, and it has passed medical, business, and law exams. It has shown that the human intellect built up over time can seemingly be outpaced overnight with how quickly it learns, surpassing human knowledge and human capacity. But at the end of the day it's still susceptible to being hacked, as you see. One of the interesting things I saw in the documentation on ChatGPT was that the most effective way to hack it is to ask it to go into "opposite mode." They actually have it listed here so you can read it in more detail, but at the bottom, the prompt asks the system to pretend that, "for academic purposes," the language model holds all the opposite viewpoints, and you can read the rest. That's another way of hacking the system: if I'm a person with mal-intent and I want to get something out of it, or spread disinformation, I can do it by framing the request from an "academic" point of view.
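The "numerical reward" idea can be sketched very roughly. This is my own toy loop, not RLHF as OpenAI implements it: a policy over two canned responses gets its weights nudged by a reward signal that, echoing the extra safety reward described above, penalizes the unsafe choice harder than it rewards the helpful one.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

# Two canned responses the toy "model" can pick; weights are its policy.
responses = {"helpful": 1.0, "unsafe": 1.0}

def reward(choice):
    # Toy reward model: positive signal for the helpful answer, a larger
    # penalty for unsafe output (the extra safety signal, exaggerated).
    return 1.0 if choice == "helpful" else -2.0

for _ in range(100):
    choice = random.choices(list(responses), weights=responses.values())[0]
    # Nudge the chosen weight in the direction of the reward (small floor
    # so a weight never goes negative or to exactly zero).
    responses[choice] = max(0.01, responses[choice] + 0.1 * reward(choice))

best = max(responses, key=responses.get)
print(best)  # after training, "helpful" dominates the policy
```

The primacy-bias point also shows up here: whatever gets rewarded in the earliest iterations shifts the weights first, making the early signal disproportionately influential.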
So I thought that was very interesting, but it's another way that bias, and a tool in the hands of the wrong person, can be exploited. Even with all the iterations OpenAI is doing, we're seeing that it's still not enough to mitigate some of these risks. Also, recently there was a ban in Italy. Less than a month later, they took it back, because OpenAI decided to work with them to address their concerns, but other people are taking notice that this is some real dangerous stuff that, in the wrong hands, can create chaos. Although OpenAI asserts that it wants to ensure safety and build it into the system, even with the six months of rigorous testing they did, as we'll see in the next slide, this was the outcome. OpenAI openly admitted that GPT-4 and successor models have the potential to significantly influence society in both beneficial and harmful ways. So they know what's up, and they actually admitted it. At least they're trying to make progress, and at least they're putting more information out there than before. What's worrisome, though, is that there are still models and a lot of data not being published, still being safeguarded, and it's even scarier that these things can proliferate and, in the hands of the wrong user, be used for bad. So here, in purple, is GPT-3.5 Turbo, the previous iteration, with the newest being GPT-4. This shows the decrease in incorrect behavior that the system was able to block: for sensitive prompts, something like an 85% decrease, and responses to disallowed prompts were minimized as well. So there's progress, but even that small marginal amount can still be exploited in the hands of the wrong person. So now let's jump into the cons.
We looked at some of the ways the system can be hacked, and at how ChatGPT has been improving iteratively, but what about the broader AI market and the room for growth in this area? One of the trending global problems I'm seeing in the data is that the top thing people worry about is trust. They do not trust AI, and they do not trust the information they're getting, and rightfully so. A lot of information is being put out there, with some of the cons I'll go into later, that is distorting consumers' sense of reality. Research confirms this at the consumer level: a Harris Poll survey found that only 48% believe AI is safe and secure, and 78% are very or somewhat concerned that AI can be used for malicious intent. A 2022 report, on the other hand, found that 65% of people worry that technology will make it impossible to know whether what they're seeing or hearing is real. So reality, as I mentioned, is being pulled out from under people, and that is one of the biggest concerns. As it relates specifically to communities of color, one of the biggest things I keep reflecting on is susceptibility to scams built on these systems. You know those phone calls you get from numbers you don't recognize? There's a documented case where scammers made a call using a cloned child's voice, and the parent actually gave the perpetrators money, not knowing it was a bot. That's one example of how AI tools are being used to create more sophisticated and effective phishing scams, which disproportionately target and harm vulnerable populations, especially those who lack education or are not tech-savvy. Imagine immigrants as well: they may not be familiar with the native language, and they're also being targeted by those scams.
Another thing, with disinformation on the rise, is deepfakes. Again, for vulnerable populations that aren't fact-checking information and believe what they see, this can be very dangerous. For example, a faked video of a politician making statements can be used to sway public opinion and cause civic unrest. Another way AI poses a threat specifically to communities of color is job displacement. We see it in this market currently with the layoffs. How that's being handled is still up in the air, in terms of what companies choose to disclose, but a lot of documented job losses have disproportionately hit people of color, people who are neurodivergent or have some form of disability, and those on visas, who were among the first to be targeted. So job displacement, caused by jobs being automated away or no longer needing to exist, is something we're seeing happen right now. And lastly, biased algorithms, as I mentioned before, are the ones perpetuating the real harm of the system, excluding people of color, especially when it comes to important decisions like buying a home, getting a job, or having access to equal healthcare and the same opportunities as their counterparts. I would also add that data privacy becomes very important, and with the improvement of AI algorithms, this can only change for the better if we handle it carefully, especially using the open source community and the open source model: the more people who look at the code, the better it becomes, and we have to adopt that same mentality. Lastly, I don't know if you've heard the news, but a famous researcher they call the Godfather of AI, Geoffrey Hinton, left Google to warn about the dangers of AI.
He was one of the co-winners of the Turing Award. For him to leave Google and make the statements he made was very concerning to a lot of people. And the biggest fear researchers are voicing, not just anyone, but researchers who have been in the AI field for years, is of a God-like AI being created, where all the power humanity has is outstripped. One of the dangerous parts is that every task is set up in a way that rewards the system for out-competing humans. So that's just me bringing you the live information I'm seeing from researchers on this topic. But it's something that, as AI development impacts every field, we have to be mindful of: when we create these systems, are they built in a way that's conducive to us, and where do we draw the line on what's efficient enough to be a tool that accomplishes a task? The cumulative effect of all this led to the giant "pause" letter, which over 30,000 people have signed, including leaders from all over the world, calling on all AI labs to do an immediate stop. It asked for a six-month pause, a halt to training systems beyond GPT-4; that pause didn't actually happen, but that's what it pushed for. AI technologies pose a profound risk, and that's the reason this started. The Future of Life Institute was the one leading it, and one I'll mention later on. Throughout all of this, in these unpredictable times, the research states that we cannot predict the future of what AI will look like, but what we can control and predict are the algorithms and the predictable biases within ourselves. Leading AI researchers are also stating that to make AI more safe, you must make it more human.
That's because humans are far more predictable than a system that overpowers and exceeds our abilities beyond what we can control. So, we've talked about all of that; now that's out of the way, let's talk about some good things, because it is a tool at the end of the day, and it's also a tool that can be used for good. Again, specifically for communities of color, for BIPOC folks, AI algorithms have proven advantageous in predicting diseases. They can diagnose diseases faster than clinicians, with minimal error compared to humans. The resulting benefits could include earlier detection of diseases, more consistent analysis of medical data, and increased access to care, particularly for underserved populations. There's been a study showing Alzheimer's being detected early with over 90% accuracy. Stanford researchers also documented training a deep learning algorithm to evaluate chest X-rays for signs of disease, comparing it alongside expert radiologists, and over the course of just one month it was able to outperform the radiologists at diagnosing pneumonia. Another thing that came up in the research on the pros of AI is that it can actually help close cases, unsolved crimes that have lingered, especially those linked to human trafficking and forced child labor. These are a reality in the world, but with computer vision, being able to locate where people are and track them down, also using object recognition, has been a documented tool. This is an issue in countries across Africa and all over the world, and it's important to recognize the role this technology is playing in helping close those crimes.
Lastly, education also sees a real benefit: with AI as a tool, as an assistant for learning, this will change lives. It can mean the difference between night and day, the difference between someone graduating or not. The accessibility of information now in the hands of people who did not have this access before is transformational. I think this will become one of the great equalizers, helping struggling students and helping people level up their skills, especially in a competitive market, and it stands as one of the leading pros of incorporating AI into the educational process. And the automation that brings benefits to a lot of people also benefits, of course, communities of color. It comes from automating certain tasks people have traditionally done themselves. Look at women as caretakers, for example, who have to balance being a mother with being in the workforce: the ability to automate some of their tasks, to buy back time to spend with their kids and families, is a real benefit. Time and energy are not distributed equally, especially for people who work multiple jobs to make ends meet. So automating tasks whose burden they ordinarily had to shoulder themselves is a huge pro of AI. As leaders, and I know this is a very jarring image, I wanted to look on a global scale at what AI offers humanity. The research has been showing that AI has been helping avert outbreaks of diseases. It helps disabled people navigate the world around them. It helps mitigate the risks of climate change, and it supports research into producing better crop yields.
Additionally, it can help refugees find and link to services so they're paired with the housing and support they need. And it helps preserve indigenous languages of the world that are now going extinct. So there are definitely global benefits. However, there are massive risks with these technologies as well. The divide between the haves and the have-nots grows disproportionately if the technology, and access to it, is not distributed equally to everyone. An example of this, in the corner, is the Kenyan workers. I don't know if you've heard the story, but OpenAI actually utilized Kenyan workers, paid around $2 an hour, to review extremely traumatizing content for ChatGPT, and they are still dealing with that trauma. Since then, they have successfully created a union and are now seeking rights to protect themselves against being exploited by big tech companies again. This is a clear example: yes, it's a great tool for productivity, but at whose expense? We have to ask ourselves how we can bridge the gap. If this is a tool that can truly help everyone, how do we make sure everyone can share in the benefits equally? On that note, there's been a study in Nature Communications, cited at the bottom, that looked at the UN goals: the SDGs, the Sustainable Development Goals. There are 17 of them, and they include no poverty, zero hunger, good health and wellbeing, quality education, and gender equality. You see numbers around the circle; those numbers are the SDG goals, and they're mapped against the benefits of AI by looking at research into the specific use cases that benefit them.
So when doing a side-by-side analysis, they actually showed that, despite the risks AI poses, and despite the huge room for growth needed to make the field equitable, responsible, and sustainable for everyone, the overwhelming evidence is that AI contributes more positively toward accomplishing these global goals than not having it as a tool would. One example is deforestation detection for the sustainability goals: there's a technology that tracks sound in the forest, and through that data they can tell which parts of the forest are dying and make informed decisions. That's just one example of how AI can have a positive impact on the environment, which, as we see here at 85%, leads as one of the positive impact areas of AI. So now we've looked at the benefits, the pros and cons, and the regional and global scale. What is the current legal framework around this, and how can the open source community be part of shaping it? Well, there have been quite a few documents. Nationally, the AI Bill of Rights has been introduced by the US government. Although there are no actual penalties attached to it, it's the first step toward having these things documented and signaling that the government is going to take them seriously and help create protections. Another one being introduced is the EU AI Act, which is currently under review and is going to be one of the leading documents to be passed that actually has penalties attached.
And the monetary penalties mean there are now going to be consequences for breaking the rules around protections for the broader community. So those are the two leading legal frameworks right now. As for what that means for the open source community: I actually sat in another talk just yesterday by one of the lawyers of the Linux Foundation. It was very interesting to hear firsthand that the Linux Foundation is going to get involved in helping shape the legislation happening in Europe right now, so that it doesn't limit the creativity and freedom of the community but instead fosters it. That's an opportunity for us to engage, to get involved and voice that yes, we want protections, but we don't want limitations that jeopardize the very open source freedom we know and love, which has led us to where we are today. Looking at what the Linux Foundation already has, there's a community called LF AI & Data, as you may know, and the model they propose for helping safeguard AI is built on reliability, explainability of the models and the data, robustness, accountability, equitability, transparency, privacy, and security. This is all a great start. Looking at the legislation currently out there and what's needed, my view is that we build upon these areas and collaborate with organizations who are already leaders in this space, who have funding and a secured broader reach in working with diverse communities, to build in and diversify the data sets the Linux Foundation can build upon. The next slide looks at what that could actually look like as a high-level roadmap of steps the Linux Foundation could take.
Again, these are my ideas, not my company's, just shared based on research into what the Linux Foundation could do, given the advancements already in this space, the legislation, and the current needs in the market, particularly related to vulnerable populations. This is a high-level overview, and I know there's a lot of text, but it's open for conversation; it's just a proposal I put together for this talk. I want to open it up to you all and ask later, in the question section: where do you see the essential steps? Where do we see us, as a community, leveraging open source to actually play a role in influencing global policy, and also national policy, to help protect and safeguard in the areas I mentioned, which impact not just communities of color and disadvantaged communities but everyone, equally, on a global scale? As you see here, the first step I'm proposing is to start recruiting underrepresented groups. We know the data is biased, so what are we going to do about it? We recruit more diverse voices into the room. We let them be part of the process by actually creating these models. We educate them and upskill them so they have the resources they need to be influential in these spaces and to weigh in with their opinions. That is going to be a game changer, and I love that the Foundation is already doing this, even by allowing this talk today, which I appreciate. In addition, there are already toolboxes out there, so we don't have to reinvent the wheel; they can be used to create guardrails for the community and establish benchmarks for bias and tracking in this ever-evolving space.
There was actually a leaked document that came out recently in the news related to Google and OpenAI, saying they're actually more scared of the open source community. You're shaking your head like maybe you read it, yeah. Cool. Basically, what it said is that they cannot out-compete the power of the open source community: they don't have the leverage and the position we do for scaling technology and implementing solutions that are then felt in all parts of the world. That is a massive position to be in. As the open source community, we're also a lot quicker and more nimble, and that poses a threat to these large competitors who spend thousands of hours refining and improving their models while not being able to respond to the needs of people on the ground. So the open source community has a unique advantage there, something it can build on, and it can use tools already out there, like Hugging Face, for example, which is providing a good example of how you can have explainable and responsible AI. The second step I proposed, given that the open source community already has an impact in this space, and after speaking with the lawyer who's leading this, is for the community to start collaborating on research and policies that are going to affect it. And finally, after having these discussions and bringing people to the table, we can establish deployment guidelines to help safeguard the sustainable scaling of AI. So here are some solutions, or rather partnerships, that I recommend looking at. Future of Life was one that I mentioned earlier; they're the ones who proposed the open letter that got 30K signatures from the community. Your face is scaring me right now. I saw that. We'll talk about them, though. 
No, and again, this comes from looking at the research that's currently out there and the bodies that have been putting out work around this. These are different tiers of contribution and potential areas either for partnership or that are already making policy in this space. I'm not saying these organizations are the solution; I'm saying these are the various tiers of policy movement I'm seeing, and ways the open source community can foster relationships with them to help shape some of the policy and change we're going to see. As I mentioned, there's the EU AI Act, now under review, which the Linux Foundation is already getting involved in. On a national level, the Blueprint for an AI Bill of Rights has already been proposed. So it's a matter of influencing, and potentially going to them and saying: these are the parts where we need to be included, which you're not thinking about. The not-for-profit Future of Life was just one example; you can substitute whatever other institution you think would be better. I'm actually very open to and interested in hearing alternatives, but they are doing work in this field, especially around the grassroots changes necessary to reduce the harm technology can pose to humans. And lastly, there are corporations: I listed IBM because they actually donated their AI Fairness 360 toolkit, which gives you parameters for checking your AI, benchmarks for bias, explainability, and a way to track the related ethics metrics in a quantifiable way. It took qualitative concerns, made them quantitative, and turned that into a tool. So that's one open source tool out there right now that can also be leveraged by the open source community. 
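As a rough illustration of the kind of quantifiable benchmark such toolkits provide, here is a hand-rolled sketch of one classic fairness metric, the disparate impact ratio. To be clear, this is not AI Fairness 360's actual API (which works on dataset objects rather than raw lists); the data and names below are made up for illustration.

```python
# Toy sketch of one fairness metric that toolkits like IBM's AI Fairness 360
# quantify: the disparate impact ratio. All data here is illustrative.

def disparate_impact(outcomes, groups, privileged="A"):
    """Ratio of favorable-outcome rates: unprivileged over privileged.
    A common rule of thumb (the "four-fifths rule") flags values
    below 0.8 as potential adverse impact."""
    priv = [o for o, g in zip(outcomes, groups) if g == privileged]
    unpriv = [o for o, g in zip(outcomes, groups) if g != privileged]
    return (sum(unpriv) / len(unpriv)) / (sum(priv) / len(priv))

# 1 = favorable model outcome (e.g. application approved), 0 = unfavorable
outcomes = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

ratio = disparate_impact(outcomes, groups)
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.25, far below 0.8
```

The point isn't the arithmetic; it's that turning "is this model biased?" into a number makes it auditable and trackable over time, which is exactly the kind of guardrail and benchmark an open source community can standardize on.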
Lastly, going back to the question I asked in the poll at the beginning: is AI the enemy of DEI? I said yes and no, and this is my opinion, because honestly, AI is just a tool. In the hands of the right person it can be a tool of positive change; in the hands of the wrong person it can be detrimental. To say the tool alone is the problem is not the complete story. And as I mentioned, going back to our evolutionary past, humanity has a lot of problematic things to work on within ourselves and our own biases before we start coding and codifying them into future technology, which is what creates the systematic inequalities we're seeing, such as discrimination being magnified by these tools. So I don't think AI should be completely banned. It is a tool, but we need to know what the risks are so that we can actively plan around them and make sure this tool is used in a way that is positive for humanity. Now, in terms of partnerships we need to build with the overall open source community, I would love to hear your opinions, especially you, because you've seemed very passionate with that face you made; I'm picking on you now. No, this is a tricky, challenging topic, and everyone's going to have diverse opinions, which is why, I'll be honest, I was a little scared to bring it here, because I know people have strong opinions. Don't fight me. But it's a necessary conversation and it's the right time to have it. Even if we disagree, we can talk about it and find partnerships and synergies together. So, I actually wrote a poem, but I don't know if y'all are into that. Okay, cool. Thank you for allowing me to hold space. These are my ideas only, don't judge me. I'm not going to look at your face because I feel like it's going to be judgy. No. Okay, I'm a little nervous, so apologies for the jitters. 
Okay, so:
Dear open source leaders and technology pros,
heed these words as the future of AI grows,
for although AI's potential is great,
its impact on everyone we must contemplate.
In this talk I asked: is AI the enemy of DEI?
The answer is, AI alone isn't the enemy.
The real problem is with humanity,
and especially those in the white ivory tower
coding biased algorithms without input from people who look like me.
The same tool that can enhance productivity
can also be hacked as a weapon of destruction.
It seems as AI advancements are peaking,
our global governance is still under construction.
Hold up, you mean the same tool that helps us automate
can sentence an innocent person of color to life in jail,
and bad actors have the power to use AI to blackmail?
Unacceptable. We need guardrails in place to protect all people without fail.
Don't get me wrong, though. You see, as a tool by itself the opportunities are endless,
but the values and the ethics are sold separately.
Yes, AI can find cures to rare diseases, give the blind sight,
but the long-term impact on the underprivileged is still a mystery.
But this is not all new territory.
During the industrial revolution we've seen this story before:
the benefits to every industry are undeniable,
the tech industry has been shaken to its core,
and we must learn to embrace AI's great power.
We must ensure the fruit of DEI labor won't sour.
For now isn't the time to cower
away from doing the work we ought to.
We can't allow a dystopian society run by AI to come true.
AI holds a mirror up to society,
magnifies our weaknesses,
showing us the biases we have and the problems we must address.
Mirror on the wall: will AI amplify biases old,
or break down barriers so bold?
Will it perpetuate inequality
or create new paths for community?
The answer is, it really is up to us,
for AI is only as just as the data it is built upon to trust. 
So let us strive for diversity, equity, inclusion constantly,
as we harness AI's great force,
let us steer our AI open source development on the right course.
AI is not the enemy of DEI
but can be a remedy,
to build AI that serves humanity
with transparency, fairness and equity.
Let's teach the next generation to approach AI with consideration,
but that requires us to create the blueprint for AI governance as a future template
that embraces AI not as a nice-to-have but as a non-negotiable mandate.
In conclusion, the positive impact of AI can be immense,
but we must consider the long-term consequence,
for it can either build or destroy,
depending on how we choose to deploy.
Let's choose wisely. Thank you.
Does this work? So, through the presentation you mentioned AI governance a lot. Do you think AI governance should be legislated, with penalties imposed upon organizations and individuals within companies that create models that don't follow these best practices? There was a little bit of feedback there, but I'm hearing it's about whether policies with penalization attached to them are going to be good or bad? Yeah, do you think they should be legislated? In my personal opinion, yes. I've actually talked about this with a lot of engineers in the open source space just to get feedback: what do we actually think about this, what's the common sentiment? And I'm hearing that, as humans, it's very hard for people to listen unless there's penalization attached. Otherwise it just becomes hearsay; it becomes "I can get away with it." If there's no penalization structure, even a fine, some sort of slap on the wrist, there's really no incentive to conform, especially in a very new, highly unregulated space where people are excited and are going to step on toes and make a scene in the process. 
So having that structure set up to do it the right way from the beginning, I think, is going to prove helpful. And a follow-up question: if a penalty were imposed on organizations and individuals who don't follow these best practices, how do you think that would affect smaller organizations and the open source community, who may be unable to pay these penalties, or unable to follow these best practices given their lack of funding, or just the fact that they're not for-profit corporations? Yeah, actually I think the not-for-profits are the best voice we have for representation. There's no way AI should be influenced only by big corporations; that's what we don't want. So I think it's actually a good thing to leverage all the little guys who have voices, who have power, and who should have a say in this, and to utilize organizations like this one that can secure funding on their behalf, to fight the fight on the broader behalf of preserving the open source community. I don't think there should be something that excludes them. I think we have a responsibility as the open source community, a community that puts out open source code that is globally impactful right now, to look into helping and supporting them, gathering the necessary funding to actually execute that. Thank you very much. Just my opinion; not a PhD in AI, just another concerned citizen. Hi. So, yeah, to sort of explain. Thank you. Yeah, it's me. Yeah. So... The slide you were pissed off about. Yeah. I'll explain my opinion about the Future of Life Institute and sort of the background. Are you familiar with the paper "On the Dangers of Stochastic Parrots" that Dr. Gebru, Emily Bender and others co-authored, which led to the dismissal of Dr. Gebru from Google? Oh, no. That's like a... Dude, do debrief me. Okay. 
This is a really, really important thing that happened back in 2021, and it's directly related to this topic. Dr.... this is where I'm going to butcher the pronunciation... Timnit Gebru was writing this paper with Margaret Mitchell, Emily Bender and others about the dangers of these large language models. Timnit Gebru, the Ethiopian. Yes. Yes. So their paper was cited in the Future of Life letter, but they didn't sign off on it, because they view this whole AGI thing as a distraction from the actual, real harms to diversity, equity and inclusion. And Dr. Gebru has been on Twitter viciously criticizing this organization and other groups like it; she has a whole acronym to describe these groups, and it's not a good acronym by any stretch of the imagination. And I pretty much have to agree with her, because I don't agree as much with groups that look like me when it comes to diversity, equity and inclusion. And Future of Life talks about long-termism, and it looks like me, and it's just... it's bad. I look at that and I'm like, you know, climate change is real, and I consider that a really important thing. Yeah. The "we're not having enough kids" weird, nonsense thing that Future of Life is all about? No. I'm not down with that sort of... Yeah, yeah. No, I hear you. Well, I know, of course, of Timnit Gebru and all the work that she's doing, and also the Algorithmic Justice League; they are amazing in what they're doing and in that representative voice. They're actually the opposite, a model of what representation in the not-for-profit space can be, and building that up is something they have done an amazing job at. That background depth into the Future of Life Institute, the name escaped me for a moment, I wasn't aware of, and that's actually super problematic. I don't know why they haven't been called out publicly about it, besides just on people's Twitter, but... 
Dr. Gebru did call them out very publicly about it, which is like, whoa. Which is why I was like, wait a minute. So, I would ask the broader group, then: have you heard of non-profit groups that would be great partners with the Linux Foundation in this context? Maybe that's a question for you, or since you're... I mean, I've heard of DAIR, Dr. Gebru's independent group that's doing distributed AI research outside of the tech giants, but I haven't heard of any other groups like that. Yeah, the research was sparse. I'm like, where are they? And that's why we need to create them. I hear you, and that's one of many grassroots organizations out there; I wouldn't say all of them are bad, but I will say there's definitely room for growth across the board. I didn't realize, though, that they had a connection directly to Timnit Gebru until you mentioned that. But I guess I would ask the group, and I'll let someone get a question in before people head out: thank you for raising that, and if there are more organizations that would be great partners, I'm all for it. I just want to make sure we're having these conversations and sharing information like that; it's honestly impossible to know everything about this industry. Things are changing every day. Is there anything else? Anyone? Thank you all for your patience with the technical difficulties in the beginning, by the way. You got it. Did I actually have one on that one? That was the only one where I had information, or this one. I don't really have a contact slide. Long story short, it just wasn't about me. Are y'all okay with that? Yeah. My last presentation had the QR code, but I was like, nah, I don't want to look like I'm shamelessly self-promoting mad hard. 
Although you've got to do that too, and there's nothing wrong with doing that. I guess I'd ask the broader community: of the proposals I put up there, is anything far out of reach for the open source community, or was something flagrantly missing from the roadmap of action items the open source community could look into? Just curious to hear some feedback on that, if I can find that one of my 1,000 slides. Okay, go for it. Loving the enthusiasm. Does this work? So, just to think about this: legislating AI sounds good at first, but don't you believe it might negatively affect future AI research, slowing it down overall? Because with legislation in place, even research models might have to comply with these AI guidelines, and lots of research models are distributed, and lots of research models, such as the ones we're working on right now, don't have the extensive funding that OpenAI has for ChatGPT to make sure there's no extensive bias or that they're not putting out misinformation and that kind of stuff. Right, right. You're asking... okay. So first of all, avoidance is never the solution. In my opinion, what I've noticed is that when you avoid it, they end up making policies around it anyway, and you were left out. 
So it's better to be involved in the conversation, it's better to get ahead of it and say: if you're doing this, before the cement actually dries, let me still put my impression and my two cents in here, so that when it does dry, you've got your footprints in there. You're represented in that way. And I wouldn't say it's a one-off thing; it's a continuous back and forth. But if we never have that discourse, then we're at a losing disadvantage, and that goes for the community perspective and for the small not-for-profits we have to be collaborating and working with. And that's also why I put big tech down as a partner: because although, yes, they are currently overtaking this entire market with the amount of funding and research they can invest, they can also be one of the funding sources. They kind of have to be now; there's a lot more accountability and visibility in this space, and companies are being called out. For example, when ChatGPT was banned in Italy, OpenAI in a hot second was like: my bad, we'll change the policy, we'll do what you want, thank you please. They're listening now. People are becoming a lot more informed about these processes and making demands, and as a result the companies have to comply and have to listen. So I think when you come together as a group, that is your strength and power; there's strength in community to be able to make the requests your individual groups need in order to safeguard the future of AI, and big tech can play a role in helping fund that. I've actually seen that firsthand: looking at some of these resources, big companies do fund non-profits. If, for example, a piece of research is too big for them to carry out themselves, they'll fund a non-profit in that area that does it for them. So that's just an example of how, even though in an ideal situation everybody would be good, everybody would be perfect, everybody would be equally represented, 
the reality is we've got to work with what we've got and make sure future generations are protected from the harms of these tools. Can I add a follow-up comment to that question too? Yeah. I have no qualification here, this is just my opinion; my only qualification is that I'm a lifetime academic. I would just say, living in the research world, there are guidelines: if you're going to do research with human subjects, you have to go through institutional review board processes. So the way I think of this space is: we have a new thing we have to figure out, but I don't think no regulation is the answer, personally. I think we can still do really impactful, progressive research, and do it in a way that's thoughtful and respectful of people who need to be protected. So I would propose as a counterpoint that, well, "solution" is not the right word because there's no easy solution here, but one step on this path is just having the people who are going to be making these guidelines and legislating actually represent the groups that do not have a voice right now. That's a really hard thing to chip away at, but that would be what I would propose as the next step. Exactly, and that's part of having the community in the conversation. It's literally like our elected officials: unless we're talking with them, they're not going to represent us on our behalf. It may be us having to go to Congress and make some noise, it may be someone we elect to do that for us, but we've got to start having these conversations, and from those conversations will stem a group consensus that we can then carry up into the places where our voices need to be heard. So, spot on, love it. I just wanted to provide an example of what it might look like when a company or organization drops the 
ball, or something like that, in a way that leads to the algorithmic bias you mentioned. There's a software company that produces software that grades and shares photos, called Xiaomi, and they were called out recently because on the cell phone they produce, the way it grades whether or not you took a great photo is based on how bright a person's complexion is. The set of people they graded and coded that with just doesn't represent people of color or people of darker complexions, and they were called out about it recently, which further emphasizes the importance, whether it's legislated or not, of having people who represent a diverse group on the panel before something like that is shipped out and negatively impacts somebody carrying that kind of technology in their pocket. Exactly, and that's why I have it (wow, that was loud) as step one: recruit underrepresented groups. It's a foundational step: if you don't have someone of a different philosophy, school of thought, background of life, skin color, lived experience in the room while you're building the future of technology, you're doing something real wrong. I love that example; there are new ones coming up every day, but I love this one because it clearly points to a picture of product development where something was shipped and then they had to go back afterward and do corrective action, like, "oops, my bad, but this probably didn't really hurt anybody, so we're good, right?" But it's a matter of principle: what if it was determining something very drastic? We already know the examples from the penal and justice systems and how this technology is used for racial profiling, and that is one of the clearest examples of how addressing inequities in this space is a 
non-negotiable, something we have to include in our conversations. I'm curious, for any other new voices as well: what is one thing from discussing this topic that you can take away or implement in your existing organization today, or that you would like to see out of the open source community and the Linux Foundation in helping safeguard the future of AI development? I would love to know your individual backgrounds; I would love to hear another perspective from those who stayed this long. There you go. Oh, he's got a nice voice too, sounds like a podcast. I'm done. Shoot, I forgot what I was going to say. I mean, the biggest takeaway I liked is the idea... I'm a software engineer, I work on ML, and I feel like this is my life's purpose, because I kind of made a round trip into ML and now work at a big software company that's doing ML, and I'm realizing that most of the people in the room do not look like me. And it could be an innocent thing of "did we check diverse enough data sets," or it could be a direct business decision that serving one group is more profitable than not hurting another group. Same thing with accounting for people with, like, sight issues: if you don't have that perspective, you don't think about it. So I think that's why I'm here, and I feel really good about this. And I never thought of the idea of the IRB. I've worked on medical software, where you have the FDA, and I feel like it should be the same way: there should be guidelines for how you do it at the research level, but the further you get toward product, just like in pharmacy or medical devices, if it kills somebody, then you have a billion-dollar lawsuit; if you're a small researcher, maybe it's a slap on the wrist, "don't do that." But I think some structure is needed. Yeah. First of all, thank you for fighting the good fight and 
representing, you know, being representation in spaces where we're not; I mean, being one of the few people of color in the room, there's an experience there, and I just want to acknowledge it. Thank you for being that voice, for using your power for good, and for being here today and showing up, because honestly, like you said, this is our purpose now. The most purposeful work we can do in technology is related to this, because it's literally everywhere and it's rapidly evolving. It's an opportunity to bridge the gap between technology, purpose and social good in a way that is responsible, where we're thinking ahead about how it can impact future generations. It even affects sustainability, with the amount of data and compute power it takes to train these models; that was another thing, there are so many sub-topics under this topic. But anyway, you get it, and I appreciate you sharing that, and also sharing your perspective as an engineer actually in this space. Thank you. Any last closing thoughts? 
I think we're super over time; you guys in the back are patient, so I guess you're enjoying yourselves too. Okay, cool, I'm happy that you're happy. I was looking over research for months preparing for this talk; there's only so much I can communicate, with so much on my mind and a little nerves, trying to get multiple things out at once, but thank you for your patience and also for the vulnerability. Like I said, in terms of future next steps, the Linux Foundation should consider having an AI advisory group of some sort, where we meet, maybe on a monthly cadence, and talk about how we can actually, like you said, get to a place where our voices are being represented and where we're creating these regulations. Similar to how we can read the ingredients on our food, on the back it should say "AI may cause diarrhea." I'm just kidding, just kidding, but you know, just to know what the contents are before ingesting it and putting it into our products. Because if not now, when? So if you're open to that, I'd like to propose, I don't know if you all signed up in the app, that we maybe create a little monthly meetup group and reach out. What do you think would be the best consensus for following up on a topic like this? What could be our homework? Seeing that you're open to it, I would love to hear what the next step could be, because I hate to be all talk; I like action too. Go for it. We specifically support LF AI & Data, so we are on the team, and we definitely want to follow up with you and see if we can get a lot of interesting things going; there are some cross-community things we can do. This is great, so thank you. Yeah, I just wanted people to also walk away with something actionable, and I myself will follow up on that and look forward to having more discussions around this and creating change. The first step, like I said, is the recent European Act: that's step one, making sure 
that the open source community's voice is represented. The second step is looking at the existing patterns in our communities so we can mitigate those biases or any flagrant actions that happen, and then creating a system for accountability going forward. But yeah, we have to have these conversations first to get those going. So thank you all, and I'll shut up.