Hello everyone, welcome. I'm super excited to be here. It's my first time at Open Source Summit Europe, so I don't know if there are any first-timers in the crowd here too, but yes, I love to see it. We're going to have a great time and I hope that you all take something away from today's session. Before we get started, let's get to know each other a bit. Where are we all coming from? Just shout it out. I'm from New York. Yes, Miami in the house. Hey, okay. Pakistan, okay. Okay, I love that. We have representation from all over the world here. Okay, uno. Now I know who to hit up for all the Spain recommendations. He's in the back. Get him. He's local to Spain, but awesome. Great to see that there are diverse minds here today. We definitely need more of your opinions to make sure that we're having this conversation around ethical AI, especially now more than ever. So now, pivoting to the topic of ethical AI. By show of hands, who here cares about the impact of AI on current and future generations? I'm assuming your attendance says yes. Okay, cool. Well, you're in the right room, but I also encourage you to invite your colleagues who may be on the fence about what ethical AI means, who may not understand the impacts, especially on marginalized communities. They are the ones that need to be here the most, because often in these circles we see echo chambers; we don't see a lot of representation, or the contrary opinion on whether these conversations need to be happening in the first place. But you're already here, and that's the first step, so congratulate yourself on that. Now let's get started. Welcome to Keanu's talk. As open source tech leaders, you have a unique opportunity and responsibility to ensure that AI is used in a way that promotes diversity, equity, and inclusion. Today, Keanu Berry from Red Hat will examine how the rise of AI can create opportunities but also perpetuate existing biases and systemic inequalities.
She will also explore how the open source community can address these challenges. Join us as we co-create solutions to promote responsible and ethical use of AI. Oh, that was my cue. I'm not even going to lie, I was ready for her to take over this whole talk. Anyone else have some anxiety around public speaking and wish they could just outsource that to a digital clone? Yeah, definitely me. Well, we can actually do that now. There's technology out there that allows you to make a replica of yourself, and now you can speak in any language and access markets and areas of the world that you wouldn't previously have access to, because imagine, in the span of a human lifetime, how many languages you can learn versus how many your clone can learn. So it's really exciting, but it's also troubling to know that mass harm and mass disinformation can be propagated in the wrong hands. So as exciting as this technology is, we also have to think about the opposite side: what does this mean for infringement on people's rights? We see an example of this in Hollywood right now, where the likenesses of actors are being replicated to be used without the actors' consent. This leads to unfair wages and similar issues that can also pose harms for those individuals. A 2022 report on trust and technology found that 65% of people worry that technology will make it impossible to know whether what people are seeing or hearing is real. I don't know if you all have experienced that, but I share this example to show that deepfakes alone are an issue that shows the red pill and blue pill situation of AI, where we need to make sure that no matter what pill we take, each reality is going to be one that is responsible and built with ethics in mind. So, a reflection on my why: the inspiration for this talk was the godmothers of AI, who have been warning us about AI. I'm not sure if any of you recognize any of the women in this picture.
These are extremely powerful women in this space who have contributed to ethical AI, and some have even put their careers on the line. Timnit Gebru is one, famously ousted after her paper on LLMs challenged a tech giant. That episode exposed how women need more protections, especially whistleblowers who are going to call for accountability on issues around ethical AI. So I'm here just to play my small part as a technical product manager and as a woman of color. I see the potential to innovate, but I also see the potential for harm. So my goal here is to use this platform, God bless you, to advocate for marginalized communities. And I'm grateful to the Linux Foundation and Red Hat for making a safe space to have these tough yet important conversations. I also just want to highlight that if you've ever felt like you have imposter syndrome in these conversations and rooms around ethical AI, or just AI in general, know that every walk of life is needed in these spaces. It doesn't matter if you have a technical background or not. I came from anthropology, and that's even more reason I should be here, because I'm thinking about things from a human lens. So I just want to break down that silo of thinking that you have to come from a technical school of thought to have value and an opinion in these spaces. So why should you care? AI bias has a wide range of consequences for society, our economy, our ecology, and our planet. Data privacy is one issue that affects everyone, regardless of your background. It's something we all have to face, related to policies that are prioritized for profit rather than for the actual security of the masses.
And algorithmic bias is another issue that we have to deal with, and it affects everyone regardless of your background, simply because lower trust in the products that are put out there means that these products will go unused, and unused, low-trust products just create sustainability implications for the environment: tools that nobody is actually using. Also, on the socioeconomic and equality level, we have to make sure that we're involving everyone in these conversations, but also in the products that we're building for them. If not, not only will people not adopt these tools, but we won't get the full range of the value of building technology that is truly for everyone. So this is the thesis of my talk: is AI the enemy of DEI? But I'm curious to hear your opinion. Hands up if you believe AI is 100% the enemy of diversity, equity, and inclusion. How many think that it depends? Okay. How many think AI is not at fault at all? Okay, I like that, rebellion. I will answer that later throughout my talk, but I just wanted to get a pulse check on what y'all think of this. So here's the agenda for today. My focus for this talk will be on breadth, not depth. There's a lot to cover, so I will try my best to get through everything, but I will also try to have poll checks throughout. Do try to have your phones out. This is definitely not a talk where your phone can be down. I want you to be engaged and chiming in any way you can. So, a quick AI 101 for those who come from a non-technical background. Think of AI as a nesting doll, where the outermost doll, AI, is an overarching field aiming to create intelligent machines. Machine learning is the doll inside of that: a subset of AI that focuses on developing algorithms to make predictions based on data, whether through statistical methods or other means.
And deep learning, finally, the innermost doll, is a subset of machine learning that employs deep neural networks for tasks that involve complex patterns and massive data sets. We see that with things like LLMs. So now, understanding AI on a societal level: we are currently at a turning point, an AI renaissance. AI is transforming the way that we live and work, similar to the Industrial Revolution. Fun fact: the history of AI began with a woman, and her name was Ada Lovelace. She wrote the first algorithm in 1843. Another key turning point in the history of AI was the Turing test in 1950. Fast forward to recently: ChatGPT has now made history by reaching 100 million users in just two months, beating WhatsApp and Twitter. Whether we like it or not, AI is omnipresent. It's in our home devices, our smartphones, our cars, our homes, our workplaces, and even in some people's bodies. AI is a vast field, and I see it kind of like a taxonomic branching tree, where at the base is human intelligence being replicated by computer systems. These intelligent, human-like tasks include things like learning, reasoning, problem-solving, perception, and natural language processing. So within the deep learning process that I mentioned earlier, LLMs are specialized in natural language understanding. From Bard to Bing, you may have interacted with LLMs and not even noticed. Really, they resemble the brain's processes to help handle large amounts of text data. The patterns and relationships that you see, the lines that are intertwined, resemble the brain, and they help with predicting things like the next word, or with generating new content. Think of it like Google Autocomplete, when the next word pops up. The problem with this, though, is that large language models like ChatGPT are now big enough that they've started to display startling, unpredictable behaviors.
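To make the autocomplete analogy concrete, here is a deliberately tiny sketch of next-word prediction. Real LLMs learn these patterns with deep neural networks over billions of parameters, not simple counts; the toy corpus and counting approach here are purely illustrative of the idea that the model predicts the next word from patterns in text it has seen.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for the massive text data an LLM trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words tend to follow it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word, autocomplete-style."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # "cat", since "cat" follows "the" most often here
```

The same bias lesson applies even at this scale: the prediction is only ever as good, or as skewed, as the text it was counted from.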
I don't know if anyone has heard, for example, about the Google chatbot that taught itself Bengali, and they had no idea how that even happened. That's just an example of, imagine actually building something for a specific purpose and then it ends up self-teaching something entirely different. That's kind of scary if you think of it in terms of the harms that can ensue. But in this case, knowing another language is definitely on the positive side. One checkpoint, though, on the topic of LLMs: bias introduced during model creation is like a needle in a haystack. It's very difficult to weed the bias out, especially with the weights that are introduced to the neural networks, which influence what the outputs will be. These are various ways that bias can be introduced in the process. So, looking further into how bias is introduced, let's think about it on a much larger scale. Studies show women account for, well, what percentage of women do you actually think is currently represented in the AI industry? If you were to guess. Shout out a number. Huh? It feels like that. I heard 20. Okay. Well, I love that we lowballed it, because we do need more representation, but it's actually 22%. Which, I'm like, I need to go back and check that stat, because I feel like it's lower. But I did actually check, and the studies show that it's 22%. Not enough. And it's no secret that the AI industry is currently dominated by, no offense to anyone in the crowd, white and Asian men, which can lead to a lack of diversity and inclusivity in the development of AI technologies. There's no problem with recognizing the need to grow and diversify the industry; we just need to be intentional about it. So now let's zoom all the way out and reflect on biases on a human level, from the origins of the beginning of time. AI bias can be traced back to our ancestry as humans.
Primates, for example, created tools in their environment to make daily life easier, for things such as hunting, collecting water, weapons, and shelter. Humans built tools to navigate the wild world, and now we build tools on digital screens to navigate the World Wide Web. As we shape our tools, our tools shape us in the process, and AI is no different. The tool shapes the way we think, behave, interact, and even show up in the world. Our evolutionary past is rife with cognitive biases, and these get baked into the technology that we build. One example of such biases is in-group bias, which is exhibited by primates. It leads to discrimination where any new members of a particular group are, mean-girl-syndrome style, excluded purposely for the safety of the group. This is a behavior exhibited by primates, and just an example of how biases that had evolutionary purposes now find their way into how we show up in the world as humans and how we build technology. Now, understanding our past and our evolutionary history, and how LLMs can be one step in the process of how biases get baked in, let's look at the holistic picture and understand how the different layers of bias manifest in production. We see a domino effect here: biased data leads to a time bomb that is only going to manifest itself later in production, when the user is actually interacting with the product. We want to make sure that we mitigate it in this process and have QA every step of the way: from the problem formulation, to the data, to the model that we choose, and even in the organization. These are just some of the things one can be mindful of. Another checkpoint for bias in the training process is, I'm not sure if you've heard of it, the teacher-student dynamic in the model training process. Do I have any engineers in here, by the way? Oh, let's go. Okay, so if I'm saying something crazy, please do correct me. But I did research this heavily.
But anyways, so, ooh, some clapping. So, for bias in the training process: reinforcement learning is one way of helping train a supervised policy. What you see here is the ChatGPT training process, taken directly from their docs, by the way. The way they do it, simplifying, is that you have a teacher bot and a learner bot. The algorithm's goal is to maximize reward for a particular outcome. The labeler, acting like a teacher, demonstrates a desired output behavior. The student then goes and learns, and the teacher tests against it to see how well it performs. When the student actually does it correctly, it's given a reward, and instead of a good grade or a star or a sticker or a "good job," the reward is numerical. I found this interesting because it works very similarly to operant conditioning in psychology, where, similar to how we train our pets or our children, we give them a reward when they do something well so that the behavior is reinforced. That's the same way algorithms are trained to create policies that help mitigate against harms, or move toward any desired outcome. So what about bias from the end user? We've looked at it from the model process, from the LLM process, and from the organizational structure. But with tools in the wrong hands, you're always going to have bad actors who jailbreak software, from phishing to extremism. In green, we see a significant decrease in disallowed behavior in GPT-4, by 82%. What this means is that the GPT-4 model introduced a billion more parameters, and with that came more stringent rules around what was going to be allowed in terms of prompts.
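The teacher-student reward loop described above can be sketched in a few lines. This is a toy illustration of reward maximization, not ChatGPT's actual RLHF pipeline (which uses a learned reward model and policy gradients over a neural network); the action names and update rule here are hypothetical simplifications.

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

# The "teacher" scores outputs with a numerical reward:
# the desired behavior gets 1.0, anything else gets 0.0.
def teacher_reward(action):
    return 1.0 if action == "polite" else 0.0

actions = ["polite", "rude"]
scores = {a: 0.0 for a in actions}  # the student's learned preferences
learning_rate = 0.1

for _ in range(1000):
    # The student mostly repeats its best-known action,
    # but sometimes explores at random.
    if random.random() < 0.2:
        action = random.choice(actions)
    else:
        action = max(scores, key=scores.get)
    # The teacher grades the attempt; the student nudges its
    # preference for that action toward the reward it received.
    reward = teacher_reward(action)
    scores[action] += learning_rate * (reward - scores[action])

print(max(scores, key=scores.get))  # the rewarded behavior wins out: "polite"
```

The point of the analogy: nothing in the loop knows what "polite" means. The policy simply drifts toward whatever the reward signal (i.e., the human labelers) happens to favor, which is exactly how labeler bias can be baked into the trained model.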
Because before, you could actually reverse-engineer the prompt and ask it to undo some of its training, so that you could get it to give you a recipe for a bomb, make a biological weapon, or use research journals to reverse-engineer and make a disease on purpose. It was crazy what you could do with it, but GPT-4 has now corrected a lot of that, which I thought was quite interesting to see. However, I'm not sure if any of you heard recently, Stanford did a study showing that GPT-4 is actually turning out to be dumber than it was before. So there's a trade-off one has to balance between bias mitigation and the actual functionality of the product. So now, I just want to hear your depictions of ethical AI before I actually give the definition. If everyone could take out their phones and scan the QR code, let's make a little word cloud just to see. I think that, especially as a collective community, we all need to be continuously defining and questioning ourselves: what is our definition of ethical AI? It can constantly be changing, but as long as we strive toward one consistent goal of what that means for us, it's something we can work toward and create policies around. So while that's loading, I'll give it a second and, actually, what I'll do is present it at the end. So definitely put in your ideas and we'll look at it together at the end. But before I just tell you, do think of it on your own. I'll give you one more second. So: ethical AI is artificial intelligence that adheres to well-defined ethical guidelines. What people think is that there's actually legislation attached to it; there isn't any legislation it has to adhere to. Ethical AI, rather, is about fundamental values such as human rights and privacy, but it is not limited to what's permissible by law. That's the key clarification I wanted to make.
Now, understanding the ethics: what are the cases of AI ethics gone wrong? On a high level, any AI detection tools that are used to reproduce racism and class inequality, leading to discriminatory outcomes from health to hiring, to lending, education, and law enforcement, are all indicative of technologies being exploited for bad. The biggest cons with AI are trust, disinformation, data privacy, and the weaponization of data to exacerbate inequities in human populations. I interviewed ChatGPT to see its awareness of its own flaws around targeting communities of color, and the output was pretty good. I asked in what ways it could be exploited to target communities of color. The first thing I got was that it could be used to hack vulnerable individuals who are less tech-savvy, including those who may not have English as their native language, the elderly, and young teens. It also admitted that deepfakes were a problem, especially for those who are vulnerable and look to the news for information about politicians. This is something that can be used as a tool to sway public opinion and cause civil unrest, as we've seen play out; I'm not going to say the name. Job displacement was another one: automation can have a disproportionate impact on lower-income and less-educated workers, as we're all aware. And finally, it mentioned biased algorithms, such as facial recognition algorithms that have been found to have higher error rates, particularly for those with darker skin tones. So I thought it was a pretty solid response from ChatGPT, having awareness of how it can be exploited. I would add, though, that data privacy is also extremely important as a con, and folks who are in cybersecurity, which I know some of you are, can attest to that as well. So now we've looked at some of the high-level reviews.
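The mechanics behind discriminatory outcomes like the hiring and lending examples above can be shown with a deliberately minimal sketch. The data and the "model" here are hypothetical toys, nothing like a production system, but they illustrate the domino effect mentioned earlier: skew in the historical data resurfaces, untouched, as skew in the automated decisions.

```python
from collections import Counter

# Hypothetical, skewed historical decisions (toy data for illustration):
# one group was mostly approved, the other mostly denied.
training_data = [
    ("group_a", "approve"), ("group_a", "approve"), ("group_a", "approve"),
    ("group_a", "deny"),
    ("group_b", "deny"), ("group_b", "deny"), ("group_b", "deny"),
    ("group_b", "approve"),
]

def fit(data):
    """A naive 'model' that just learns the majority outcome per group."""
    by_group = {}
    for group, outcome in data:
        by_group.setdefault(group, []).append(outcome)
    # The learned policy: repeat whatever was done most often in the past.
    return {g: Counter(results).most_common(1)[0][0]
            for g, results in by_group.items()}

model = fit(training_data)
print(model)  # {'group_a': 'approve', 'group_b': 'deny'}
```

No one wrote a discriminatory rule here; the bias rode in on the data. That is why QA on the data, not just the model code, is part of every responsible AI checklist.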
Here are some actual recent headlines, because I'm in the New York area and I'm involved in a lot of ethical AI groups there. I have two related to New York in particular. Robotic police dogs are one of the most problematic introductions, and what our taxpayer dollars are going to right now; they are now patrolling the city. Imagine what that means when there's no observability into the data these dogs are being trained on, and how they can be weaponized to target communities of color. Imagine, you know, you're walking home alone at night and you just have this dog come up on you. It's not going to pan out well, especially for communities of color, who are disproportionately chosen in a lineup between a fair-skinned person and a darker-skinned person. So it was one of the most shocking introductions. And then there's what happened recently in Detroit: Porcha Woodruff, an eight-months-pregnant woman, had six cops show up to arrest her, and she was detained in a jail cell for 11 hours, all because an AI-driven software program mistakenly matched her to video footage of a carjacking. Imagine if that was a family member of yours. That could have easily been a miscarriage, and there's no excuse for the haphazard judgment of the police officers, but it also goes to show you how overestimating and overtrusting AI is problematic in real time. Further to this, I did more digging and found that the NYPD actually signed an $8 million contract with an AI company that will now monitor online behavior. So every social media post you ever put up will now be used for tracking and predicting your future behavior and projected future crimes. So, you know, every selfie you ever took, you may start looking like a suspect; that's basically how they're thinking and how they're acting.
And the last one you can read on your own. So yes, the biggest threat that we're seeing, in addition to the ones I've mentioned, is bad AI leading to the extinction of the human race if we don't put the necessary checks in place. We've heard from the godfather of AI, Geoffrey Hinton, for example, who left Google warning about the dangers of AI. But where do we draw the line between creating a tool and a monster? This godlike AI is something that even led to the "Pause Giant AI Experiments" open letter. I'm not sure if you've heard about it; 30,000 people signed it, Elon Musk famously being one of them, although his intentions were definitely dubious. Whether you're looking to regulate competition or genuinely care about the societal impacts is up for debate. And since then, the letter has been heavily criticized. No action really ensued from it, but instead there's a new letter going around that the CEOs of top tech companies have actually signed, in addition to the research labs that are funding it. So this one's looking more promising, but what will ensue from it is still up for debate. One thing is clear: big tech CEOs definitely want to get ahead of these regulation conversations, for obvious reasons. And we want to make sure, especially as open source leaders, that we're included and represented in these conversations as well. So now we've looked at the negatives. Let's reflect on some of the good news. What can communities of color especially gain from the societal impacts of AI? Well, AI is a powerful tool. It helps bridge gaps in humanity's knowledge and can lead to new discoveries and breakthroughs that were previously inaccessible to humans, whether through lack of time, lack of capital, or lack of resources.
And now having the opportunity, for example, to analyze tons of data, as with human trafficking. One of the... Oops. For human trafficking, computer vision is often utilized to help detect, through public cameras in various countries, where individuals who are being trafficked are being moved. Especially for Black women, even right here in the U.S., who are disproportionately being trafficked, this can be a real benefit: tracking their whereabouts and having a chance at rescuing them. And of course also in countries across Africa and Asia, where we see human trafficking numbers at all-time highs, now more than ever, especially during the pandemic. Computer vision is one type of AI that can help in this regard. But also, looking more broadly, education is something we now see being made possible by the accessibility of information. Whereas before, people had to pay for a tutor, which many in communities of color cannot afford. Being a tutor myself, I've seen how having information, having one-on-one support, can mean the difference between someone graduating or failing and carrying lifelong crippling debt. So this is definitely a huge game changer: having accessible information, having a personal tutor, to help communities of color uplift themselves and their families out of poverty, and to help AI be a great equalizer. And the same way that productivity benefits all of us, it also benefits the community. Black women, for example, are one of the largest entrepreneurial groups in the country, and also one of the most highly educated, but often they don't have the time, energy, or funding to be able to dedicate to their businesses, so a lot of those businesses just end up going under and tanking. So, on that: I just came back from an event in Atlanta.
It was about 20,000 people of color, entrepreneurs. The biggest thing I saw was that there's definitely an interest in upskilling, learning about AI, and leveraging it in entrepreneurship. So that's one sign of hope: we're not just aware of the biases that are there, we're also trying to use AI to our advantage, and we're not getting left out of the conversation. So that's that. Now, more headlines on the positive side. One of the biggest breakthroughs AI offers is lung cancer detection, which has proven to be among the more accurate applications and is helping relieve shortages of healthcare professionals. Pneumonia detection has been another benefit on the healthcare side. Additionally, looking at natural disasters: floods are the most common type of disaster, affecting 250 million people globally each year and causing $10 billion in economic damage. We were just talking earlier about Pakistan and the massive flooding going on there. Now AI can be a tool to help identify which areas are going to be most affected, so people can prepare, and to help save lives in real time by knowing which areas will be impacted. And then there's using AI to help with blindness and low vision. There's a project called Be My Eyes that helps those with low vision access sight through pairing with individuals who can see. That's just one of the ways Google is innovating at the moment, to speak to some of the benefits as well. So, we've looked at it from the domestic level. What about the global level? We should take a global view, especially as open source leaders, and think about which countries are often not in the room for these conversations. We often see the U.S. as a big player; we rarely see the Global South.
The pros of AI for humanity in a global context range from helping avert disease outbreaks, to helping the disabled navigate the world around them, to yielding better crops. One of the sustainability benefits AI has offered is through the detection of sounds in the forest: you can actually track where deforestation is happening by hearing where the birds are. That's one of the ways AI is helping sustainability efforts. In addition, as you see with the Indigenous women here, there have been real benefits in helping store the knowledge that is often passed down through oral tradition and gets lost when members of the family die. Now, through AI, there's a way to help preserve these Indigenous languages, along with the culture and history surrounding this information. That's just an example of the benefits AI can offer. However, we also have to look at the opposite, contrarian side. I'm not sure if any of you heard about the Kenyan workers who were used to clean OpenAI's dirty data. They were paid cents on the dollar and were traumatized by a lot of the content they saw; many of them can't even afford to go to therapy. As you see, there's a cost to AI, and we need to be realistic about what that cost is and be aware of how we can advocate for those who don't have voices, or who have less privilege than we do, especially in other countries. I looked at a study that examined the negative and positive impacts of AI across the board, measured against the UN goals. Some of those UN goals are things like no poverty, zero hunger, good health and well-being, and quality education. The good news is that the positive impacts actually outweigh the negative impacts against the UN goals I mentioned. That's good news for us, but it also means we still have a lot of work to do.
Now that we know the good and bad of AI, how do we keep the balance, or tip the scale in our favor, in the favor of equilibrium? There are various ethical frameworks now rising in response to the threats posed by AI. I'm not sure if all of you have been following the EU AI Act. It's definitely one of the most promising pieces of legislation, especially because it has penalization attached to it. And on the open source front, Hugging Face, GitHub, and Creative Commons all published a position paper supporting open source and open science in the EU AI Act. This is good, because they help represent the open source perspective as well. Often we're seeing the EU AI Act favor corporations over the open source voice. That's why it's very important that we give our input and actively involve ourselves in those conversations, to make sure we're representing the open source community's needs. Also, for updates on the US side: the Biden-Harris Administration secured voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI. Ours was one of them, I'm happy to report. Additionally, there's the White House Blueprint for an AI Bill of Rights that has been put in place. But again, all of these things, at least from the US standpoint, do not have penalization attached to them. And you can only imagine: having all the rules in the world is not necessarily going to change behavior, especially if there's no slap on the wrist attached when harms are committed and no incentive to change the behavior. So we'll see how that plays out for the US. The EU is actually being a lot braver about it, attaching penalization and fines if these rules are broken.
So, I don't know if any of you knew, but there's actually a community called LF AI & Data within the Linux Foundation and the open source community. When I gave this talk in Vancouver, I was surprised that people didn't know, and I'm like, okay, well, this is a great opportunity that we already have. They have principles that they propose for trusted AI, and these are some of them. I think this is a great starting point that we can build off of, and it definitely shows the opportunity that's ripe for the community to get involved in. We already have all these resources available to us; we just need to access them and keep this conversation going. So, similar to how governments regulate and scrutinize medical products, wow, this is a lot of words on the screen, my bad, we also have to scrutinize our AI products, similar to the processes set in place for medical products. This is my proposal of what an AI governance roadmap could look like for the open source community, based on the data I've consumed and some of the resources and research I've been poring over, to see what the open source community could benefit most from. Feel free to look at it more, and we'll move on to the next slide. I'd also like to hear your thoughts on what you would propose for the open source community to resolve the harms I mentioned. How could we strike the balance between open source innovation and guardrails that help steward ethical AI development? Another thing I recommend: I mentioned earlier in the talk that there are multiple ways of bias seeping in, from model creation to the organizational level. Looking at it from a holistic view, we can actually build a resolution around embedding ethical AI processes in the open source model. We can just leverage the frameworks that already exist out there; we don't have to reinvent the wheel.
I think we just leverage partnerships. As I mentioned, on the global level we can use the EU AI Act, help advocate, and help structure it so that we don't get left out of the conversation. On a US level we can leverage the Blueprint for an AI Bill of Rights and build off of that. On a nonprofit level, communities like PyTorch and Hugging Face, which I mentioned, are all great places to start from, and Mozilla AI as well; these are great nonprofits that are seriously investing in this area, actually making sure that we're stewarding ethical AI development and putting our money where our mouths are, which is the most important thing in order to steward good practices in AI. On a corporate level, IBM is also doing a lot in this space, and in 2016 they partnered with NASA to help democratize research information that was previously inaccessible, making it available in a way that everyone can access. I'm curious though, one last question for everyone: what do ethical AI solutions look like to you? Just take out your phone and rate on a scale; I have some options already listed for you. I just want to hear what the community thinks is the best approach for having balance within the open source community, and I'll get to that later. All in all, tech requires a global approach and a global lens. So, bringing it back to the original question: is AI the enemy of DEI? Yes and no. In the context of DEI, AI can be the enemy if it perpetuates existing biases and systemic inequalities, such as discrimination against people of color, women and other marginalized groups, but ultimately humans are responsible for the upkeep of the open source tools we put out into the world. I've traveled to countries like Taiwan, Saudi Arabia and Japan recently, which exposed me firsthand to the world of AI and robotics, further fueling my curiosity, and also my anxiety, around these rising advances and AI's impact on BIPOC communities.
In Saudi Arabia they regard data as the new oil; in Japan they're utilizing AI processes to now run whole town governments; and in Taiwan, as you already know, semiconductors are a huge investment and now something the US is competing with. So regardless of whether we want to be or not, we all need to reflect: if we're going to be the leaders of something, let's do it the right way, and let's make sure that we're steering it, with the open source community helping lead us in the ethical direction. So now I have a poem at the end, but I'm curious if you want to hear it. Yes, I wrote it. It was not ChatGPT, thank you for reminding me to say that. I just ask, if anyone is going to record this, please tag me. I just wanted to have a representation; I think people coming from a technical background typically hear people talking, but I also like to express things in another creative way that maybe people from an arts or English background can also relate with. So, alright, here we go.

Dear open source leaders and technology pros,
heed these words as the future of AI grows.
For all those, AI's potential is great,
its impact on humanity we must contemplate.
Is AI the enemy of DEI?
Spoiler alert: AI alone isn't the enemy.
The real culprit is humanity,
and especially those in the white ivory tower
coding dangerous algorithms without input from people who look like me.
The same tool that can enhance productivity
can also be hacked as a weapon of destruction, it seems.
As AI advancements are peaking,
our global governance is still under construction.
Hold up, you mean the same tool that can help us automate
can auto-sentence an innocent person of color to jail,
and bad actors can abuse AI to blackmail the innocent at scale?
Unacceptable. We need guardrails to protect all people without fail.
Don't get me wrong though, you see,
with the tool by itself the opportunities are endless,
but the values and ethics are sold separately.
Yes, AI can detect rare diseases,
aid the blind with sight,
but the long term impact on the underprivileged remains a mystery?
Yet history is repeating itself.
From the Renaissance to the Industrial Revolution,
we've seen this story before.
AI's impact on the world is undeniable,
every industry will be shaken to its core.
But as we learn to channel technology's disruptive power,
we must ensure the fruit of our DEI labor won't sour.
For now isn't the time to cower
away from doing the work we ought to.
We cannot allow this dystopian society run by AI to come true.
AI holds up a mirror to society, magnifies our weakness,
showing us the biases we have and the problems we must address.
Mirror, mirror on the wall,
will AI amplify biases old,
or will it break down barriers so bold?
Will it perpetuate inequality,
or create new paths for community?
The answer is: it's up to all of us,
for AI's only as just as the data it's built upon to trust.
Let today only be the beginning of the responsible AI conversation,
for ethical AI is more than just a buzzword, but a lifelong obligation.
I'm here to remind us: as we harness AI's great force,
let us see the development of open source AI on the right course.
For AI alone is not the enemy,
but if we play our cards right it can be the remedy
to build tech that benefits all humanity,
with transparency, fairness and equity.
Let's teach the next generation to approach tech with consideration,
but on our part that requires deep contemplation.
As a community, let's create the blueprint for AI governance as a future template,
one that embraces ethical AI not as a nice-to-have but as a non-negotiable mandate.
In conclusion, yes, the positive impact of AI can be immense,
but we must consider the long term consequence,
where it can either build or destroy,
depending on what we build, when, and how we choose to deploy.
Let's build wisely. Thank you.

Questions? Do you guys normally get poems after these, huh? Would you say... go for it. Did y'all like the poem? Okay, it's a little quiet here, I'll tell you. I swear that was not ChatGPT. I like poetry, yeah. No
one has to answer for the alert, but I do think the only way we're going to know is by having at least a template, a baseline measure where we can start from, and then we can grow to adjust as we go. I don't think we're ever going to get it right, but if we wait too long and have no policies, we're just going to have a Wild West, and continuously the people that get harmed are often going to be those who are already marginalized. So how could we not exacerbate that? By looking at history, making sure that we're learning from it, and learning from other countries that have implemented this further than us. Then we can start with something, at least use it as a model going forward, and adjust as we go. That's the only thing I can think of, but I'm not an AI ethicist or a PhD on this topic, just an enthusiast. Great question. I saw your hand up, yes... Well, as you explained, it's necessary. I think we all know that it's necessary to have some sort of measures against jailbreaks, because people creating recipes for bombs and phishing attacks and stuff is a no-brainer. But I get your point about how we balance the ethics, and I posed that question in that slide; I have a slide up about some of the jailbreak prompts that people used to be able to do, and what GPT-4's response would be now. Again, there's no easy answer to how we can strike the balance; it's just something that we have to do. But I think if we were to do it, we'd have to definitely beta test it, and I know that red teaming is something companies have internally so they can experiment before they release these things to the public. I think that would be a great case study to learn from internally: pretend to be an exploiter of the technology, how would you use it, how can I guess and test and sample it until I finally get to a happy medium, a recipe for having the technology
that can still be used to do things, while not hindering the performance so badly that the technology becomes just useless at that point. So I think you start internal and then bring that external. And I see you laughing there; I thought it was funny as well, but also scary that people actually do this. I don't know if y'all heard about WormGPT and all that stuff; it came about because the models are getting safer, and I was like, come on, guys. And that's why I called it out, because I was just realizing, hmm, this is no longer being useful anymore. But I think we're learning; we're definitely guinea pigs right now, this is a large experiment, and I think things will get better. But before they get so good that they're scarily good and can be exploited, we have to make sure we have those guardrails in place to not let it run wild. Any other questions? Did y'all enjoy the talk? Did y'all have any feedback? Mm-hmm... When you say senior, do you mean senior as an age group or senior in a company? Oh, okay, let me think about grandparents. Yeah, I've definitely been like, come on, abuelo, let's go, you need to see your life. No, I think it's one thing if they want to do it; that helps in terms of making them receptive to it. I know, for example, even with my mom, I tried to expose her to it: no, I don't want any of my information on the internet. You either get the super paranoid school of thought, or... then I showed my older aunt and she said, yes, teach me everything about it, I want to put everything in there, and I'm like, okay. I noticed that, although having more skepticism around what you put on the internet is probably for the best, because a lot of people who are targeted are often those who are older and lonelier and maybe have a lot of free time on their hands and want someone to interact with. Which I think, you know, having AI chatbot features for loneliness are like the
psychological benefits of having something like therapy be more accessible for them, which is good. But in terms of forcing it on them, I don't think it's necessary. I think it's more about thinking about what your goal and objective is. If you want to introduce them and show them, like, oh, this is the cool stuff out there, then show it through action: hey, here's what I built using ChatGPT. Naturally, that will stir an interest, and I think that's how we should approach it. Even with my younger siblings, I'm like, you guys need to know about this, and they're like, we already do, we're using it in school. And I'm like, no, no; they're good students, but they also hear about it, and immediately the bans start. But it's good to just know and expose everyone around you to what's happening, so that we don't get left out of the conversation. I think the biggest thing is that we cannot have this mentality of fear, letting it stop us and disengage us from actually exploring. The best thing you can do right now is explore, know what's out there. We can't fear and fixate on being replaced: oh my god, my job's gonna be gone. If you're a part of it, if you're using it, you know how to 10x your workflow with it and how to do sophisticated prompts around it. You either kind of go with the tide or get rolled; that's how I'm seeing it right now. So I think that's the best thing you can do: introduce it. Sorry about the long-windedness, but hopefully that answered your question. Go for it. And thank you for coming, guys, by the way, enjoy your day... Oh, Italy! You're the country that banned OpenAI. Boss moves, let's go. I was like, okay, I see you. Okay, so to summarize that: how could you remove data from the model? Yeah, I wish I knew too, because there's some stuff in there we all would probably want to take back. In all seriousness, that's a really good question. It's not one that I feel positioned to answer, but if I
were to, my best guess would be that this is part of the reason why policy is so important. If you force companies to give up the data and give us our rights to opt in in the first place, then you avoid retroactively trying to get your data back, when it's almost kind of too late, so to speak, too hard to go back and track and label what was yours. And imagine all the creators and all the lawsuits that are going on right now, even for photographers and creators that now have to fight; it's a mess right now. So it's a very interesting topic, especially from a legal standpoint, looking at it from the creators' side. I think it's very hard to find the seed of your data, your stuff, and I just think having better privacy rights, which I alluded to earlier, is going to help with that, and doing it beforehand rather than after the fact is the best way to approach it. In terms of Italy's model, I'm not as familiar with that, and I think Europe is definitely a lot more progressive, even with GDPR and everything; just how they're moving is definitely different than the US. The US is like, figure it out, your loss. But yeah, that would be my best guess at that. Last closing thoughts, feedback, anything? And thanks for coming. Did you want to say something? Okay, sure, sure. Well, thank you, everyone, I appreciate it.