Thank you, everyone. So yeah, my name is Bablu Singh and I work as a data scientist with CEDA, Ireland's research centre for Applied AI, and today I have come here to talk about a topic that is really close to my heart: ethics in the generative AI space. I think that we as developers sit behind computer screens trying to build products that can have a massive impact on society, and that can sometimes lead us to focus so much on improving the accuracy of our algorithms, or on optimizing them for better results, that we tend to overlook the ethical side of the technology. Speaking of technology, I think generative AI is the next big thing. It has already shaken the world, because it is able to do things we were not expecting to happen this soon, and I believe it is an exciting time to be alive, because we are witnessing a revolution in the history of technology.

So let me play a small clip for you to give you an idea of what generative AI is actually about. "With code as its canvas and data as its hue, generative AI brings forth something completely new. Unseen patterns emerge like morning dew, and an ethical perspective we're here to pursue." Amazing, right? I created this video entirely using generative AI, and I did not write even a single line of code to create it. So how did I do it? First, I opened ChatGPT and said: hey, I'm going to present at EuroPython, my session is titled like this, can you generate an intro for me, and make sure it's poetic and short? It came up with this text. I then converted the text into speech using a tool called ElevenLabs, where I selected an artificially generated voice with an Indian accent. I then created my AI avatar using a tool called Lensa AI: it asked me for some selfies and then created this avatar. And finally, I combined all of this using D-ID, an AI-powered video platform. (A minimal sketch of the first part of this pipeline appears after this passage.) So this is what generative AI is all about, and today we are going to look at the ways in which it can impact our society.

So let's start with businesses. I think there are two worlds, one outside our screens and one inside our screens, and we spend a lot of time on the platforms from which we consume content. That content, I think, will soon be powered by generative AI. To have an identity in the virtual world you need a social media account, and generative AI can not only create images, audio and video, it can also generate code. A lot of my friends are in software engineering, and they are already using ChatGPT to debug their code, to ask for feedback on a piece of code, and even to automate certain repetitive tasks. And I think you will all agree with me when I say that every online business needs an excellent customer support service. We already have chatbots that can handle a great number of queries on their own, but if they are powered by generative AI, I think response times are going to come down drastically and the resolution rate is going to go up. And in marketing, I think it is very important to send the right message to the right person at the right time, and with the help of generative AI it will be possible to create highly targeted, personalized marketing campaigns.
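As a rough illustration of the demo pipeline described above, here is a minimal sketch of the first two steps: asking ChatGPT for the poetic intro and turning it into speech with ElevenLabs. The avatar (Lensa AI) and the final talking-head video (D-ID) were produced through those tools' own apps, so they are not scripted here. The sketch assumes the `openai` Python SDK and the ElevenLabs text-to-speech REST endpoint; the API keys and the voice ID are placeholders.

```python
# Sketch of steps 1-2 of the demo: generate intro text, then synthesize speech.
# Placeholders: OPENAI_API_KEY in the environment, "your-elevenlabs-key", VOICE_ID.
import requests
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
chat = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": ("I'm presenting at EuroPython on ethics in generative AI. "
                    "Write a short, poetic intro for my talk."),
    }],
)
intro_text = chat.choices[0].message.content

VOICE_ID = "your-voice-id"  # placeholder: an Indian-accented synthetic voice
resp = requests.post(
    f"https://api.elevenlabs.io/v1/text-to-speech/{VOICE_ID}",
    headers={"xi-api-key": "your-elevenlabs-key"},
    json={"text": intro_text},
    timeout=60,
)
with open("intro.mp3", "wb") as f:
    f.write(resp.content)  # audio to feed into the avatar video tool (D-ID)
```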
And finally, in education, I believe generative AI is going to be a game changer, because it can act as a personalized tutor, and the best part is that students can ask questions round the clock, anytime, and no judgement will be passed no matter how silly the question is. So these are some of the ways, but I think there are many more. And according to a report by McKinsey, generative AI has the potential to add up to 4.4 trillion dollars of value in the coming years. That's why I say it's the next big thing.

But with every technology there is some risk associated. In this case, I have split the total risk into two subcategories: known risks and unknown risks. Under known risks, we have things we already know can go wrong with the current state of generative AI. In the unknown risk category, we have things we have still not figured out, because the technology is still evolving and there are still so many open questions. So let's first look at the known risks, which I have further divided into five categories, and we will look at each of them individually.

The first one is misinformation and deepfakes. These models are predictive in nature: large language models are trained to predict the next most likely word in a sentence, and the information they produce might not always be true, although it might look true. (A short sketch after this passage shows what next-word prediction looks like in code.) For example, there was an incident in Manhattan where a district court judge fined two lawyers $5,000 for submitting fake cases in a legal court filing. The lawyers said they made a good-faith mistake in failing to believe that a piece of technology could be making up cases. So this is what can happen. And let's not forget, it is the same ChatGPT that passed a law school exam. That is the irony.

Now let's look at deepfakes. What are deepfakes? Images, videos and audio that look real but are actually fake. Sometimes they are created for fun, but other times the intentions behind them are malicious. A recent scam came out where scammers used artificially generated voices, pretending to be family and friends in distress and asking for money, and a lot of people lost thousands of dollars. A similar scam circulated on Facebook: a deepfake of Martin Lewis, a UK-based financial advisor, in which the video asked people to invest in an opportunity supposedly backed by Elon Musk. In reality no such opportunity existed, and people again ended up losing thousands of dollars.

Moving on, the next risk is security and privacy. The recent developments in generative AI have created a rat race where people build products and just deploy them out there without fully considering the security concerns, and it is for you to decide how safely you want to use these platforms. There was an incident where Samsung employees accidentally leaked confidential information to ChatGPT. How did this happen? An employee was asking for feedback on a piece of code and pasted the confidential code into ChatGPT. Now, as per OpenAI's policies, the information sent to ChatGPT can be retained for training purposes, and unless you opt out, the data remains with OpenAI.
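To make the "predict the next word" point above concrete, here is a minimal sketch using the Hugging Face transformers library and the small, openly available GPT-2 model (not ChatGPT itself, which is far larger but works on the same principle). The prompt is just an example.

```python
# Minimal sketch of next-word prediction with a small open model (GPT-2).
# Given the text so far, the model assigns a probability to every possible
# next token; generation simply keeps sampling likely continuations,
# whether or not they happen to be true.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The court ruled that the defendant was"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits          # shape: (batch, sequence, vocabulary)

next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r:>15}  p={prob.item():.3f}")
```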
So it is very important not to use any confidential information on such platforms. Further, these models pose a risk to data privacy. They are trained on a large amount of data available on the internet, and many lawyers, authors and artists are now suing OpenAI because they feel their copyrights have been violated and that OpenAI has been stealing their content.

Moving on to the next risk, which is bias and stereotypes. These models are trained on data that is largely scraped from the internet. Now, that data might be racist, sexist or anti-feminist, and it might also carry other kinds of stereotypes. The old saying in the machine learning world, garbage in, garbage out, still holds true. A study was conducted by Bloomberg on a tool called Stable Diffusion, which generates images from text inputs. Some 5,100 images were generated across 14 different job titles, and it was found that high-paying jobs were linked to lighter skin tones. High-paying jobs such as lawyer, doctor, architect and CEO skewed towards lighter skin tones, while lower-paying jobs such as janitor, dishwasher and fast-food worker were dominated by darker skin tones. And not only this, there was gender bias too: images for job titles like teacher, social worker and housekeeper were mostly dominated by women.

Moving on to our next risk, which is the environmental impact. Large language models such as ChatGPT are huge: they have billions of parameters, and their ability to generate text comes from that size. Bigger size means bigger computation, and that comes at a cost to the environment. According to one piece of research, the amount of carbon dioxide emitted to train GPT-3 is equivalent to the lifetime carbon dioxide emissions of five cars, and that is only one model. And it is not only the carbon footprint of these models that is high; so is the water footprint. ChatGPT requires around 500 ml of water to run a conversation of 20 to 50 messages, and that is for a single user. Imagine billions of users using it at the same time; the number scales up very quickly. So not only the training process but even the inference is costly. (A back-of-envelope calculation after this passage shows how fast it adds up.)

Next, let's move on to explainability. Here we can see that we give some input, it goes into a black box, and we get an output. For wider adoption, generative AI needs to be explainable, but the current state of the art is a black box. Now, we understand the architecture we are using, and we can reproduce the same output by crunching the numbers, but these models show emergent capabilities, and no one knows at what size which capability might emerge. We don't know why a specific image comes out when we put in a specific piece of text. So these models are still a black box, and generative AI needs to be explainable for adoption in sectors like healthcare, finance and so on.

Moving on to our unknown risk category. The unknown risks raise important questions about the effects of generative AI on society. As this technology becomes more and more sophisticated, it is crucial for us to understand what impact it can have and what consequences we might face in the future. We don't know the long-term effects of using generative AI, we don't know how humans will coexist with another intelligence, and we don't know what that society will look like. There is also a fear that some jobs might be replaced by generative AI.
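To make the water figure concrete, here is a back-of-envelope calculation. The 500 ml per 20-50 message conversation comes from the research cited in the talk; the number of daily users and conversations per user are purely illustrative assumptions.

```python
# Back-of-envelope estimate of inference water use, not a measured figure.
LITRES_PER_CONVERSATION = 0.5      # ~500 ml for one 20-50 message chat (cited in the talk)
daily_active_users = 100_000_000   # assumption, for illustration only
conversations_per_user = 1         # assumption, for illustration only

daily_litres = daily_active_users * conversations_per_user * LITRES_PER_CONVERSATION
print(f"~{daily_litres / 1e6:.0f} million litres of water per day")
# -> ~50 million litres per day under these assumptions, before any training costs
```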
And I think to an extent it is true, but we have always seen technology creating new jobs as well. So I believe it will be a different landscape: a new employment landscape with different types of jobs, and it will be interesting to see how we transform our skills for that new landscape.

Who is responsible when something goes wrong? Determining responsibility for generative AI is a complex issue. Multiple stakeholders are involved, and each plays a crucial role: developers, organizations and policymakers. Developers hold the responsibility for ensuring the ethical development and deployment of these systems, and they need to make sure the systems safeguard people from risks and biases. Then it is the responsibility of organizations to establish guidelines and regulations governing the use of generative AI. And finally, policymakers play a crucial role in creating comprehensive frameworks that address the challenges posed by generative AI.

There is one more interesting question that is becoming really popular these days: will we become emotionally dependent on AI? We don't know. We are having conversations with AI assistant tools as if they were our friends or our colleagues, but emotional dependence is an area of concern. It might help us, it might make us feel like we have a companion, but we don't know whether it has negative effects. There needs to be a balance between human-to-human interaction and AI assistance, and we need to strike the right relationship with this technology.

And this one is my favorite: are we going to lose our skills? Writing, painting, creating things, this is what makes us human, and if AI is going to do all of this, then what will we do? I feel, however, that it is unlikely AI would completely eliminate these abilities. In such a scenario, it will be crucial for us to bring forward the abilities that set us apart from AI. AI cannot bring personal experiences into an artwork, and it cannot replicate a certain creative style, so it is important to foster our creativity and lean into the side that makes us human. And there are many more unknown questions that we still don't know how we will address.

So, I have delivered a lot of bad news today; now it is time for some good news. The global landscape surrounding AI regulation is seeing developments in various regions, with different nations adopting their own approaches to address the challenges presented by this technology. In the United States, the NIST AI Risk Management Framework has been released to provide guidelines and encourage self-regulation within industry. This framework aims to ensure that individuals' privacy is respected while still encouraging the development of artificial intelligence. Moreover, states like California and New York are coming up with their own laws, so the state laws plus the national framework together should promote responsible development of the technology and protect individuals' rights and interests. And the EU AI Act has emerged as the frontrunner in this field; I think it is likely to become a global standard.
So the Act not only spells out the permissible and banned use cases of AI, it also mandates that organizations conduct risk assessments before deploying new AI systems. With this Act, the EU aims to strike a balance between fostering innovation and safeguarding the fundamental rights of individuals. China has also made significant investments in AI development and has recognized its potential impact on society. Its rules primarily target organizations: the country is trying to regulate organizations producing commercial products based on generative AI, rather than generative AI itself. So you are free to use it for research purposes, but when you use it for commercial purposes you need to be more careful, which is an interesting approach. I think it will be very important for China to bring in laws that safeguard the country, and moreover, these laws are implemented in a way that the country's socialist values are not undermined.

So that's all from my side. If you want to know about the work we do at CEDA, please check this website, and if you want to connect with me, you can scan this QR code; I talk about AI on Instagram. Thank you.

Thank you very much. And as always, if you have any questions, please come to one of the microphones.

If I may, first of all, thanks for the talk, it was great. I'm just wondering about the research in this area, because someone at yesterday's keynote mentioned Max Tegmark and his book Life 3.0, and one of the things he mentions in the book is that funding for AI safety is not great. So I'm just wondering, what's your opinion on this? Has the situation improved? Is there research? Is there any funding on this topic?

Yes. What I feel is that there is currently a centralization of power in the sector: the big tech companies have the resources, the data and the money to run these big GPU farms and build these technologies. But on the other hand, open-source efforts are also being funded, and there are some great initiatives happening, for example by Hugging Face with BLOOM, and there is another tool whose name I can't recall right now. But yeah, a lot of things are happening, and I think it is important to have awareness first. If we have awareness and more people involved, then I think more people will come and work on the security and privacy issues, and the other issues as well.

Okay, thank you. One question regarding the environmental impact you were presenting: where did you get the data on the electricity consumption and water consumption? What is that based on?

Yes, I have included the sources in the slides; this was research conducted by researchers at some premier institutes. You can check them out, and I can share the slides with you after the session.

Concerning mainly image-generating AI, but really all AI: it references a lot of its data from the internet as a source, such as social media for image-generating AI. Once it picks up in popularity, will this AI start referencing other AI-generated images and reduce its own accuracy by referencing inaccurate data, do you think?

Sorry, could you please elaborate on that?

So say we ask an AI to generate an apple and it generates a not very accurate apple, but then...

This Apple or the other apple?
Well, it searches the internet for what an apple looks like, but then it pulls a lot of data from AI which has generated not-perfect apples. Do you think, as we fill the internet with AI-generated images, the AI will become less able to generate accurate images?

Yeah, I think it might create certain problems, because when you create pictures of humans, you see distorted figures, like four fingers on a hand and misshapen arms and other things it can never get quite right with the arms and the limbs. So yes, this problem exists, and the saying garbage in, garbage out remains true: if you feed in these kinds of images, it is going to generate these kinds of images. (A toy simulation of this feedback loop is sketched after the transcript.)

If we fill the internet with more garbage from the AI, though, is there anything checking that we're not feeding this back into our data?

There is this term, "AI polluted": images generated by AI are being called AI polluted, and there are certain tools for this. I know that Google is working on a tool, and I think it is already out there, to check whether a video, or specifically an image, is AI-created or a real image. So yes, people are working on this, and I think there are tools out there to keep an eye on it. Thank you.

Hi, thank you. When you were talking about unknown risks, you mentioned that we'll have to see how we will deal with another intelligence. Do you consider generative AI intelligent, and if so, what are the ethical implications of that?

Well, I personally don't think it is intelligent enough, because we understand how it can impact our society and where it can go wrong, and we can make it produce certain outputs which are not intelligent at all. But then, I also presented examples where people who are not technically close to this field might feel that what it says is very true. Take the example of the lawyers: they failed to see it, and they are lawyers, so they should know which cases are real and which are not. So it is intelligent enough, but not that intelligent yet. Thank you.

Thank you. In case there are no other questions, let's thank the speaker again.
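As mentioned in the answer above, here is a toy, purely illustrative simulation of what can happen when a model is repeatedly retrained on its own outputs. The "model" here is just a Gaussian fit; the sample sizes and number of generations are arbitrary assumptions, so this sketches the mechanism only, not a claim about any real system.

```python
# Toy sketch of a model retrained on its own generated data, generation after
# generation. Sample sizes and generation counts are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(42)
real_data = rng.normal(loc=0.0, scale=1.0, size=10_000)  # stand-in for real images/text

mu, sigma = real_data.mean(), real_data.std()
print(f"fit on real data:     mean={mu:+.3f}, std={sigma:.3f}")

for generation in range(1, 11):
    synthetic = rng.normal(mu, sigma, size=50)        # the model's own outputs
    mu, sigma = synthetic.mean(), synthetic.std()     # refit only on those outputs
    print(f"fit after generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")

# Each generation sees only synthetic samples, so estimation noise accumulates
# and the fitted distribution drifts away from the real data, a simplified
# analogue of training on "AI-polluted" content.
```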