Thanks for staying with us. It's time now for our second hot topic, and this one is combating the spread of AI-altered images and deepfakes online. To do that, we have joining us today a digital product manager with Microsoft. Good morning and welcome to the program.

Good morning. Nice to have you here.

Yeah, same here.

Microsoft, that means everything is having a soft life now. Okay, welcome to the program. Now, I don't know whether to say it's worrisome or dangerous, but AI is something that is supposed to give us a new lease of life, to make our lives, like I said, soft in all aspects. But things are happening now that make us fear that maybe bringing in AI is a very wrong move that humanity is making. So first of all, let's have a preamble of what we really mean when we talk about AI, and some of its advantages, before we begin to talk about what we are afraid of.

Okay, thanks for having me once again. We are in the era of AI, and AI is not new; it has been with us for maybe 50 or 60 years. Artificial intelligence is about transferring human intelligence onto a machine. We've been trying to build machines that can at least do what humans can do, and even do it better, because let's face it, there are some things that are better done by a machine: things that are repetitive, for example. Humans love innovation; at some point you want to do something new, and we also need to rest, so we have only a limited amount of time for whatever we want to do. But we can put a machine there and it keeps running. We struggled to have a breakthrough until recently, because now we have big computing: you have cloud, you have internet, and you have mobile.
And recently, we have had advances in large language models that gave us ChatGPT, OpenAI, and all this generative AI. So now AI can do so much more. We've had AI in the form of robots in manufacturing, but now we're having AI at our fingertips; we're democratizing AI. And that is where the concern you raised in the introduction, the fear, is coming from. Now that AI is everywhere, and we are scaling its power, should we be afraid? Should we step back? How do we ensure that we are building responsible AI, and making sure that it's ethical?

We are building responsive AI, but are we also building responsible AI? Let me use a robot as an example. In a Muslim country recently, and I'm emphasizing that it was a Muslim country, an AI robot was built, and at the public presentation of this robot it was busy grabbing a woman's backside. I'm sure in a Muslim country they would not build it to do that. Which raises the question: is this really artificial intelligence, or do these systems have a mind of their own now? People have been expressing that fear, that AI might take over the world if it begins to have a mind of its own and does things nobody asked of it.

So there is this conversation around deterministic versus probabilistic systems. Where should we draw the line? Should we just allow AI to run amok and generate things randomly, which is probabilistic? Or should we have something deterministic, where we can predict the outcome? For the example you raised: AI reflects the bias of its creators. You noticed recently that Google had to pull back their image-generating AI because, in trying to be diverse, it was generating images of Black people even where that was inaccurate.
The kind of data they trained that AI model on generated a lot of backlash in the community, and they had to step it down. So in that example from the Muslim country as well, it may not be the AI itself; it could be the bias of the AI's designers. Nobody would believe that in a Muslim country they would build it to do that, because they are reserved about a lot of things, and yet it happened at the unveiling.

And that's the challenge we are also having in Africa, because AI needs quality data to be trained on to do the things we want it to do. That's why we need to generate more data, so that the AI we are building will be inclusive and we can eliminate that bias from AI. For example, if you are using AI in a justice system and you are a Black person, the AI may already be biased against you: this one should go to jail; you are white, so we should give you a soft landing. It depends on the data we train AI on. And of course, we are now getting to a point where we are generating so much AI output that it is being fed back into AI, and that brings us to the question of whether AI now has a mind of its own, because AI is now using its own data to train itself, not even what humans are giving it anymore.

Yes.

Okay, so let's talk about the cons. I know AI definitely has its pros: I can just pick up my phone right now, put in a prompt, and if I want to find out information about anything, AI will give me that. But let's talk about the cons, the disadvantages, for instance AI getting a mind of its own, and the people using it, because when you look on the internet right now you're seeing deepfakes, you're seeing pictures where you think the person is real, but it's an altered image.
In fact, the other day I think I saw news of a new tool coming that's not open to the public yet, and you're seeing these realistic images. How do you even decipher which one is real and which one is fake? So let's talk about the cons a little bit, and what we can do to tell which is real and which is fake.

Yeah, you're right. Yesterday I was reading a news magazine, and I think in the UK they had to pull back an article because they discovered that the image they used was AI-altered. So it's a problem and it's a menace. I think what we need first is awareness, like this program, that as much as AI is a force for good, it's also a force for harm; it depends on whose hands it gets into. There are also developments where AI is checking AI: there are AI tools that can help confirm whether material has been digitally manipulated or altered.

Sometimes you also just need to take a step back and slow down. We are in a fast-paced, moving world. When you see information shared on WhatsApp, before you just forward it, take a step back and go back to our old-time journalism practice of confirming the facts: this thing is circulating, can I check a different source, can I check their websites? We are consuming a lot of information, and it's easy for threat actors and bad guys to use this AI to deceive us. So we need to slow down, ask ourselves questions, and check other sources.

And although deepfakes are getting sophisticated, sometimes you can see that the lips and the audio are out of sync, the picture could be blurry, or the lighting in the environment may not be that sharp.
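The verification idea mentioned above, confirming whether material has been altered, can be sketched in its simplest non-AI form as a content fingerprint: if a publisher shares a cryptographic hash of the original file, anyone holding a copy can detect that it has been modified. This is only an illustration of the principle, not one of the AI detection tools discussed in the conversation, and the file names are hypothetical.

```python
import hashlib
from pathlib import Path

def fingerprint(path: Path) -> str:
    """Return the SHA-256 hex digest of a file's bytes."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

# Hypothetical example: a publisher releases photo.jpg alongside its hash.
original = Path("photo.jpg")
original.write_bytes(b"\xff\xd8\xff\xe0 original image bytes")
published_hash = fingerprint(original)

# Changing even a single byte changes the fingerprint completely.
altered = Path("photo_altered.jpg")
altered.write_bytes(b"\xff\xd8\xff\xe0 original image bytez")

print(published_hash == fingerprint(original))  # untouched copy matches
print(published_hash == fingerprint(altered))   # altered copy does not
```

A real provenance system layers signing and metadata on top of this, but the core check, comparing a received file against a trusted fingerprint, is exactly this simple.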
Some of them, though, are really good. I saw one where a president was advising a Nigerian prophet on what to do to release some powers to make Nigeria great, and the lip-syncing was perfect; there was no blurry picture. In fact, if I didn't know that that kind of topic could never come out of that kind of president, I could have believed it. Now, the question is, these tools are very easily accessible to almost everybody. What about the tools that help you check? Are they also accessible to people?

They are being developed and becoming more accessible. The thing really is, as we always say, regulation plays catch-up with innovation. Innovation comes first; people see how things are being used before countermeasures are brought in. That's why it seems like the bad actors are still ahead. But tools are being built that you can use to verify, because everything you create in the digital world has metadata, that is, data about that thing. So it's possible, and probable, to build tools that can verify whether an image, an audio clip, or a video is a deepfake or has been altered, though of course it takes extra effort. And as these tools come, some will come with a fee and some may not deliver what they promise. So we are gradually catching up with building tools to police AI. It's AI policing AI.

I want to talk about social media for a little bit, because that's where you see a lot of this. You're seeing it on Instagram, Facebook, your WhatsApp; people are forwarding this stuff to you. Don't you think these social media platforms have a role to play? For instance, since we're talking about regulation, off the top of my head, maybe there should be a watermark or something, the way WhatsApp now tells you when a message is forwarded.
It never used to be like that initially, but it's there so you know the message may not be coming from the original source. So maybe a watermark like that, so you know this is a deepfake or this is AI-altered, something like that. Don't you think all of these apps should do that, since they are the ones in the technological space? When you're developing things like that, shouldn't you provide the checking tools for people? Because I'm not going to start looking for how to develop those tools myself.

I agree with you: they have the power, they have the data, they have the reach and the feasibility to do that. The question is whether it is in their interest, because sometimes their business model is about how many shares content gets, so virality is more profitable for them. But of course, we're beginning to see the big players developing responsible AI practices and being ethical in the way they deploy AI, and we're beginning to see governments coming together to set guardrails. Europe, for example, is very strong in regulation: what you can do, how you can use the data of their citizens, how you can train models, where you can store that data, privacy, and all that.
So as a government and as regulators, we need to come up with those asks of the platform owners: when you are using your technology to create these things, these are the minimum standards required; you need to warn your users, and they need to know that something is AI-generated. We also need to begin to use these tools ourselves so we understand how to evaluate content. This is what I mean: if you use ChatGPT a lot, then you'll be able to recognize a document that was generated by it and forwarded to you, because it will be so perfectly written, with no grammatical errors.

Some people are smart now, though. There are people who would look at that output and rewrite it, keeping the same points but changing the words.

Which is good, I think. That way, your own human mind, your intelligence, is working as well.

I have a concern now. You are saying tools are being developed, and that means it's a process, and I'm not very comfortable with that, because what happens to our justice system, for instance? That's where you need evidence: people bring audio, people bring video. So let me rephrase: are there tools already existing that, at least in extreme cases in the judiciary, in the courts of law, can be used to determine whether something is AI-generated, while we wait for the ones that will be accessible to everybody else?

I read a story just last week of somebody who was caught and claimed that the evidence produced was digitally generated: "that was not my confession." So it's already happening, and the question is, is it a lie or is it the truth? That's why we will continue to amend our laws. As we change our laws to accept digital evidence, we also need to recognize that digital evidence can be altered and manipulated. So we need tools to verify it: things like checking timestamps, checking the metadata of the file, and using AI tools to deconstruct that digital evidence and determine, for example, which software was used and when it was used. And sometimes you still need our time-tested paper trail: when I'm collecting that evidence, there should be a log in the premises that we signed, showing that we were there to record or obtain it. So we will need secondary evidence as well, not just the digital one.

Yeah, okay. Well, whether we like it or not, AI has come. I'm not saying it is coming; it has come. You already said it has been with us all this while; we're just developing it into other areas now. But in order for us to catch up as a nation, Nigeria, what do we need to do? Because if other nations get to a point where they are comfortable using it, it might affect us if we are not at that level. What do we need to put in place now to make sure that in the future with AI, we are not left behind?

I think first is awareness, which programs like this help to promote. Second is education: AI is high tech, so we need education to play in AI. Third, we need to begin to make data available for AI, for example in our local languages. We need to start writing things down and documenting things. Imagine having a ChatGPT trained in your local language: you could prompt it in your local language and say, tell me a joke in my language, or give me ten proverbs that even your grandfather cannot remember, and AI would be able to teach you those things. And we need to begin to see how we can use AI to solve our local problems, because we are the only ones who can solve them. If we rely on visitors to come and solve them for us, they will only build things that are biased against us, not because they are malicious, but because they are not aware of our problems and don't understand them.

But what infrastructure do we need to be able to do this?

So, beyond that, there are different links in the AI value chain. You have the deep technical people who will be writing Python, but first, AI is actually about mathematics and statistics. So we need good mathematicians, we need good statisticians, and we need good journalists and linguists, people who can write well and train people well. That is the foundation. Once we do that, we need to begin to digitally transform our environment, because AI runs on data, so we need to move a whole lot into data so that we have things to train AI on. And the next thing is that we need strong regulations, and those regulations will help propel investment. For example, a regulation on where data used in Nigeria can be stored would make the big players build infrastructure in Nigeria, and when they build infrastructure in Nigeria, 80 percent of the people managing that infrastructure will be Nigerians, and while managing it they will transfer their knowledge to Nigerians, and we can begin to scale it up.

Is there anything the government can do in regard to this infrastructure? Because most of the time when we think of government policies, we think of regulation. Is there any way they can actually create this infrastructure for people? Most times when you hear of people in tech, they decided to go and study for themselves. Are there tech hubs by the government? Are there things the government can do to ensure that we're learning, we're growing, and we can start to compete with other countries in the world?

Sure. I've been following the Minister of Digital Economy, and he has the 3MTT program that the government is rolling out. He also came from the private sector; he built the first tech hub in the country, CC Hub, and I used to go there. We also have government hubs, but the problem with government sometimes is there's no maintenance.
If you visit some of those hubs now, there's no maintenance.

So you recommend that they could build the hubs and then run them through public-private partnerships.

Yes. And the next thing I really think government should focus on is our educational institutions, because that's where the impact can be felt. Let's update our curriculum; let's ensure that our lecturers and instructors understand modern teaching methods and can begin to impart their knowledge to the coming generation. Let students' projects now be AI-driven, not copy-and-paste jobs handed to somebody at the business centre to bind and dump. Begin to give students projects that are relevant to the problems we are facing as a country. If we do that, we'll have a pipeline of people coming out of our educational institutions who are ready for today's market.

I agree with you 100 percent, because you see in China, little kids are already coding. Wow, they're already future-forward.

I agree totally. Our children speak in tongues, so don't talk down on that.

This is where we have to wrap up the show. We want to say thank you for coming; it was really nice having a conversation with you.

Thank you too.

We've been talking about combating the spread of AI-altered images and deepfakes online, and we've been talking to Bola Geoladi Pupo. It's been nice having breakfast with you at home with YAMGOL, Bola Geoladi, and everyone. We want to say thank you, and we hope that we'll see you again tomorrow. My name is Romer Paulsen, and I am YAMGOL Aggadji. Let's do it again tomorrow. Bye for now.