 How? Here's where you can interact with us on our social media: Facebook, Instagram and Twitter, and on YouTube, where we are verified with a blue tick. I tell you, you can't catch us. You can also stream us live on, yes, Dailymotion. And we are available on DStv, GOtv, SIGNET, every TV distribution platform. On this interesting segment today, I'm so excited. If you have no idea, here is what we're going to talk about: the impact of generative AI. What are some of the good things that come with it? You've seen so many things, especially beautiful things, but also some things that are questionable. So today we have two powerful guests in the studio who are going to paint for us a panoramic view of what exactly is happening in that space. And before I introduce them, you can find me personally at www.bryansoko101.com. That's my handle. And now it's time to introduce our guests. Seated immediately on my right is an amazing, gorgeous lady, Nema Mujesia. She'll tell me if I've mispronounced her name. She's a communication specialist, and she's currently working at an organization called KICTANet. I don't want to mispronounce it, but she'll tell me. And sitting next to her is Louis Mainga, who is a communications coordinator at WITNESS. Good morning, ladies and gentlemen. Good morning. All right, how are you feeling? Good. I love that you guys are giving chorus answers. How are you feeling? Good. It's like, next question. Feel free to interact and share more. So I'll start off with you, Nema. First of all, did I pronounce your name correctly? Yes, it's correct. 
You know, in this industry you can be sued for a lot of money for mispronouncing names. But just tell us a brief history about yourself and how you landed in this space. Okay, thank you so much, Brian. What I can say about my history is that I studied media science at Moi University's main campus in Eldoret. And during that time, I was really interested in what tech and the internet had to offer. So along the way, I got into the tech space. Now, I work at what is called KICTANet. KICTANet? Yes. KICTANet, the Kenya ICT Action Network, is a multi-stakeholder think tank for people and institutions who are interested or involved in the ICT space. We also act as a catalyst for reforms, working under four thematic areas: policy advocacy, capacity building, research and stakeholder management. Yes. And apart from that, we also have a space where we translate the ideas that are given to us by our listeners into meaningful proposals, so that we can know how to solve the issues taking place in the ICT sector as well. Right. Yeah. Interesting. Hi. Yes, Louie. Yeah. So nice to be here. My name is Louie Mainga, like you mentioned, and I found myself in this space of communications through the passion I nurtured for tapping into the potential of various forms of communication, looking at communication as an art and as a science to create impact in society, and of particular interest to me, human rights impact. So I currently work for a global human rights organization called WITNESS, where we empower people to use video and technology in the defense of human rights. And you'll agree with me, video is a very important tool of communication, and even more so a very critical source of human rights documentation and storytelling. Yeah. 
For example, if there were an incident of police brutality down the street here, chances are very high that somebody would be filming it on their smartphone, and you might even find that footage trending on social media in a very short while. Yeah. And given how access to smartphones has been democratized, the power to witness violations of human rights is literally in the hands of millions of people using their phone cameras. So at WITNESS we endeavor to pass down knowledge on how to film violations of human rights, how to preserve these videos, and how to share them in ways that lead to justice and accountability. All right. So that's a good brief history of your backgrounds. Now, let me get back to you, Nema. When it comes to AI, a lot of people confuse it with robotics, before we even get into the types of AI. Yes. Initially, me too. At some point we heard the story of Samantha, the robot in the form of a lady that can talk to you and meet your needs. So I used to think that was artificial intelligence, until I read and did my research and came to learn that artificial intelligence is in fact more human-centered than robotics. So for a person watching who probably doesn't understand it, do you mind painting a little picture of where the line is between a robot and artificial intelligence? Okay, thank you so much. That is quite interesting. When it comes to generative AI, what you are describing is one of the forms it takes. The way a system is able to interact in a way that seems human-like is actually called machine learning. Right. So there's machine learning and there's robotics. 
So when it comes to machine learning, it's a type of AI system that is able to interact and answer questions the way a human can. That brings me to what is called the Generative Pre-trained Transformer 3 (GPT-3), the model behind ChatGPT. Right. So that is what happens there. Robotics, on the other hand, is a system that is taught how to interact physically with the world. By this I mean things such as the drones that we have, and their applications. So these are just different systems within the forms of AI. Right. Yes. So there's machine learning and there's robotics. Right. Yes. So when it comes to generative AI and how you came to tap into that space, Louie, you can talk about it from a technical perspective. How did it come to exist? Is it an invention or just a development? Because in tech there's always something new and shocking. You might think that today we have drones; tomorrow we'll have, you know, aliens running something in the form of tech. So how did it narrow down to generative AI? I'd like to segue from what Nema was saying about artificial intelligence generally, as a technology that mimics the decision-making and problem-solving capabilities of the human mind. It's a very wide field with different subsets, and generative AI is one of them. We also have discriminative AI. Discriminative AI? Yes. What does that mean? That's interesting. Yeah, it's more of a computing approach that gives you a yes or no answer. Such a technology says: this is a lion; this is not a lion. It's yes and no when you look at it, typically. And then there is conversational AI, and many others. So you've heard of this tool called Siri, or probably you're using it. 
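The yes-or-no idea behind discriminative AI described above can be sketched in a few lines of Python. This is a toy illustration only: the animal features and the hand-written rule stand in for what a real trained model would learn from data.

```python
# Toy discriminative "classifier": answers "is this a lion?" with yes/no.
# The features (weight_kg, has_mane) and the rule are invented for illustration;
# a real model would learn its decision boundary from labeled examples.

def is_lion(weight_kg: float, has_mane: bool) -> bool:
    """A hand-written decision rule standing in for a trained model."""
    return weight_kg > 100 and has_mane

animals = [
    {"name": "lion", "weight_kg": 190, "has_mane": True},
    {"name": "rabbit", "weight_kg": 2, "has_mane": False},
]

for a in animals:
    verdict = "yes" if is_lion(a["weight_kg"], a["has_mane"]) else "no"
    print(f"{a['name']}: {verdict}")
```

The point is the shape of the output: a discriminative system labels an existing input, while a generative system produces new content.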
Yeah, I've used Siri a lot. Or even Alexa. Let's say, "Hey Siri, play some song." Yes, something like that? Yes. Just a sec. Yeah, just a sec. That's voice AI? Conversational AI, yeah. And Alexa as well, and Google Assistant, and even Google Maps. These are classical examples of conversational AI. And most recently, on what Nema said about generative AI, there's this tool that has blown up all over the place lately called ChatGPT, one of the most popular generative AI tools currently. But there are many. I saw a list of around ten, but ChatGPT is like number one or two. Yeah, probably. I think it's because it was a first mover in the market. Okay. That's one of the reasons why it's popular. And the reason it's called generative AI is because it generates new content based on the data it is fed, right? When you prompt it, it relies on text prompts to generate detailed responses, line by line and almost to perfection. Almost to perfection. It's not perfect, right? Right. When you look at it from that perspective, there's a lot of sophistication and potential to scale up this technology. Then there are the popular text-to-image tools: you type in a prompt and it generates an image. One of those popular tools is called DALL-E. Another is called Midjourney, and another is called Stable Diffusion. There are tens of them right now. I'm sure lately you've seen AI-generated images of prominent world leaders. Like the pope in the puffer jacket. Right. Are we able to get them? Okay, we'll get them as we go. Continue. Yes, the pope in the puffer jacket, and a former president of the U.S. running away from the police. 
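The idea that a generative model "generates new content based on the data it is fed" can be shown with the smallest possible generative model: a character-level Markov chain. Systems like ChatGPT are incomparably larger and work on tokens rather than characters, but the principle is the same, learn which symbol tends to follow which, then sample new text. The tiny corpus below is invented for the sketch.

```python
import random

# Minimal generative model: a character-level Markov chain "trained" on a
# tiny corpus. It learns, for each character, which characters follow it,
# then samples a new string that mimics (but is not copied from) the corpus.

corpus = "the pope in the puffer jacket "
model = {}
for a, b in zip(corpus, corpus[1:]):
    model.setdefault(a, []).append(b)

random.seed(0)  # fixed seed so the sketch is reproducible
out = "t"
for _ in range(20):
    out += random.choice(model.get(out[-1], [" "]))
print(out)  # newly generated text in the style of the corpus
```

Scaling this idea up, from characters to words to whole images, with neural networks learning the probabilities, is roughly the road from this toy to tools like ChatGPT and Stable Diffusion.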
And when you look at these images, they seem very, very realistic, but in a real sense they're not real images. Interestingly, you can ask a tool like DALL-E to generate a picture of a lion and a rabbit sitting next to each other, and it's able to generate what looks like a real photo. But I'd invite the viewers to ask themselves: in a real-life scenario, where have a lion and a rabbit ever sat together and generated that chemistry? It's not even possible. Yet it looks like a real image. But I'm stuck at that place where I'm trying to think of it from the perspective of Photoshop. Is that not an advancement of Photoshop? Because you could already edit a video so that in the background you are in Atlanta having fun, when the video was just shot here this morning. Is this not an advancement of Photoshopping, editing and airbrushing? Yes, obviously these emerging technologies are becoming more sophisticated. And you see, as a developer, if you develop a tool, the tool has flaws, and somebody else will pick up on that and improve it, or produce another version of it. So the basis of the model is basically the same, but developers and tech deployers scale it up to make it more sophisticated and add more features. And with AI, these models are trained to act within probabilities over a set of data, right? Existing data, largely from the internet. That's why it's easy to prompt it and get an almost accurate response: it has learned from data across the internet and gives you the results in real time. All right, let me get back to you, Nema, still on that note. 
When you compare Google Search and ChatGPT, since you brought it up. There's an interesting photo coming through in just a bit, you'll see it. Google and ChatGPT: how can you differentiate them? You can Google the president and it brings you everything about the president, but on a website or a link published by someone. Then ChatGPT comes in, and it's also gathering this information. Are they not the same thing? They look like the same thing, but they're different, because I have used Google and I have used ChatGPT as well. With ChatGPT, you give it commands and it responds in a human-like, text-based way; it's a machine learning system. Google, by comparison, just takes your query in. It doesn't give you that feeling of connection between you and your device, your IoT device, whether a smartphone or a computer. Also, when it comes to the content, ChatGPT has no ads. When you ask a question like how many presidents this country has had, it gives you direct answers. Google has ads, and when it comes to the information it gives, say you're searching for a recipe, it will first give you all those reasons why you should use the recipe, why you should do this and that; ChatGPT just gives you exactly what you want. On the other hand, ChatGPT is still a learning machine. Google has been there for a very long time, so it has wide data compared to ChatGPT. In fact, for now, to use ChatGPT you typically go through a browser like Google's. You can imagine, yes. 
So maybe in time it can be a standalone app, or like Google in itself, yes. In a sense it mimics what Google does, but the aim is to bring the answer closer to the source. So that is the basic difference between Google and ChatGPT. Right, interesting. Now, when it comes to user-friendliness for generative AI, before we get to how we can apply it and some of the interesting trends: how can we make it safer? Because nobody wants manipulated data, and of course you can now manipulate data, and data can be in the form of numbers or even graphics. How can we make it safer for consumption or for use? First of all, the system has to be rule-based, with pre-set rules for how it will respond, so that the information it gives is factual. With ChatGPT, actually, there is something I was watching this morning: if you want the right data sources, you can ask ChatGPT to cite its sources, to fact-check first. So we have to teach the system to be factual. That is one of the ways. Also, content generated by these systems should be watermarked. In the case where I have generated an AI picture, I watermark it as AI-generated, so that the audience can be informed about what they are getting. Yes, and we also have to develop guidelines for software development and dissemination. Are there organizations in Kenya, has the government already tapped into that space in the country? 
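The watermarking and disclosure point above can be sketched as simple provenance metadata attached to generated media. The field names here are invented for the example; real disclosure efforts include visible watermarks and standards such as C2PA content credentials.

```python
# Sketch of disclosure labelling: attach provenance metadata to generated
# media so audiences can be informed it is AI-generated. The dictionary
# structure and field names here are invented purely for illustration.

def label_ai_content(content: bytes, tool: str) -> dict:
    """Wrap generated media with a machine-readable AI-generated label."""
    return {
        "content": content,
        "provenance": {
            "ai_generated": True,
            "tool": tool,  # which generator produced the media
        },
    }

image = label_ai_content(b"<fake image bytes>", tool="example-image-model")
print("label:", image["provenance"])
```

A platform receiving such a labelled file could then surface an "AI-generated" badge to viewers, which is exactly the informed-audience goal described above.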
Yes, as of now we have the Office of the Data Protection Commissioner. I think you've heard about it. They are trying to come up with stipulated guidelines along our four thematic pillars: policy advocacy, where they try to create awareness first, and then capacity building, where they provide training as well. That's also what we do in our organization at KICTANet. It's a learning curve we have to take on, and we have to agree that technology is here to stay; it's not going anywhere. So apart from advocacy and capacity building, we also have to have stakeholder management, which means involving ourselves in things like the metaverse. Meta was here just the other week, holding a Pan-African summit at the Kempinski, and they told us that we need to come together as tech innovators. Even content creators. Yes, content creators need to come together so that we can speak the same language when it comes to technology. What are the guidelines? What do we consider misinformation, and what do we consider disinformation? What do we consider information that will affect a victim? Just like in your previous program, I heard you talk about the Millicent Omanga scenario. These are things that are happening online. So where do we draw the line as users of the internet? And not just Millicent Omanga. For a video that has been manipulated, let me use the word manipulated, or altered, how do you fact-check it? Because that's another form of AI now. Before we talk about applications, how do you fact-check a video like that? It's already out there, and if it's from a person of a certain personality or reputation in society, people have already viewed you in a certain way, whether it's true or not. 
And there's no way you can clarify and say, no, that's not me, because a lot of things are falling apart, and it's tech. Yes. So when it comes to fact-checking a video like that, there are many factors that come in, and there are many tools online that are used to fact-check videos. Google has a tool you can use to fact-check images, and in the Play Store as well there are AI tools that can be used to fact-check. But just before you fact-check, you first need to consider the victim. What can the victim do in order to say, this is not me? First, the victim has to accept the fact that the video is out there. What we do at KICTANet is that we have developed a digital inquiry kit, an online safety module that helps you understand how to stay safe online. So in a scenario where your video is out, you first need to start reporting. You report the video, and when you report, you encourage others to report it for you. You know how the internet works: when certain content is really liked or really talked about, it gets pushed to the top; the more people engage with it, share it, search for it, the more visible it becomes. Reporting works the same way: the more reports, the more easily the account is detected, and the perpetrator can be identified. After reporting the incident, the victim also has to figure out what to do next, and they can block that account as well. It is your right to block an account. So apart from just using the tools to prove it's fake, we also need to consider what the victim can do. Yes, in such a scenario. Interesting. And to add to Nema's point, there is this thing called media literacy, which is the ability to critically evaluate the information you are exposed to online. 
So it's always good to take a pause and ask yourself very basic questions, such as: which account is reporting or sharing this video? Right? And are other credible sources sharing the same information? To detect misinformation and disinformation, you can apply both tech and non-tech methods, but if you have that mindset already, it becomes easier to discern what is true and what is not. Awareness. Yes. There's also platform responsibility, right? This material is shared on these platforms. I think platforms have done well but could do better in terms of moderating such content. YouTube, for example, has a Trusted Flagger program, which gives users powers to flag content they find malicious. Twitter, Facebook and Instagram sometimes label content that is malicious. But they do suffer moderation capacity issues, for instance around language. You also realize somebody in Atlanta can post something and it stays up, but if you post the same thing in Kenya, you're suspended. Yes, yes. So there are those gaps, obviously, which could be improved. But I think media literacy and platform responsibility are important in curbing issues such as the sharing of non-consensual sexual images, which disproportionately affect women. All right, interesting. Now, how do we package it to a point where we are using it in our daily lives? I love the photos; even the Pope's photo is really interesting. It looks very photographic. Exactly, because you couldn't have imagined the Pope would one day look like a hip-hop rapper or something. Is there a way we can use it in our day-to-day lives: in office setups, education, TV, even the health sector and many other sectors? How can we apply generative AI? 
Yeah, there are a plethora of applications, good uses of these emerging technologies, in various sectors like you mentioned. In education, for example, AI can really help with the development and adaptation of curriculums, and with e-learning and virtual learning. And when you think about healthcare, consider the ability it confers on healthcare professionals to generate models. For example, if a surgeon wants to perform a surgery, they can first simulate that scenario using AI, as a team, before doing the actual surgery. When it comes to agriculture, there's something called precision agriculture, which uses AI to predict weather patterns and therefore inform, say, the kind of crops that would do best in that particular weather. Like mapping. Even conditions, yes. Right, soil testing. Soil testing, GPS, and all those things, yeah. So it's very cross-cutting, and I think Nema is the techie person here. Please come in and shed more light, because it's really interesting: we went from manipulating or generating photos, though you mentioned it's very important to watermark and disclose that something is an AI-generated graphic. Yeah, well, when it comes to applications, I'll second Louie and say that he was right about healthcare. They've actually developed a lot of drugs with it; it's now easier for the healthcare system. They are also developing 3D and 4D imaging, whereby when you're doing X-rays, you now see the hip bone exactly as it is, you see. Exactly. Yes, exactly as it is. So that is a trend called explainable AI. Right. The algorithm is transparent and explainable. 
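The precision-agriculture idea above, weather data informing crop choice, can be sketched as a tiny rule-based recommender. The thresholds and crop names are invented for illustration; real systems train models on satellite, soil and weather data rather than hand-written rules.

```python
# Toy "precision agriculture" sketch: recommend a crop from simple weather
# readings. Thresholds and crops are invented for the example; a deployed
# system would learn these relationships from historical yield data.

def recommend_crop(rainfall_mm: float, avg_temp_c: float) -> str:
    """Pick a crop suited to the season's expected rainfall and temperature."""
    if rainfall_mm > 800 and 15 <= avg_temp_c <= 25:
        return "maize"
    if rainfall_mm < 400:
        return "sorghum"  # drought-tolerant fallback for dry seasons
    return "beans"

print(recommend_crop(900, 20))  # wet, mild season
print(recommend_crop(300, 28))  # dry, hot season
```

Replace the hand-written rules with a model trained on past seasons and you have the core loop of the AI-driven advisory tools described above.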
It gives doctors a way to see and know what they're getting into, and also helps in decision-making when it comes to patients. Apart from that, we also have the e-commerce and finance systems, like what happened to Naivas recently. They are now able to detect where the fraud came from, you see. Yeah, they lost data. Yes. But I think they were able to recover it. They recovered the data? Yes. What if, now, let's stay there, what if this data landed in the hands of somebody who manipulated it and kept a copy? Because you're dealing with people's information, and that includes bank accounts, names, ID numbers, photos. Yes. Yes. That's dangerous. That is very dangerous. And maybe you have a Naivas shopping account, whatever. Yes. Now, I think the best way Naivas can address this situation is to tell the clients: your data is out there. Give them a warning, so they know and can decide what to do next. Should I close my bank account? Should I go to the Huduma Centre and report that my ID was stolen? Should I change my number? So creating awareness is really important; these organizations need to be really transparent if data has leaked. Right. That is the major step they can take. Right. Yes. And what could have possibly gone wrong? An infection, a bug, or somebody hacked into it? Being logged out of software is very common, especially on Instagram. Suddenly your account is not there, and you're trying to process who is this devil logging me out of my social media, only to learn later that somebody had your password or had mastered your PIN or something. What could have possibly gone wrong in that scenario? They have not yet shared the information with the public, so we do not know. I can only speculate. 
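The fraud-detection point above, systems noticing where fraud came from, often starts with flagging transactions that deviate sharply from a customer's usual behavior. A minimal sketch, using the classic deviation-from-the-mean rule on made-up transaction amounts (real fraud systems are far richer than this):

```python
import statistics

# Minimal fraud-flagging sketch: mark transactions far from a customer's
# usual spending. The 2-sigma threshold is chosen because the sample here
# is tiny; production systems use trained models over many features.

def flag_outliers(amounts: list[float], k: float = 2.0) -> list[float]:
    """Return amounts more than k standard deviations from the mean."""
    mu = statistics.mean(amounts)
    sigma = statistics.pstdev(amounts)
    return [a for a in amounts if sigma and abs(a - mu) > k * sigma]

history = [120, 90, 110, 100, 95, 105, 5000]  # one suspicious spike
print(flag_outliers(history))
```

The flagged transaction would then be held for review or trigger a verification prompt to the customer.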
Maybe they could hire you to help them out. Okay. What I can say is that in data security, the weakest link is usually human, isn't it? True. Among software developers, there are people who are hackers. We have white-hat hackers and black-hat hackers, and hacking is now a job. Hacking is a job. It pays. I can hack from Thailand. Right. Or I can hack from Kenya and pretend I'm in Thailand, thanks to the VPNs we have nowadays. Right. So yes, the VPNs are there to use. Right. We'll talk about VPNs in this Millicent Omanga case. Yeah. All right. So with these hackers around, the black-hat hackers, meaning? Black-hat hackers are the ones who do illegal activities; white-hat hackers are the ones who do it legally. So, yes. Like somebody with a certificate. Yes, and professional documents. Yes. An institution may say: we do not want our equipment to get lost. All right. So you hire a white-hat hacker who will come, install equipment like CCTV cameras, and hack the system while pretending to be a black-hat hacker, so that they can develop better systems for your organization. But that is the same person who could also betray you. Yes. But they do sign contracts as well. Oh, they sign contracts. Yes, they sign contracts, like NDAs, covering what they do. All right. Interesting. Yes. The hackers are there, and these systems have flaws. I would not lie: they have flaws, because they're developing every day. Right. And with software developers being as transparent as we tell them to be, even on YouTube, if I learn how a system can be hacked, I'll be able to use it. Maybe I'm a youth on YouTube who doesn't have many priorities or many things to do, and I learn one or two things about hacking. 
I go to the CBD, maybe outside a hotel in the CBD, and I break into the Wi-Fi system. Then anybody who is using that Wi-Fi, in case they do a bank transaction. Right. Or log into an app. Yes, or log into an app, or do critical things such as bank transactions and M-PESA transactions, anywhere you put in your PIN and password. If I am out there as a black-hat hacker who has trained myself on YouTube, you see, I can hack you and manipulate your data. That's why we even encourage people: when you are in a public space using public Wi-Fi, do not do any bank transactions, do not enter your PINs, and be especially careful with your email, your phone passwords and your IDs. Do not put such information into your phones or IoT devices on such networks. Right. Interesting. It reminds me of the story of the likes of Bahati, whose YouTube channel got suspended, but, you know, they have the money to hire people to undo some damage. And there's a friend of mine who is a content creator as well. His YouTube got hacked, and it took him like a month after hiring somebody to fix it. Now I pity the artists who don't have money to undo things like that, which would have saved you the pain of losing the hard work you put in for ten years. You can imagine: a white-hat hacker, or a black-hat hacker, just to undo that damage. No, it's the white-hat hacker. And the white-hat hacker is going to undo it. Yes, he's going to undo the damage. But it is also so sad that somebody from an artistic industry, where you're trying to make your music better, develops content for five years, or even more, and then within no time it's gone. And you have so many. 
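The public Wi-Fi advice above boils down to one habit: never send a PIN or password over a connection that is not encrypted, because plaintext HTTP is trivially readable by anyone on the same network. A minimal sketch of that habit as a check (the bank URL is a made-up example; HTTPS alone does not make public Wi-Fi fully safe, but it is the baseline):

```python
from urllib.parse import urlparse

# Habit-check sketch: refuse to submit credentials to any address that is
# not HTTPS. On a shared network, plaintext HTTP traffic can be sniffed.

def safe_to_submit_credentials(url: str) -> bool:
    """True only if the URL uses HTTPS (encrypted in transit)."""
    return urlparse(url).scheme == "https"

print(safe_to_submit_credentials("https://bank.example/login"))  # True
print(safe_to_submit_credentials("http://bank.example/login"))   # False
```

Browsers do a version of this for you with the padlock indicator; the point is to look for it before typing a PIN on shared Wi-Fi.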
You even have the YouTube plaque, the Silver Play Button, you're now earning income, and then you find your content has been shut down. Yes. And they're telling you they can't recover it soon. Yes. I think there should be a system whereby someone can recover their data. Right. And I think we have both offices in Kenya; I think we have Google and Facebook. Facebook, I heard they shut theirs. In Kenya, I'm not sure. There's Google. Yes. And Meta as well. Meta, right. Which is now the company combining Facebook, Instagram and WhatsApp. Yeah. All right. Yeah. I wanted to enrich the discussion: it's very interesting how there is black-hat hacking and white-hat hacking. I didn't know that. I swear, I didn't know there's white-hat hacking and black-hat hacking. Yeah. And for the black ones, obviously, it's a negative connotation. Why should they be the bad ones? Right. And the white ones are the ethical hackers. Like white collar. Blue collar. Yeah. It's like a thin line of color differentiation. Yeah. Like how the black race has been framed, really. Right. Yeah. The connotations are just like that. Even the word blackmail. Why don't you have whitemail? Exactly. There's even somebody who argued, why should a car have black tires? Yeah. But then somebody came back with a comeback and said, but we use white toilet paper, so what are you telling me? I was like, wow, you nailed that one. We use white, and I've not seen red toilet paper, though pink I've seen. Yeah. There is a need for organizations, and even individuals, to invest in robust security when it comes to online hygiene. Yes. Sometimes related to your digital footprint. Yes. Exactly. Yeah. Cyber hygiene as well. Yeah. Cyber hygiene. Right. Yes. 
Because you can imagine an incident like the hacking of Kabarak University. Right. Yeah. I saw that. Yeah. Somebody from the Netherlands, or whichever country it was, I don't even know, but in terms of reputational damage, it really cost the university, right? There's a study that came out, I think, a few years ago showing how Kenyans set their passwords, using their names or dates of birth. Very simple. So among ten Kenyans, at least five will have a password related to their name or date of birth. Or ID number. Yeah. So it's very easy to hack, you know? Right. And these are the same credentials they use as admins of those pages. For Facebook, you need to be an admin to manage a page, using an individual profile, right? Right. So somebody can exploit that very easily, and I wouldn't be surprised if the attackers went through one of the personal profiles of an admin, yeah? So there are things like two-factor authentication. Two-factor authentication. Yeah. Even for Gmail now. Yeah. If somebody is trying to log into your Facebook in an unauthorized manner, you can arrest that before it happens, because you'd get a notification: was this you trying to log in from Kiambu? And yet you are in Nakuru, right? So you're able to arrest that situation. And also investing in capacity: can we have retraining of these social media managers, administrators and web managers on security? Because the threat keeps evolving. The hackers are getting smarter, you know. Hence the need to always be ahead of the curve, or at least aligned to the trends in digital security. Interesting. Now, we mentioned VPNs earlier. 
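The two-factor authentication mentioned above often works through time-based one-time passwords (TOTP, specified in RFC 6238), the six-digit codes that authenticator apps rotate every 30 seconds. A sketch of how such a code is derived, using only the standard library; the shared secret below is a made-up example, while real apps exchange a secret during 2FA setup:

```python
import hmac
import struct
import time

# Sketch of a time-based one-time password (TOTP, RFC 6238), the mechanism
# behind many 2FA authenticator apps: HMAC over a 30-second time counter,
# dynamically truncated to a short numeric code.

def totp(secret: bytes, at: float, step: int = 30, digits: int = 6) -> str:
    counter = struct.pack(">Q", int(at // step))        # 8-byte time counter
    digest = hmac.new(secret, counter, "sha1").digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

secret = b"example-shared-secret"  # made-up; real secrets come from 2FA setup
print(totp(secret, time.time()))   # a fresh code every 30 seconds
```

Because both your phone and the server derive the code from the same secret and the current time, a stolen password alone is not enough to log in.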
And recently I had a conversation with a friend who was saying he doesn't browse on a Kenyan IP, because of the content he views. And I was like, bro, what do you mean? Why not just browse from Kenya? And then he started talking about my Google search history. He was telling me, if I gave you my Google search history, you'd judge me badly. You'd say I'm a bad person. You'd say I have no morals. But I was like, why? You are what you are. Why should you be so scared, to the point that you don't want anybody to know your search history? And he was telling me stories. So how important is it to use a VPN? Okay. It depends on who has the VPN. Okay. So if it is from a genuine perspective, like the case of your friend who is going to be judged wrongly by society, or even by you, and you're calling yourself his friend, you know, it really raises alarms. So VPNs, the way they normally work, is as an undetectable system. Right. Whereby I cannot know your location. I cannot know which account, because it is anonymous. Yes. Right. So in this case scenario, VPNs are also commonly used to get white-collar jobs. Because not all of those job applications will accept what we have here in Kenya, will accept the IP address that you have. Because looking at where we stay, in Africa or in Kenya, this means that some jobs are just meant to stay on the Western side. So when you have a VPN, you can say, I'm working from California. Oh, right. You see. But that's cheating. It is cheating. Cheating in some form of way. But good cheating. It's like, yeah, in a case scenario, maybe, because this is a youth channel, just imagine a youth who does not have work, who has parents looking up to him, and mostly it's a him, given what our culture believes in this society. Right. So you find this youth has no choice but to search for a VPN, a paid VPN, which can be paid for one or two years if you want.
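The location-masking idea described above can be shown with a toy model: a website's geolocation check only ever sees the exit point of the connection, not the real origin. A minimal sketch, purely illustrative, with no real VPN protocol or network code involved:

```python
from typing import Optional

def apparent_location(real_location: str, vpn_exit: Optional[str] = None) -> str:
    """What a website's geolocation check would report: the VPN's
    exit location if one is in use, otherwise the real origin."""
    return vpn_exit if vpn_exit else real_location

# Without a VPN the site sees the real origin; with one, only the exit point.
print(apparent_location("Nairobi, Kenya"))                    # Nairobi, Kenya
print(apparent_location("Nairobi, Kenya", "California, US"))  # California, US
```

This is the whole trick from the website's point of view: traffic arrives from the VPN's exit server, so IP-based job or content restrictions are checked against that location instead.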
Right. So you are able to work and say you are in the U.S., just to get the dollar currency. Right. Yes. So you will use PayPal, and nowadays PayPal has been connected with M-PESA. Right. So you can easily transfer your money from PayPal to M-PESA. Right. So there is also, take note, some generative AI in there somewhere. Right. Yes. So looking at a VPN, it depends, as I was saying, it depends on who has it. So that is for good people. Right. For bad people who are trying to... Like now Millicent or Manga, somebody posted that video. Yes. Acted like they're not in Kenya, but they're just here. Yes, they must have used that VPN. Yes, they must have used it. And you can't trace them and get them now. It is impossible. Right. It is, you know, you know what happens. Because now you could end up getting hurt yourself. Yes. So, okay, the internet is a developing space, as I've said. Right. Soon, with these algorithms that are there, we are going to have a system whereby we have people who can detect them. But, you know, in every 12 disciples, there's always a Judas. Interesting. So we can say that we are learning these systems. And as we learn, we will be able to say, this is the person who did it through this VPN. But it is a learning curve for us now. Right. Especially for software developers as well. Because I feel like this is the time Africa is waking up. There's usually that soundtrack, Wake Up Africa. Yes. Where tech is moving fast. AgriTech is moving fast. Yes. There are innovations, there are scientific discoveries and many more. So, like you said, it's a continuous development curve. Yeah. Now, are there maybe possible solutions and recommendations as well? Have we talked about the trends? The trends in generative AI? Yeah. Yeah, I think we've touched on a few. Yeah, we've touched on a few. We've touched on explainable AI. Right. We've not touched on ethical AI.
Yeah, please, you can talk about it. We have guidelines set in place right now, like by the Office of the Data Protection Commissioner, whereby they're setting guidelines on where we draw the line on this. I think we talked about that as well. Right. We have what we call AI democratization, where now we have accessibility and inclusivity. Now, there's this thing that organizations are adapting to, whereby they're asking, for persons with disabilities, what can we do to include them in the internet space? Right. For us at Kickternet, we have been able to develop a website for them, so that when a person is blind or has hearing problems, they can still use it. We have what you call alternative text, what is called alt text, that people can use. It is a form whereby I have an image. Okay. So let's go here together. I have an image, and maybe the image is of me sitting down. So I'll write... Do we have those images? Are they ready? Please, if they're ready, she'll talk about them as we exit. Yeah. There is an image, and then I write my description under it. So that is what the system reads out; that is alternative text. I think you have seen an "alt" on images nowadays. Right. Yes, and all that. So that is democratization of AI. Yes. So apart from democratizing AI, we have what we call the ethical issues. I think those are well-covered areas that we've talked about when it comes to the trends in AI. Yeah. I can see we have about two minutes. Yeah. You can give your final remarks, and then say where people can access you. Yeah. In terms of the next level, the trends, there's something called artificial superintelligence. That's right. Like I can literally... The next level. Yes. Like it can read brain signals, you know. Ah. Like right now, you're prompting. You're just prompting. But can you imagine a technology that is able to... Read what you think.
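The alt text idea described above is just a short written description attached to an image, which a screen reader speaks aloud when the image cannot be seen. A minimal sketch in Python; the function and the sample file name are illustrative, not taken from any real Kickternet code:

```python
from html import escape

def img_with_alt(src: str, description: str) -> str:
    """Build an HTML <img> tag whose alt attribute carries the description
    a screen reader announces in place of the image."""
    return f'<img src="{escape(src, quote=True)}" alt="{escape(description, quote=True)}">'

# A screen reader would announce the description, not the file name.
tag = img_with_alt("guest-photo.jpg", "A woman seated in a studio during an interview")
print(tag)
# <img src="guest-photo.jpg" alt="A woman seated in a studio during an interview">
```

The point is simply that every meaningful image ships with a text alternative; purely decorative images instead get an empty alt attribute so screen readers skip them.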
Read your mind. And produce it. Body language. Yeah, on a screen somewhere. That would be outrageous, right? Exactly. And it's beyond human intelligence, and even uncontrollable. So I think in terms of the implications for trust and truth, you know, like what you mentioned, there is a need for responsibility across the pipeline. Yeah. Like developers, deployers, you know, and even other upstream stakeholders such as regulators, to ensure that the product design of these tools is grounded in ethical considerations, is responsible. And also that data protection is there. Yes. That's why data protection is there, and other stakeholders. And a central theme of the WITNESS strategic vision, you know, is to fortify the truth of critical voices who are defending human rights, so that their efforts and their agency are not threatened by, you know, issues caused by emerging tech, like misinformation and disinformation. And they do this across the pipeline. From when you're filming a video, to when you are preserving it, to when you are sharing it and presenting it, for example, to an accountability mechanism like a court of law. How do you ensure, across that pipeline, that the evidentiary value is very high? But most importantly, I think it depends where you sit on the tech spectrum, really, whether tech is good or tech is bad. As for me, I'm a technology optimist who, you know, prepares and thinks like a pessimist. Right. Like hope for the best but prepare for the worst. Yes, yes. So for more insights about our work, you can visit the WITNESS website, which is www.witness.org. And even better yet, you can track the generative AI conversation on Twitter, hashtag GenAIAfrica. All right. Is that your final point? Yeah. Social media. You've given the website; on social media as well? Our social media handle is at witness underscore Africa.
Okay. Yeah, Nema. Thank you so much, Brian, for having us here. And what I can say about this conversation is that we need to consider that AI is growing, and it's transforming how we create and how we think. Right. So it's a matter to take great consideration of when it comes to research. Right. We need to have more research, especially for the youths who are becoming content creators, innovators, software developers, who want to work in the tech industry as well. We need to study, we need to learn. There is no way out of this, because without learning, we cannot have growth. True. We cannot grow; even if Meta comes here and explains what they want to do, we cannot absorb what they want us to learn. Right. So it is proper for us just to know that tech is here to stay. Can we include it in our school curriculum, if possible, as a recommendation? As a recommendation. Yes. Actually, I was going to tap into that. You know, they are saying AI is difficult, but it was made by a human; there's a human brain behind it. True. It is made by a person. It's not a robot. It is not a robot. Because initially, you thought AI was robotics. So it is like the case scenario where he was talking about the use of superintelligent instruments. I was watching yesterday: China has now developed headbands whereby, for students in class, you can see if a student is concentrating or not. So when the headband turns red, the student is concentrating. So you can't daydream even a little. You can't daydream. It will turn blue, and the teacher will note that this student is not concentrating. That's dangerous. Yes. It's not dangerous if you look at it this way: at the end of the day, the parent gets a graph. What did my child do? But as I see it, it is not dangerous because... But now, is it connected to your brain? Is it that super...? It's not connected. It actually reads. It's just... It's trained to read. It's trained to capture vibrations. And signals, brain signals.
Is it related to a lie detector machine? A lie detector... I've seen recently a Kenyan who did a lie detector test. Yes. How do you detect if somebody is lying? Because I've seen it happen in American reality TV shows, where spouses who have taken each other to court did a session: they were asked about ten questions, and for the ones they answered wrongly, the machine was able to detect that you slept with someone. Why can't we have the same in Kenya, so that we solve these cheating cases? Yes. I think the tech is actually quite expensive. Oh, it's very expensive. It's very expensive. So, for it to come into the country, you first need to understand it, you know. You cannot just be importing and exporting things in and out of the country, you know. So, how this lie detector works is that it actually reads your nerves. You know, when you're lying and you're asked a question, you get anxious and your heartbeat starts racing. But you can be a chronic liar. You can... But there are people, I won't lie to you, who lie professionally. There are people who have already mastered this system of lie detection, so it is actually there. They can pass a lie detector test. Yes, they can pass a lie detector test. So, let me just give my parting shot, since it's time. I'll encourage people to log in to our website, which is www.kickternet.or.ke. And when you log in there, you will find most of the work that we've been able to do, from policy advocacy, capacity building and research to stakeholder engagement. And we also have a mailing list subscription, whereby we have numerous conversations with different people from different sectors and fields concerning what is happening in the tech space. So, I'll encourage people to do that. And also, on all our platforms, including Instagram, Twitter, TikTok (we had to go to TikTok to get into the trend as well) and Facebook, we are known as Kickternet. Thank you so much. All right. Yes.
This is the number. This is the bonus. There it is. It's all right. It's all right. Thank you guys for coming through. Thank you. It has been quite an insightful conversation. I didn't know about... I'm still stuck at black hat hackers and white hat hackers. I had no idea about that. Thank you guys, and I wish you the very best. Thank you so much. So, we have been speaking to Louis Mainga, who is a communications coordinator at WITNESS. He has shared very insightful details about, you know, generative AI, alongside the beautiful Nema Mujesia, who is a communications specialist from Kickternet. And definitely, if you have a problem and you need a solution, they are the experts. They'll definitely get to you. And on this note, we're going to take a very short break, because Calamity Valley is coming up next. They're going to dance. You know, if you don't have choreography, it's time to Google how to dance right now. On that note, you can find me personally at BrianSoko101, and the channel at underscore channel on the ground. Here we go.