Good afternoon, good evening, and welcome to the AI for Good Global Summit, all year, always online. My name is Ksenia Fonten from the ITU, the International Telecommunication Union, and I have the privilege of introducing today's webinar on AI and financial inclusion. Now, the ITU is the United Nations specialized agency for information and communication technologies, and we are also the organizers of the AI for Good Global Summit alongside the XPRIZE Foundation and in partnership with 37 UN sister agencies and ACM, co-convened with Switzerland. The goal of the summit is to identify the practical applications of AI to advance the Sustainable Development Goals and scale those solutions for global impact. Like most of the world, the AI for Good Summit has gone digital, with the weekly programming allowing us to reach even more people across the globe. And before I introduce today's moderator, let me go over some housekeeping rules. If you wish to ask a question, please submit it to the Q&A tab. The moderator will select and read out the questions to the panelists, and we are particularly counting on your participation to create a very engaging discussion. And speaking about interactivity, here is the first challenge for you. Could you please let us know from which country or city you are calling? Just send your message to the chat and make sure to enable it to everyone. And let me do this first. So I'm calling from Geneva. Okay, so it's going really fast. We have people calling from London, Spain, Barcelona, Sweden, USA, Brussels. Fantastic. Welcome, everyone. And now I would like to introduce our moderator. Her name is LJ Rich. She's an inventor, artist and one of the presenters of a very popular TV show, BBC Click. LJ, welcome. And the show is all yours. Hello there. Thank you. Thank you for having me. Can you hear me okay? I hope you can. My name is LJ Rich. As Ksenia said, I'm a TV presenter, music artist and AI composer.
And I've been monitoring global technology trends for some time now. I'll be your moderator for the next hour. And this is part of the AI for Good Global Summit, which is virtual this year. Throughout this panel, we'd love your interactions, and questions and chats are all welcome. So thank you so much for choosing to spend the next hour with us. For those of you hearing the term financial inclusion for the first time, one way to explain it is that everyone can access secure financial services and products at affordable prices. And they could be deposits, fund transfer services, loans, insurance, payment services, even a bank account. But as an example, if you don't have a permanent address, how can you get a bank account? That's where financial inclusion comes in. It also means not paying extra to access the same financial services as wealthier people. And many studies show that this would in fact boost prosperity for everyone. And it's an enabler for seven of the United Nations Sustainable Development Goals, or SDGs. So our panel today is going to rock. It promises a fascinating insight into what's happening and what's possible with some formidable panellists. As usual with these events, we're reaching around the globe: Singapore, Canada, London. So good morning, afternoon and evening to everybody joining us. And thank you to you, our ever-engaged international audience, too. So it's time to introduce our guests for today. Please turn your video and audio on and imagine a round of applause for Rory Macmillan from Macmillan Keck Attorneys and Solicitors, Kamaljit Singh from the Bill and Melinda Gates Foundation, and Alexandra Rizzi, Senior Research Director, Center for Financial Inclusion at Accion. Welcome, folks. Thank you all so much for joining. So what we're going to do is we'll start with a five minute talk from each panelist. Kamaljit, you're going to be up first. Thank you for joining us. Take it away. Thank you. Good afternoon, everyone.
It is my pleasure to be a part of this panel discussion on AI and financial inclusion. For me, both of these terms pack a lot of complexity. While AI represents the cutting edge of software and computing, financial inclusion highlights the stark reality that approximately 1.7 billion people in the world are still unbanked and don't have access to formal financial services. The shockwave from the COVID-19 pandemic has shown the importance of digital connectivity and the ability of governments, the private sector, and development organizations to rapidly provide assistance through digital channels and financial accounts. So let me begin with a brief introduction of the financial services for the poor strategy. I hope you can see the slides, Jim. Thank you. So I'm part of a team called the Financial Services for the Poor initiative. And I wanted to begin with an introduction of its objectives and why AI may have the potential to significantly impact them. Next one. All lives have equal value. These five powerful words are what I saw every day as we entered our office in Seattle. And they continue to inspire me and my fellow impatient, optimistic colleagues every day. This is a guiding principle that underpins the foundation's work across all its strategies. Our team's objective is to make markets work for the poor, take risks that others can't or won't take, and fight poverty through sustainable economy-wide efficiencies. Affordable and accessible digital financial services delivered through mobile connectivity are a core enabler of our strategy. We operationalize our strategy through three major pillars. The first pillar focuses on enabling and strengthening the regulation and policy environment to widen access and participation. The second one attempts to expand the identity and payments infrastructure to create the rails that deliver low-cost and interoperable financial services.
And the third one encourages participation of market providers to create innovative products and services that serve the poor, who can then begin to use these to capture the opportunities that a formal financial system has to offer. If we look at the needs that lie at the core of digital financial inclusion, they are basically to drive down the costs of serving the poor and ensure customer centricity in the delivery of these financial services through protection, grievance redressal, support, and clear information about products that suit the needs of the poor. But all of this needs to be done while balancing the new and complex risk landscape that arises because of new players, channels, business models, value chains, and, last but not least, the digital risks like cybersecurity, data privacy, fraud, algorithmic bias, etc. So I see a lot of potential for AI applications to make a significant impact in all these areas. And with that, I hand it back to LJ. Thank you so much. Thank you very much. That's a fantastic way to start. I really appreciate the way that you laid out the landscape there. Next up, let's have Alexandra. Thank you. Sure. Thanks so much, LJ. And thank you to the ITU for the invitation. It's a pleasure to be here this morning, and happy Thanksgiving to those of you who are celebrating it. I'm Alex Rizzi, a senior director of research at the Center for Financial Inclusion, and based in the Washington, DC area. Just for a little bit of background on CFI: the Center for Financial Inclusion is a think tank within Accion International. And we work on research, testing solutions and evidence-based advocacy to advance financial inclusion, and some of the thematic areas where we are engaged and plan to continue to do research are responsible data practices, climate change and financial inclusion, gender and financial inclusion, and consumer protection.
And on that latter front, we have about 10 years of experience in consumer protection and standard setting for the inclusive financial services field. And in that area last year, we put out a series of initial recommendations, or good practices, as it relates to digital credit and financial inclusion, and had some elements in there that address some of the opportunities and risks of the use of algorithms and AI in digital credit. And I think Kamaljit laid out, you know, beautifully sort of the larger picture of the opportunity for AI and financial inclusion; a Bankable Frontier report called some of these applications practical superpowers. And we absolutely see the potential for AI as a time-saving, cost-saving way to make better decisions and expand economic opportunity, whether it's for providers who are using innovative underwriting methods to advance products and digital credit to the unbanked, or regulators who are using advanced AI and machine learning techniques for fraud detection and market monitoring. But while we feel that the industry has made the, you know, kind of the use case and the opportunity case for the use of AI, at CFI we're focused on engaging more deeply and deliberately with how these tools are being developed and deployed and how they can be done as responsibly as possible, given their application with low income and vulnerable consumers. And we have a set of research questions and lines of inquiry that we intend to pursue and are pursuing in this area, whether it's how to effectively proxy for repayment capacity and affordability.
When you have sort of an incomplete financial picture of a potential borrower; to how to avoid further entrenching societal biases in the deployment of these systems; and from articulating some of the risks to thinking about what tools for accountability for providers and other actors could be leveraged and used as these technologies really outpace kind of traditional guardrails around data protection and consumer protection. And we don't say this to be naysayers or a wet blanket, but really because we feel like this is an important lens in an environment when, as Kamaljit said, the number of new players is growing, as the number of digital, kind of, immigrants, or people who are onboarding into digital financial services, is increasing, especially in the aftermath of, you know, and in the midst of COVID-19, and where the technologies are changing so fast. And we're leveraging CFI's track record in consumer protection in this area. And right now we're doing some research on responsible algorithms and talking to different industry players to better understand the state of practice. And just one of the things that makes this topic so interesting to me personally is the opportunity to pop my head out of the traditional financial inclusion set of players, because actors across sectors are grappling with these questions and how to design responsible AI systems, whether it's in criminal justice or healthcare. And while, you know, what's fair or responsible in criminal justice may look different in the weeds as compared to financial inclusion, there are a lot of questions around data quality, data sources, monitoring of these systems, how to get inside the black box, that I think are universal. And so it's really fun to engage with AI frameworks and data ethicists and thinkers who are outside the financial inclusion space. So it's a pleasure to be here today, and I look forward to the discussion.
Thank you, Alex. And yes, the idea of looking outside of a traditional financial inclusion bubble is absolutely a brilliant reason to find out from other industries what's happening. Those of you watching who might want to dig into AI ethics a little bit more might enjoy Stuart Russell, I think he's a Californian professor, who has quite a lot to say on AI and ethics. And I would absolutely recommend you check out some of his thoughts. Thank you, Alex. And now our final panellist for the day is Rory. Thank you so much, Rory. Off you go. Hi, good afternoon, morning, evening. I'm Rory Macmillan. I'm a lawyer, a tech lawyer; I do a lot of fintech work with governments and with fintech companies in a lot of different countries. And recently we worked with the ITU as part of the Financial Inclusion Global Initiative, studying AI and consumer protection. And there's a report available on the FIGI website on the ITU system. I just want to start us off; as a lawyer, of course, we're thinking about regulation, regulatory frameworks. And when you think about regulation, you think about risks: what are you trying to regulate for, govern for, what are the problems? So I'm just going to kick us off with a scenario and see if you can guess. If you can, log on to menti.com and put in the number on your screen, which should be showing. Just imagine we're in a country with a history of bias and discrimination against certain population groups. Imagine a health insurance company that gives points for checking your blood pressure, your blood glucose, your cholesterol, your weight, your waist circumference; gives you a wearable band on your wrist that monitors every step and heartbeat. It awards you more points for the number of steps that you take every day, or withholds points if you don't. It requires you to do a minimum number of hours of cardio exercise a week.
It measures that against the target cardio rate, which it calculates by subtracting your age from 220. It gives you points, and withholds them if you don't meet the minimum exercise target. It reminds you, it nudges you, if the week is drawing to a close and you're short of the three hours of exercise that you're supposed to have done. Now imagine this health insurance company is also a bank. Every purchase you make using the bank card is monitored, and you get points when you buy vegetables and it withholds points when you buy chocolate. The amount of the points that you earn affects the price of your insurance, including your health insurance. Prizes are thrown in, like 30% off a holiday at the beach, and the bank gets access to your device data on your steps, the heartbeat, but also your health data. And it takes all of that into account when it assesses your credit score when you ask for a loan. Where are we? Well, we're in South Africa. This is the Vitality health insurance provided by Discovery in South Africa. And so what does this tell us? This tells us that we are in a world in which AI is being used now, globally, in a wide range of countries and economies. And it is being used where it is allowed to be used without tight restrictions. Data protection laws and restrictions on use of AI are really not yet enforced effectively in South Africa. And so these are the sorts of scenarios that we need to be aware of, thinking about what sorts of risks might be present in terms of the aggregation of data sources. What is going further than privacy should allow? How does one ensure accuracy of the data? How does one avoid bias creeping in? So those are some of the issues that I expect will come up later on. I don't think one should lead with risks unless the entire panel is about risks, but it's not about that today. But I'm sure we'll get to these later on. So it's a pleasure to be with you, and thank you very much for the invitation. Back to you. Thank you.
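The scenario Rory describes is concrete enough to sketch in code. The following is a purely illustrative toy, not Discovery's actual Vitality rules: every point value, threshold and multiplier is an assumption invented for the example, apart from the 220-minus-age target rate mentioned in the talk.

```python
# Toy sketch of a points-based wellness scheme feeding into pricing.
# All rules and numbers below are hypothetical illustrations.

def target_heart_rate(age: int) -> int:
    """Maximum target cardio rate, per the 220-minus-age rule."""
    return 220 - age

def weekly_points(cardio_hours: float, steps: int,
                  veg_purchases: int, chocolate_purchases: int) -> int:
    """Award or withhold points based on tracked behaviour."""
    points = 0
    points += 10 if cardio_hours >= 3 else -10   # minimum 3 hours of cardio
    points += steps // 1000                      # 1 point per 1,000 steps
    points += 2 * veg_purchases                  # rewarded grocery choices
    points -= 2 * chocolate_purchases            # penalised ones
    return points

def premium_multiplier(points: int) -> float:
    """More points means a cheaper premium, clamped to a 0.7-1.3 band."""
    return max(0.7, min(1.3, 1.0 - points / 200))

print(target_heart_rate(40))            # 180
print(weekly_points(4.0, 70000, 5, 2))  # 10 + 70 + 10 - 4 = 86
```

The point of the sketch is how mechanically behavioural surveillance can translate into a price: once points exist, wiring them into an insurance premium, or a credit score, is a one-line function.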
Thank you, Rory, and all of our panelists. So my goodness, that's a really good start, isn't it? We're going to continue by having a conversation with all of our panelists. So please turn your cameras back on if you were speaking earlier. Alex, please turn on your video camera. And audience, if you have questions, please feel free to put them in the chat. I can see them and I will be asking them as and when it fits. So let's go to our first section today, which is AI and innovations. And we're going to talk, first of all, about the innovations in digital credit and digital lending. Rory, I'm aware you've done quite a lot on mobile financial services, and I know that you've just finished your five minutes of talking. But I think it's a really good idea to start our next section. So please can you tell us a little bit more about innovations in digital credit and digital lending? Well, sure. Three kind of fundamental things that a lender needs to wrestle with when deciding to extend credit or not are identifying the borrower accurately, evaluating their credit risk in light of their credit history, and then perhaps taking some sort of collateral. And a huge part of the world's population, particularly those at the bottom level of the pyramid, are unable to deliver any of those three. They can't identify themselves simply; they may be lacking foundational identification documents. They may have little or no credit history whatsoever, and may have nothing to put up as collateral. And the benefits that AI has been bringing, I think, as a matter of financial inclusion primarily, are solving these problems: particularly getting around the challenges of identification using tiered KYC sorts of approaches that allow an easier way of identifying people using face recognition, and then also using data that's aggregated from a lot of their activities. Briefly, just to interrupt: KYC, for those of you watching, is know your customer. It's a set of regulations. Sorry, continue. Yes.
And being able to build a data history on the most rudimentary of digital activities, which may be as basic as your social network as shown in your telephone calls, and your cash flow as shown in your top-ups for those calls, which can then build into further data. That's been a very important driver of the use of AI in financial inclusion. AI is also being used in bigger ways. In fact, that is not the driving use of AI in financial services. Anomaly detection, which improves the ability to detect fraud, is increasingly used, and natural language processing, NLP they call it, is also being used to communicate with consumers, including in their own language. You see the likes of MTN using MoMo in Cote d'Ivoire to be able to do that. And so there are a number of different innovations that are going on. But I think in terms of financial inclusion, and reaching those who truly have a blockage to getting into the financial sector at all, the analytics of credit history and bringing people on board by identifying them better is a real driver. Thank you. Thanks. Kamaljit, I'm going to come to you actually, because we've got some really interesting stuff coming up around, well, income equality, as well as financial inclusion. And I guess we're going to talk later about privacy. But you've got a world view there, with the nature of the Gates Foundation and its funding of several different projects across the world on financial inclusion. So please, can you tell us your thoughts on digital credit, digital lending? Yeah, thank you, LJ. So, some excellent points have been made. The use of AI technologies has many applications, ranging from looking at identity to calculating risk or tokenizing collateral. But if we look at digital credit alone and look at applications of AI there, on the one hand, there is no denying that it's a very important entry point for a large number of the poor.
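Rory's point about building credit analytics from rudimentary digital activity, such as airtime top-ups and call networks, can be sketched as a thin-file scorer. This is a hypothetical illustration: the feature names, weights, bias and approval threshold are all invented for the example, and a real system would fit them from repayment data.

```python
# Minimal sketch of alternative-data credit scoring: a logistic model
# over phone-usage features. Weights and threshold are illustrative only.
import math

# Weight per feature; in a real system these would be fitted offline.
WEIGHTS = {"monthly_topups": 0.15, "avg_topup_amount": 0.02,
           "distinct_contacts": 0.01, "months_active": 0.05}
BIAS = -3.0

def repayment_probability(features: dict) -> float:
    """Logistic score: estimated probability the borrower repays."""
    z = BIAS + sum(WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

def approve(features: dict, threshold: float = 0.6) -> bool:
    return repayment_probability(features) >= threshold

applicant = {"monthly_topups": 8, "avg_topup_amount": 50,
             "distinct_contacts": 40, "months_active": 18}
# z = -3.0 + 1.2 + 1.0 + 0.4 + 0.9 = 0.5, so p is roughly 0.62
print(approve(applicant))  # True
```

The sketch also makes the panel's later concerns tangible: the score is only as good as the weights and the input data, and any demographic pattern in who generates these data trails flows straight into who gets approved.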
Digital credit is quite important to onboard them onto the platforms. But then again, a problem that we run into often is over-indebtedness. How do we prevent predatory lending? How do we ensure that the loan products being sold to vulnerable populations are actually suitable for them? And remember, these are populations which already face a lot of barriers: digital literacy, financial literacy, lack of understanding of what products are being sold. So if you want to separate AI-driven products into the front end and the back end, there's a huge mismatch there. As for what impact it has on poverty alleviation, I think the jury is still out. So like every other technology, I think it's an experiment in progress. And we are getting data from the field, from all over the world, on what impact AI technologies are having, especially digital credit, in poor sections of society. Whether it has any long-lasting effects on poverty alleviation is a question that I frankly don't have the answer to, but I would love to find out. I think the whole point of having panels like this is to open a conversation. I mean, it's evident that no one's got the answers yet. But I really like the idea that just by having this conversation, we could start some powerful changes and movements. So I appreciate the thoughts there. And I mean, AI is a fascinating area anyway. For example, in loans, you could work out someone's ability to pay back a loan based on an analysis of their social media profiles. And I mean, there's a lot around here. Alex, it feels like I should be bringing you in around this area, because we're just touching on consumer privacy here. I'm pretty certain that quite a few people would be happy to give up a little bit of privacy in exchange for a more favorable loan rate. Yeah, it's really interesting, the trade-offs that people make.
And I think, you know, global surveys show that increasingly people are willing to, you know, trade off some of their data to achieve access to a particular service or product. But that doesn't necessarily mean it's a good thing for them all the time. And I did want to pick up on something: you mentioned, LJ, sort of, social media. And just one thing I wanted to touch on was the kind of explosion of alternative data that's being used by these AI systems; you know, we're talking about digital credit here, so as a source for underwriting and deciding who's a potentially good borrower. And it's just so fascinating to think about how the democratization of mobile phones has allowed people to create more data trails for themselves and to be able to be visible in some ways to providers and other actors who offer services. But one of the things that we've been thinking about is the use of alternative data, and it's interesting to even think about how to define alternative data. We've been talking to different providers for our research, and they've said that kind of the Overton window has shifted so much that what might have been considered alternative even a few short years ago is, you know, increasingly incorporated into underwriting. I think one, you know, FinTech CEO famously said all data is credit data. But we sort of wonder, you know, from a few angles: in terms of data quality, you know, there aren't the same kinds of standards or rigor in ensuring accuracy for some of these alternative data sources. And you know, there has been research that looked at the kind of data being traded and purchased through data brokers and found it kind of riddled with inaccuracies.
And so, you know, if you think about garbage in, garbage out: if some of the inferences that are being made about someone's creditworthiness are based on alternative data that's inaccurate or not really painting a full picture, you know, that requires a bit more scrutiny. And we've thought a lot about this also in the context of the potential for bias. And as much as the democratization of mobile phones allows populations who were, you know, perhaps invisible to become more visible, there are certain, you know, societal dynamics that we think might be perpetuated. You know, for instance, if you think about the click streams or the data trails that are created by different demographics, we know that women in low- and middle-income countries are less likely to own a mobile phone than men, less likely to use mobile internet, less likely to use their phones for kind of sophisticated tasks that might generate a click stream that could then be leveraged by a provider to offer them a product. And so thinking about who gets visible through these different data trails and who might still be left out, I think, is an important dynamic that, you know, we think should be discussed by the industry. Yeah, that's fascinating insight, actually. And for those wondering what the Overton window is, it's basically the range of things that are politically OK at any given time. And yes, it does shift. So a really good example: a few years ago, I visited a theme park in the UK. I was filming for the show, and they were offering free tickets in exchange for likes on Facebook, and the people inside the park were really happy to share their data with this theme park in exchange for one free ticket. And, you know, a few years before that, that would have been unthinkable.
So I think there is this opportunity for us to be responsible and start moving the range of acceptability into an area that we find more comfortable. I'm wondering if we could just talk a little about the challenges of innovating in this area. How do we get authorities to collaborate? How do you get providers, for example, to do account-to-account interoperability? These companies are all vying for a place on somebody's mobile phone. You know, if you've got mobile phone banking, company one would like you to use their product. How do you get them to interact with company two? This is probably quite a big ask. I'm presuming this might be a regulation question. So, Rory, I suspect that you are unmuting as we speak. Yes, well, it's a challenge, and the opportunity of growing financial inclusion is often driven by the commercial profit that providers will achieve. And if they're able to achieve that by maintaining a dominance in the market, and we see this with some telecom operators, for instance, who are also offering mobile money, and then through that, with platform partnerships with banks, they may be able to corner the market in digital credit. There is then a potential risk of them really dominating that and consumers not having an extensive choice. Interoperability really comes in where the payment systems function across platforms: whether you can introduce different payment systems allowing money to be transferred between different platforms and different accounts. And that's where you need to bring in competition authorities, who need to talk to financial regulators, central banks and sometimes telecom authorities, to work out how to impose interoperability requirements. Another solution that is being introduced, though not in very many low income countries because it's very heavy and bureaucratic administratively, is open banking, where you require the financial institution to make the data it has about its customers available to competing providers.
This is really targeting the more established incumbent banks, where the data they've accumulated is seen as a barrier for new up-and-coming firms. Perhaps it's not so directly relevant to financial inclusion, but that is another mechanism for trying to get competition going. Can I just come back briefly to the privacy point that you raised earlier, which is this trade-off between privacy and access to services? I think it's a really interesting question. There was a study that CGAP did; CGAP is the Consultative Group to Assist the Poor. Last year, looking at the degree to which people in lower income countries will indeed trade off privacy, meaning allowing access to their data in exchange for access to a product. And they asked a pool of subjects, I think in Bangladesh and Kenya, if I remember correctly, whether they would be prepared firstly to wait an extra period of time to get access to the loan. And they, you know, they ran behavioral economic studies on this live. And they found that a lot of people were willing to wait, the equivalent of standing in a line, in order to get an assurance that their data would not be freely used by the entity they were providing it to, to get a more secure privacy protection. Similarly, a significant number of them were prepared to accept a higher price on their loan, which suggests that there is some sensitivity to this, even in population groups who you would expect really to be very hungry and just to be concerned about access to the product. You know, the exact sensitivities of this, I think the market is going to be the one that tells us, and we'll find out really whether companies start adapting their products to be a bit more privacy-respectful and see if it's really a competitive attribute that's worth them pursuing or not. I suspect we shouldn't be too rosy-glassed about it. But it's certainly in play.
And that's really fascinating, to find that, you know, that sense of liberty, of privacy, you know, ranks so high in terms of what we feel is acceptable. I mean, it is hard to find a balance, and Kamaljit, I'm going to bring you in here. How do you find this balance? You don't want to under-regulate. You don't want to over-regulate. I know that you're funding a project in Egypt, for example, where there is an incredibly low take-up of people with bank accounts. So how do you navigate this elegantly? I think when it comes to the challenges of innovation, it's not that straightforward, even from a technology perspective. Because Alex talked about people being willing to give up a bit of privacy for getting something in return, but that assumes a world in which people understand what will be done with their data. Now, I remember days when cookies just meant something delicious; then cookies started to mean other things. I still don't know how these might be used. So to say that people are willing to give up their data, without understanding what harms may come from it, is not really setting the field in their favor. And in addition to the regulatory fragmentation that Rory talked about, even from a technology perspective, there are very strong ethical questions. So, for example, explainability: when an adverse decision is taken by an AI algorithm, which learns from different sets of data, even the programmer cannot explain how it arrived at the conclusion. So how do we explain it to the consumer as to why that adverse decision was taken? So from a technology perspective, the conversation keeps coming back to data and what to do with the data. So I think this is one of the biggest challenges in innovating in this area, where some of the concepts, such as data or privacy, their definitions are not well settled. I mean, we talk about privacy, but there is no universal standardized definition of privacy. We don't know what it means.
We do talk about protecting it. But we don't talk about using privacy for empowerment. I think we need to be asking the questions of: are the poor impacted more by too much privacy? Or are they paying the cost of too much protection of privacy? And that line is not very clear in the conversation today, because the technologies are themselves evolving. This whole idea of having an algorithm or a program today is a very different one, because today algorithms can travel. If you look at federated learning, they are traveling from place to place, computer to computer, aggregating data and learning in ways where humans don't understand how they are arriving at their decisions. So I think balancing all of this would require regulators and technologists to start speaking each other's language. So as part of my job at the Gates Foundation, I came in as a technologist, but I had to learn how to speak regulation. And hopefully we can get the regulators to learn how to speak technology, so we can come to some joint understanding of what these terms mean computationally and legally. I mean, what you've just said has really inspired me to think about this idea of privacy as something that's quantifiable, in the same way that you've got a happiness index, for example, or you've got currency. I mean, there's this transaction that we as consumers have been unaware of, I think, for many, many years. If your email is free, then it's likely that you are not the customer, you are the product. We know all about this, and with the advent of AI and these sort of black boxes where you don't quite know what goes on in the middle, you just put some, you know, you put some ingredients in one end, and then something that hopefully resembles a cake comes out the other. And it really depends on the ingredients that you put in at the beginning, whether or not the cake actually is to your taste.
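Kamaljit's description of federated learning, where the algorithm travels from computer to computer and the raw data stays put, can be sketched with a toy federated averaging loop. The one-parameter model, the client data and the learning rate below are illustrative assumptions, not any deployed system.

```python
# Toy federated averaging (FedAvg) sketch: each client fits y = w*x on its
# own private data; only the updated parameter w, never the data, is shared
# and averaged by the server.

def local_update(w: float, data: list, lr: float = 0.1, epochs: int = 20) -> float:
    """Gradient descent on squared error, using only this client's data."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def federated_round(w: float, clients: list) -> float:
    """One round: the model travels to each client, then updates are averaged."""
    updates = [local_update(w, data) for data in clients]
    return sum(updates) / len(updates)

# Three clients whose private points all happen to follow y = 2x.
clients = [[(1.0, 2.0)], [(2.0, 4.0)], [(3.0, 6.0)]]
w = 0.0
for _ in range(5):
    w = federated_round(w, clients)
print(round(w, 3))  # converges towards 2.0
```

Note that even in this toy, the server only ever sees averaged parameters, which is the privacy appeal, and also part of why, as Kamaljit says, explaining how the final model reached a decision becomes harder.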
And I do think that we're on the edge of some really interesting thoughts here on whether it's possible to quantify the sort of privacy that is universally acceptable, to head briefly back to the Overton window. I'm going to leave that hanging while we move on to another section, because you all knew we were going to get here: risk. And Rory, I know this is something you could possibly spend the whole hour on, but just give us a brief talk about that. Wow, well, privacy is one risk we've just talked about, and another, I think, is back to this notion of explainability. How do you ensure, as Alexandra mentioned earlier, the quality of the data that's being drawn into AI systems, and that it's kept up to date? We're in a very Wild West kind of data world, where you've got a lot of data brokers buying and selling lists of individuals with different attributes, and many of them are out of date. And how much does it matter to the provider that they may simply have poorer data being used as training data, or when running the algorithms to analyze a person? Well, it ought to matter; let's at least remind ourselves that it ought to matter, because the better their data analytics are, the better their decisions will be when extending credit, the better their non-performing loan rates should be, and the higher their profits should be. So there ought to be a technological and market driver for good and improving quality. However, there are going to be plenty of providers who are still experimenting, who haven't got the discipline or the training, who haven't integrated the ethicists yet, and who are still producing poor results. And as Kamaljit said, this explainability theme, which Alexandra also mentioned at the beginning, is a really important part of accountability.
How do you hold providers accountable for the results that are coming out? If there's bias, for example, and it's everywhere, you're going to have a major problem. A lot of technologists will tell you there is a trade-off between the explainability of a system and its accuracy: the more accurate you're trying to make it, the more levels of deep learning you're going into, and the harder it is going to be to explain the result to the customer or to the regulator. So with these trade-offs you may need to document the decisions made. There are some interesting ideas, just to close on this point about explainability: Sandra Wachter and Brent Mittelstadt at the Oxford Internet Institute have come up with the idea of using counterfactuals to explain. Because why do we care about explainability? What does the customer really need to know? The customer really needs to know what he or she could do differently to get the result they want. And it might be: stop smoking and you'll get the health insurance product, or if you earned a few thousand more a year, you'd be able to get the loan. One might be able to use some counterfactuals in the system so that the customer could understand better how, by modifying some variables, they might get different results. So there is some interesting work being done there, but it's certainly going to be a challenge going forward. Thanks, Rory. Alex, I feel like it would be really good to get your feelings on risk and accountability. Sure. I think Rory and Kamaljit have articulated, and we've talked a little bit about, some of the risks. I wanted to talk a little bit about accountability and how we're thinking about it, especially in this time when, as Kamaljit said, providers and regulators are learning to speak each other's languages. And it will likely be some time before there are robust regulatory solutions.
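Rory's counterfactual idea can be sketched in a few lines. This is a toy illustration, not Wachter and Mittelstadt's actual method, and the scoring rule is entirely made up: given a simple credit rule, we search for the smallest income increase that flips a rejection into an approval, which is exactly the "if you earned a few thousand more a year" style of explanation.

```python
# Toy counterfactual explanation for a credit decision.
# Hypothetical scoring rule (all figures in thousands per year):
# approve if income minus twice the debt is at least 30.

def approved(income, debt):
    return income - 2 * debt >= 30

def income_counterfactual(income, debt, step=1, limit=100):
    """Smallest income increase that flips a rejection, or None."""
    if approved(income, debt):
        return 0  # already approved, nothing to change
    for extra in range(step, limit + 1, step):
        if approved(income + extra, debt):
            return extra
    return None  # no counterfactual found within the search limit

income, debt = 40, 8                     # rejected: 40 - 16 = 24 < 30
extra = income_counterfactual(income, debt)
print(f"If you earned {extra},000 more per year, "
      f"the loan would be approved.")   # extra == 6 here
```

The appeal, as Rory notes, is that the customer gets an actionable statement about their own variables without anyone having to expose the model's internals.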
And in that interim, what are some tools for accountability, and where can it come from? I just wanted to speak a bit on the black-box nature of some of these systems. As we started to approach this topic, we came from the angle that these decision systems are completely opaque, especially with some of the machine learning applications in AI; as Rory said, sometimes even the developers themselves don't completely understand the logic of how they've gotten to a certain decision. We started off being a bit intimidated by that: how can we actually advance responsible practices? But the way we've been thinking about it, which draws a bit from our work on consumer protection, is in terms of the different management systems that the provider can have in place, beyond the actual algorithm or AI system itself, to support responsible practices. And that starts from: who decides what data sources are appropriate? What assumptions are made about what that data is proxying for? In the algorithm, who decides what training data sets are appropriate? We've talked to a number of providers who, because of budget constraints or because they're startups, often use the same training data set as they move from market to market. Who monitors the outcomes of the system? Who looks for differential impacts? A lot of those things beyond the actual black-box AI system itself, a lot of those decisions, are made by humans and can be documented and discussed, including with external actors. So I think there is a lot that can be touched upon, improved, and formalized in terms of documentation around these systems, without actually getting to the secret sauce.
And we've seen a few examples of other levers of influence. CFI is just about to put out a paper that drew from about 400 fintechs that took part in the MIX and CFI Inclusive Fintech 50 competition. We were looking at correlations between external funding sources and data protection measures. So, for example, 72% of self-funded fintechs had informed consent mechanisms in place, but that increased to 81% of those that had seed or angel investors, and 85% of fintechs that had venture-level series investors had informed consent. Taking it even further, only 50% of self-funded fintechs gave customers the ability to revoke consent, while 59% of seed- or angel-level fintechs gave their consumers that option, and 76% of venture-funded fintechs gave their consumers the ability to revoke consent. This correlation between having external funding and improved, more robust data protection practices, not perfect, but more robust in one element, to us sets up a conversation about what role investors and other industry actors can have in influencing their portfolios and the companies they're working with, maybe while regulatory systems and measures are still being set up. I think there's something in both directions there: it's reassuring that the projects with informed consent built in are getting more funding. And I also feel that the sense of social responsibility makes for great optics for the company. That's a slightly cynical way of looking at it, but if it looks good and it also is good, it's AI for good, then those are actually really quite amazing statistics that you've shown there, and hopefully anybody watching who's building systems at the moment will move towards including informed consent as part of that build.
Now I'm aware of the time, and we've got a fantastic number of questions. But before that, I just want to go back to Kamaljit for a little bit of regional focus, some specific examples of what's happening around the world right now. Thank you, L.J. Before I go to specific examples, I want to touch on informed consent a little. Of course. Part of the reason informed consent is so interesting is that it operates like a catch-all phrase, where basically the assumption is that as long as there's informed consent, we are good to go. But if I were to ask the audience today how many have clicked "I agree" without ever having read the document they're agreeing to, we would figure out new ways of how 99% approaches 100. And this problem becomes even more pronounced when we are talking about populations who are consenting to things they don't understand, things that haven't been explained to them very well. And frankly, some of these things are so complex that even their creators don't have good ways of explaining them. So this whole idea of notice and consent doesn't feel to me like driving accountability; it seems like shifting liabilities. It seems to be a way of saying: by the way, it's on you from now on, now that you have clicked the informed consent button. Having said that, as part of our work in different countries, we look at technologies as they proliferate across Africa and Southeast Asia. This is where most of the populations who lack an ID are; this is where most of the populations who lack bank accounts are. And what we find is that a lot of these new technologies come in with a lot of fanfare, but there's not that much impact on the ground. So for me, the more interesting part is how these technologies help drive lower costs. Because at the end of the day, the cake, as you called it, the proof is in the pudding.
If these technologies can help lower costs through innovative mechanisms, we should see growth in markets that are able to serve the poor profitably. And I think that's what I look forward to the most: how these technologies will be integrated into specific systems such that costs can be lowered. We've already seen a lot of examples, not only in digital credit but also, say, in customer service: chatbot-driven systems that look at customer complaints, and developments in voice technologies and natural language processing, where people can talk about their grievances and providers and regulators can act upon them. All of these elements are coming together to lower the cost of the whole system, and I think that's what matters. So if we take a global focus, I think lowering costs should be a key factor in how we serve the poor and onboard them onto formal financial services. That's brilliant. Sorry, go on, Alex. Sorry, I just wanted to jump in to say that I very much agree, Kamaljit, with your critique of informed consent. Especially given the Wild West data ecosystem we've talked about, informed consent as the only tool, one that lets providers do whatever they want with consumers' data, is insufficient. And a lot of the regulatory frameworks in the markets where we're working are still GDPR and GDPR-inspired frameworks that rely heavily on consent. My point was just that investors and the industry can have a lever of influence and incentives over how providers are managing AI and algorithms; informed consent was the example, but I think it should go much further in the future, for sure. Yeah, it doesn't go far enough. Those of you watching, there is a website called tosdr.org; I'll put a link in the chat.
It's called Terms of Service; Didn't Read, and it will explain and rate what you are and aren't agreeing to. So I'll just stick that on there. I'm really amazed and impressed at the quality of the questions that we've got, so I think what I might do is come in with a few questions now, if everybody is happy. Please feel free to jump in with your answers. There are quite a few here, so I'm just going to start with a question from Chaim Shain. Hello. Does anyone offer an education program to explain to the young generation in poor areas how to use and trust these new services? I guess this is a question about disseminating information, and how do we do that? I was just about to confess that I think this is a very important area which doesn't get enough attention, because it's one of those things that falls under the collective action problem: whose responsibility is it? Is it the market providers, because they want to onboard customers? Is it the regulators? Or is it the customers, because they are the ones who will be impacted by the harms? But I think a lot more attention does need to be paid to this education piece. I'll second that. The Financial Times last week launched a new digital literacy program, affiliated with another organization, I forget which now, that is maybe based in the UK, but I think it will have global benefit. Now, of course, your average low-income person in a poor country is not aware of that. But what they found in some of the studies they did was that something like a third or more of the population, even of the US or the UK, could not calculate compound interest, some very basic financial concepts; they wouldn't be able to tell you whether they were paying more if the interest rate was on a compound basis or not.
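The compound interest point above can be made concrete with a few lines. The rates here are purely illustrative: the sketch shows the difference between simple and compound interest, and how a seemingly small per-loan fee explodes when a same-day loan is rolled over every day for a year.

```python
# Illustrative rates only. Simple vs compound interest on 100 units at 10%/yr:
principal, annual_rate, years = 100.0, 0.10, 5
simple = principal * (1 + annual_rate * years)       # 150.0
compound = principal * (1 + annual_rate) ** years    # ~161.05
print(simple, round(compound, 2))

# A "6% per loan" fee looks small, but re-borrowing daily for a year
# means paying 6% of the principal 365 times over:
fee_per_loan = 0.06
annualized_simple = fee_per_loan * 365               # 21.9x, i.e. 2190%
print(f"{annualized_simple:.0%} of the principal paid in fees per year")
```

So a borrower who rolls over a "6%" daily loan is paying thousands of percent a year in fees, which is exactly the kind of gap between quoted and effective cost that financial education is meant to close.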
Now, we've found in work we've done on consumer protection issues in a number of countries that, similarly, consumers very often simply don't understand the information they're provided. For example, if you're using M-Shwari's digital credit in Kenya through the M-Pesa system, you'll be told what the interest rate is, but people would not know: are they being charged that on a daily basis, a monthly basis, or an annual basis? When it's quoted as something like six percent, it sounds great, except it turns out that that's the charge for the loan itself. And if you repay it the same day and then borrow again the next day, which a lot of people do to buy product in the market for operating capital purposes, they end up paying literally thousands of percentage points of interest on an annualized basis, without really realizing it and without realizing that there may be better competitive alternatives. So this education goes to consumer protection, which goes to competition, because if people don't really understand and can't evaluate the product, then it's very hard for them to switch providers. And that is going to be very important to getting costs down and getting a more inclusive product in the end. Yes. And it's important to mention that this has nothing to do with people's intelligence; it's to do with how much they are given in terms of understanding. I wonder at this point whether financial assistance could be given by Alexa or Siri or Bixby or Cortana; you could have personalized financial assistants. But then the question is, again, how do we regulate it? What level of responsibility are we expecting people to take for building this? This is an absolutely fascinating area. Kamaljit, I can see that you are aching to say something here. I was, when you mentioned Alexa and financial assistance. This is an area that I've looked at.
If we look at these voice-enabled technologies, they are mostly found in the developed world. If we ask whether Alexa works in the languages that poor people speak, the answer would be no. If we ask whether Alexa understands the accents of the people who speak those languages, again the answer would be no. So there's a lot of ground to cover in enabling this for the populations who need it the most. For someone like me, Alexa is just yet another medium for accessing the power of computing. But for somebody else, it may be an absolute must-have, because they don't have other interfaces for interacting with these computing devices. This is an area of AI I could talk about for absolutely ages, but I'm instead going to try to get through as many of these questions as possible in the limited time we have left. So thank you for your questions and chats, everybody; it's brilliant reading these at the same time as hearing these incredible insights. Mark asks: what is the thinking on optimizing financial inclusion for the SDGs, the Sustainable Development Goals, while ensuring consumer protection and competition as central bank digital currencies emerge? Wow, who would like to take that? It's kind of a big compound question, isn't it? I can jump in briefly. It's a new area, so I must confess that I am myself a student of it. I'm looking at central bank digital currencies, and there does seem to be a lot of potential for enhancing financial inclusion with them. But there are a lot of questions to be answered. How will these systems be implemented? How will they function? What sort of technology choices will be made? How will the equation of accountability and liability be worked out? And what will the role of the central bank be in this space?
Because when we talk about central bank digital currencies, we are talking about a whole new financial system, one that is different from how central banks have operated for the last 300 years. There's a very live debate about the technology infrastructure needed to bring in new technologies such as central bank digital currencies. How long will that take? What will the role of cloud computing be here? What will the role of AI technologies be? Are central bankers the right parties at the table to look at the technology options for this? So I think a lot of collaboration needs to happen between regulators and technologists for this to become a reality. And there are lots of experiments going on with central bank digital currencies in many countries; Uruguay is an example that comes to mind, and China as well. Just to clarify: is a central bank digital currency something like a blockchain, like Bitcoin, but issued by the state? Is that an accurate read? I'm very new to that bit. I don't know either. So Mark, thank you so much for that question; it has completely bamboozled most of the panel, and that is the sort of thing we adore here at the AI for Good Summit. So, very happy about that. Well, I've only got a minute left; let me just have a look and see if there's another question we can manage. I've got, let's see, Alpha from Shenzhen, in the Greater Bay Area: any consideration of AI for Good playing a role in the dimension between humans and the environment? And do you know what, there's another question here about applications of AI in helping environmental projects and climate action. So I'm going to combine the two and ask: can we look at financial inclusion and climate action in the same breath? Are these two goals possible to merge? That's a difficult one. It's a huge, huge topic, right?
But I suppose one connection might simply be that if financial services are about better use and allocation of resources, you're going to end up with less waste one way or the other. You ought to end up with people being able to navigate environmental disasters that destroy their homes, because they'll be able to call on sources of financial support that otherwise might not have been available. In terms of whether it will actually reduce climate change, I guess someone out there can probably bring a link, but it's not an area I'm an expert in. Oh, it's quite fascinating how everything's connected. Sorry, Alex, off you go. Just to say that for the populations we're concerned about, their role in preventing climate change is not necessarily the main issue; it's about building up their resilience in the face of the shocks and vulnerabilities brought about by climate change and extreme weather events. And I think AI and financial inclusion could absolutely play a role there, whether it's designing products that are fit for those weather events, or using geospatial data to predict events and the subsequent financial needs of vulnerable consumers. So there are absolutely applications in that space that still need to be explored. And on that note, I have to say thank you so much for all of your thoughts. I wish we had more time; it feels like we've only scratched the surface. So thank you, Rory McMillan, Kamaljit Singh and Alexandra Ritzi. Thank you to the ITU technical team for keeping us on air, and to you, the audience, for your amazing time, attention and questions. I've been L.J. Rich. Thank you. And it's a pleasure now to hand back to the ITU. Thank you. Thank you very much, L.J.
and thank you to all our panelists for all your great insights. The recording of the session will be available soon on our website. And for those who are interested, we invite you to join us for the upcoming AI for Good sessions. Tomorrow, the 27th of November, we have "From Problem Owners to Problem Solvers": helping people build AI solutions on their own terms, and you can see all the details right now in our chat. And next week on Monday, we have an AI for Good keynote with Jeanne Meister, managing partner at Future Workplace, on the now and future of using artificial intelligence for human resources. So once again, you can check all the information now in the chat, and we invite you to register for the upcoming sessions on our website. I would like to thank, of course, all our partners, sponsors, and Switzerland, our co-convener, for their continuing support. Thank you everyone, and hope to see you very soon.