Welcome to the FIGI Symposium 2019 here in Cairo, Egypt, where I'm very pleased to be joined in the studio today by Mr Rory Macmillan from Macmillan Keck. Rory, welcome to the studio. Thank you. Now, I'd like to start by talking a little about data privacy. I know that you've been involved in a report, and I wanted to ask: what are the main findings of the Security, Infrastructure and Trust Working Group's report on the data privacy issues of emerging technologies?

Thank you. Yes, we've been looking at artificial intelligence, big data and machine learning, how they're being used in digital financial services, and the sorts of dilemmas, policy problems and challenges they're throwing up. The first thing is that these technologies are creating great opportunities for financial inclusion, by making it possible to better assess the risk of individuals using alternative data when traditional data is not necessarily available from credit reference bureaus, perhaps because people have never borrowed before, or have never submitted enough data to those bureaus for a real risk assessment to be made. So typically, lower-income populations have not had the benefit of the financial services you and I are used to having. Great opportunities are arising, but at the same time there are consumer protection and data privacy concerns that need to be addressed. In the work, we looked at these across three stages of the customer journey: first, what the customer learns and is asked to agree to when coming on board to take on a financial service; secondly, during the journey, what sorts of conditions and restrictions apply to the use of their data, and what risks arise that need to be regulated; and lastly, if problems have arisen in the provision of the financial service, what sorts of accountability challenges are faced because of the nature of big data and machine learning.
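The alternative-data scoring described above can be sketched in a few lines. This is a minimal, illustrative model only; the feature names, weights and threshold are hypothetical and do not come from the report. It shows the basic idea: a thin-file applicant with no bureau record is scored from behavioural signals instead.

```python
import math

# Hypothetical alternative-data features for a thin-file applicant.
# None of these names or weights are from the report; they only
# illustrate scoring risk without a credit bureau file.
WEIGHTS = {
    "months_of_mobile_money_history": 0.08,
    "avg_monthly_airtime_topups": 0.05,
    "bill_payments_on_time_ratio": 2.0,
}
BIAS = -2.5  # baseline log-odds of repayment with no positive signals

def repayment_score(features):
    """Map alternative-data features to a 0-1 repayment probability
    via a simple logistic model (illustrative only)."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

applicant = {
    "months_of_mobile_money_history": 18,
    "avg_monthly_airtime_topups": 6,
    "bill_payments_on_time_ratio": 0.9,
}
score = repayment_score(applicant)
approved = score >= 0.5
```

In practice such models are trained on historical repayment data rather than hand-set weights, which is exactly where the accuracy and bias concerns discussed below come in.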
And what we're finding is that, firstly, a key element of consumer protection and data privacy law is about notifying the consumer of the risks they may face and asking them to consent, particularly to the use of personal data. This is very challenging and complex in the context of big data, because providers are typically looking to collect as much data as they possibly can, whereas one of the core principles of data privacy is data minimization: only take what is relevant, proportionate and necessary for what you're going to do. Likewise, one of the goals is to tell consumers the purpose to which their data will be put. But with machine learning, very often the algorithms are running their tests and detecting patterns, on top of which further machine learning, which becomes independent of even the initial coding, is occurring, so that the provider may not even know what it's going to learn from the machine learning. It's therefore difficult to tell the consumer the purpose to which the data will be put. All of this puts a lot of pressure on the model of the consumer consenting to data being used. It leaves a great imbalance: as we all know, any time we click the box "I consent", nobody really knows what they're consenting to. So there is perhaps a need to look beyond this notion of consumer control over their own data, and to figure out how actually to better embed privacy principles throughout the design of these services.

In the actual provision of services themselves, we're facing challenges around things like ensuring the accuracy of the data that goes into machine learning programs, the data used to train the algorithms, as we put it. How do you check that it is up to date and accurate? If people have maybe paid off a loan but that's not reflected in the data going in, that may affect whether they're ever extended credit or not.
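The training-data accuracy problem lends itself to simple pre-training hygiene checks. The sketch below flags records whose loan status may be stale, such as a loan still marked outstanding long after its last update, so a human can verify them before they feed a model. The field names and threshold are hypothetical.

```python
from datetime import date

def stale_records(records, as_of, max_age_days=180):
    """Flag training records whose 'outstanding' loan status has not
    been updated within max_age_days; the borrower may have repaid,
    and training on the stale record could wrongly deny them credit."""
    flagged = []
    for r in records:
        age_days = (as_of - r["last_updated"]).days
        if r["status"] == "outstanding" and age_days > max_age_days:
            flagged.append(r["borrower_id"])
    return flagged

records = [
    {"borrower_id": "A", "status": "outstanding",
     "last_updated": date(2018, 1, 10)},  # possibly repaid since
    {"borrower_id": "B", "status": "repaid",
     "last_updated": date(2019, 4, 2)},
]
flagged = stale_records(records, as_of=date(2019, 6, 1))
```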
There are also issues of biased outcomes coming out of big data and machine learning systems, which are running off an essentially historical set of data about population groups and where they live. You may find that there are geographic parts of a city with certain historically disadvantaged groups that have been excluded from a lot of services. As a result, the algorithms may end up treating those as higher-risk communities and not extending credit to people within them, even though they would actually be a good credit risk.

Lastly, when it comes to redress and accountability, one of the big challenges coming out of the big data and machine learning world is explainability. If you want to hold somebody accountable for anything, you need to ask them to explain how they made the decision. When we're looking at automated decision systems that grant a loan or an insurance product based on the automatic decision-making of that system, the deeper the machine learning has run and the more complex the patterns it has produced, the harder it is to explain in any kind of rational redress mechanism. Secondly, these codes are typically the very essence of the business plan; they're trade secrets, and it's very hard to get them explained. So we're looking at solutions such as the potential use of counterfactual models, where you may not be able to explain to the consumer exactly how or why they were refused credit, but you can say: you said that you earned $30,000 a year; if you had said $40,000, you would have had that credit. Or: you had a certain number of negative items on your insurance record; if you didn't have those, you would have had this policy. So there are areas to explore in all of these issues we're looking at, but the challenges are myriad.
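The counterfactual approach just described can be demonstrated without opening the model at all: treat the decision system as a black box and search for the smallest change to one declared input that flips the outcome. The decision rule, threshold and step size below are all made up for illustration.

```python
def approve(income, negative_marks):
    # Stand-in for an opaque automated decision system; the real
    # model's internals (a trade secret) need not be known.
    return income - 5000 * negative_marks >= 38000

def income_counterfactual(income, negative_marks, step=1000, cap=200000):
    """Return the lowest declared income (searching upward in `step`
    increments) at which this applicant would have been approved,
    or None if already approved or no income up to `cap` suffices."""
    if approve(income, negative_marks):
        return None  # already approved; no counterfactual needed
    candidate = income
    while candidate <= cap:
        candidate += step
        if approve(candidate, negative_marks):
            return candidate
    return None

# "You said you earned $30,000 a year; with this income instead,
# you would have had that credit."
needed = income_counterfactual(30000, negative_marks=0)
```

The same brute-force search generalises to other single attributes, such as the number of negative marks on an insurance record, giving the consumer an actionable explanation without disclosing the model.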
We're finding that there's a sort of call for help from the software engineering community: the IEEE and other bodies are producing very useful attempts to build ethical systems to help work out how engineers should deal with these things as they're coding. I think the area where we need to work next is figuring out how to equip the engineers with the right ethical principles, and then how to set the right sort of regulatory parameters that allows innovation to occur, generating these services for the unserved while protecting the consumer at the same time.

What about technical standards? Are technical standards needed to address some of these issues?

I think there will be. As I was just mentioning with the software engineering community, standards are very useful because they provide an intermediary level where industry comes together, with some collaboration from policy makers, and finds a common way of managing things, solving a collective action challenge. Regulators, I think, would probably struggle if they tried too early to impose a very heavy top-down solution to the details of these issues. Industry can do a whole lot to develop this area, for example a set of standards around what would be acceptable inferential analytics, by which I mean what would be an acceptable way for machine learning to develop profiles of people in order to assess their risk. What sorts of attributes of a person, whether it's race, religion, gender or other attributes, can be sensitive? When is it permissible, and when is it not, to ever include these in any sort of machine learning system that's trying to assess risk? Are they all off limits? Can they ever be used?
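One way such a standard might take mechanical form is a screen over model inputs: strip attributes deemed off-limits before they reach the risk model, and flag known proxies for human review. The attribute lists and field names here are hypothetical; note that simply dropping sensitive fields does not remove proxies for them (a neighbourhood can stand in for ethnicity, as in the bias example earlier), which is exactly why acceptable inference needs standards rather than just a filter.

```python
# Hypothetical policy lists; a real standard would define these.
SENSITIVE = {"race", "religion", "gender"}
KNOWN_PROXIES = {"neighbourhood"}  # allowed through, but flagged

def screen_features(raw):
    """Drop attributes on the sensitive list and report which of the
    remaining features are known proxies needing human review."""
    allowed = {k: v for k, v in raw.items() if k not in SENSITIVE}
    proxy_warnings = sorted(k for k in allowed if k in KNOWN_PROXIES)
    return allowed, proxy_warnings

raw = {
    "income": 30000,
    "gender": "F",
    "religion": "x",
    "neighbourhood": "district-7",
}
allowed, warnings = screen_features(raw)
```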
I think standards will also be very helpful for building in processes to ensure accountability, so that you have good documentation systems for the decisions made in the way the coding is designed, processes for involving a human in the loop during the coding, and then human intervention, which is often a part of some of the data protection laws, where consumers who have grievances have an opportunity to appeal to a human being outside that computing system. So there are going to be standards that can be helpful, and I think bodies like the ITU and others are going to be very useful in hosting and facilitating some of that work.

It seems it's quite a complex arena, particularly because you've got these multiple layers and, like you say, the machines learning from the machines. In some ways, are they going to be acting autonomously, and will we be able to regulate for that?

I think we're going to have to. I mean, the reality is that decisions are being made, and if those decisions are automated, coming out of a set of processes, then ultimately they're being made by those who coded them. As a society we have regulation of insurance, we have regulation of banking, and some of that regulation is about non-discrimination. Some of it is about making sure that the information that goes into those decisions is accurate, and that some effort is made to ensure consumers are not just being treated arbitrarily. Throughout consumer protection laws we have notions of fairness, accountability and transparency, FAT in some jurisdictions, and these will necessarily have to play a role in the way the coding works. As your AI experts will tell you, artificial intelligence does not itself contain values. So if, as a society, we want our services to be provided in a manner that respects some of these values, we'll necessarily have to give some framework regulation to them.
It certainly sounds like there's going to be a lot of work for the legal profession in this environment, that's for sure.

We come very much behind everyone else. We're still learning the vocabulary and trying to help, but we do notice that, at the same time, software engineers don't necessarily have even a vocabulary of values taught to them in their computer engineering classes. And this is not surprising; why should they? But what is needed now is this dialogue, and in fact it's not just the legal profession and regulators. Frankly, we need philosophers in this game, trying to help figure out what sorts of values we are trying to protect, and how to balance those against commercial values when the massive collection of data, and its massive use for advertising and other services, is commercially profitable. Working one's way through that is going to require multiple disciplines.

Now, this symposium is all about financial inclusion, with the Bill & Melinda Gates Foundation and the World Bank. We're looking at the poorest members of the population as well, trying to access finance digitally. On that side, of course, we want to avoid the computer essentially saying no; as you say, these are people who have been particularly disenfranchised before. Are you optimistic for the future of this environment?

I think there's room for optimism, and there's also room for caution. Clearly there is an uptake of services occurring through digital means, and this is just tremendous. At the same time, there is some talk in Kenya of financial exclusion as a result of excessive uptake of some of the digital credit services, where consumers were not ready, didn't manage repayment well, and found themselves blacklisted. So there's a lot of positive opportunity and good progress happening, but the reality is we've got to be ready and prepared to manage these sorts of challenges.
We've been hearing comments about people being profiled by their social media interactions, and whether they can then access credit because they've said something on Facebook, or bought something from a particular provider over the internet, and so on. Are those the kinds of things people should be looking out for?

Yes. You know, about 25 years ago The New Yorker ran a very famous cartoon: a picture of a dog sitting in front of a computer, telling his dog companion, "On the internet, nobody knows you're a dog." The thing is, today everybody knows you're a dog. They know what kind of dog you are, whether you prefer horse meat or beef out of your can, and which is your favourite tree. There's a lot of data on the user, through browsing history and through our apps and phones granting access to a great deal of data, not only location but everything we're doing. There are limits to that, but a huge amount of data is collected, and that profiling is being done. And again, as you describe, it's an opportunity. One of the challenges, though, is dealing with it when our historic way of handling data about people has been to build formal systems: the individual discloses a whole lot of data about themselves when applying for a service, and credit reference bureaus collect a huge amount of information about individuals and score their creditworthiness according to formal data collection systems, which have rules about protecting accuracy and clear opportunities for individuals to call up and say, "What's the information you have about me? That's wrong; I didn't lose that job, or I wasn't fined for that, or I wasn't late on that payment," and get these things corrected. In the big data world, the data is all out there and it's very hard to know; it's being purchased, and it's projected to be a $100 billion industry in about ten years' time.
It's a huge amount, and ensuring accuracy is a real challenge; it is very difficult for consumers to get proper access to make sure it's correct. So that's a central theme.

Finally, you've taken the time to be here at this symposium. There are a lot of symposiums around, a lot of initiatives. What makes this one particularly important for your calendar?

It brings together the financial sector regulators, the telecom regulators and policy makers, along with the World Bank, the ITU, the Bill & Melinda Gates Foundation and the Bank for International Settlements, in a unique sort of combination to talk, and that I find tremendous.

Well, thank you very much for joining us in the studio, and hopefully we'll catch up again in the very near future. Thank you. Thank you.