Hi, FOSSASIA 2020. It's great to be here, albeit remotely. I hope everybody's keeping well. We'd like to start this talk by taking a picture. If you don't want to be in the picture, please hide your face or hold something in front of it. Okay. Thank you very much.

FOSSASIA 2025. It's great to be back, and I'm proud to see everybody here. Again, as we've done in past talks, we'd like to take a picture. If you don't want to be in it, please hold something in front of your face. Okay, let's go. Thank you very much.

So, what a year it's been. We've had the 28th Amendment in America, the Trump Amendment: he stays in office until the opposing party gets sufficient votes to overtake him, which triggers an election. But it has to be a physical, nationwide poll that overtakes the sentiment analysis run across all social media, news outlets and whatever else the government wants to use; the poll has to beat the AI before an election is triggered. Otherwise, he's still the president. COVID-25, predicted by AI and tracked through wearable tech, was stopped in its tracks after only 1,000 cases and no deaths. Singapore continues to act as a bridge between the regions of the balkanized web: from here you can get log-ons to the Chinese web, the Russian web, the EU web, the American web. One of the interesting things about the American web is its very aggressive e-commerce AI. You log on with your bank details, and the AI predicts what you want and what you need, sends it to you and deducts payment before you do anything; you then have to fight for refunds through online AI courts. We've had drone swarms at the exits of transport hubs using facial recognition to track people with elevated temperatures, known suspects and so on, following them across towns. It's quite interesting seeing the swarms fly around, though.

So who are we? We are DataKind, a global nonprofit whose strapline is harnessing the power of data science and AI in the service of humanity. We do pro bono data work through meetups and the other events we organize for charities and nonprofits. It gets the charities and nonprofits interested in being data-driven, and it introduces data scientists, and people interested in data-driven work, to the NGOs and charities. There are six chapters around the world: two in America, one in the UK, one in India, and one here in Singapore.

So, the results from the pictures we've been taking. We've been analyzing clean, de-biased data, and looking at the audience through social media and the various other sources we're permitted to take data from. As the slide says, we've been using de-biased, homomorphically encrypted data: it's securely stored, it's all auditable, and only a few people have access to the unencrypted data. The goal is to maximize ticket sales, and so the revenue FOSSASIA gets, which we can then plough back into the community as free tickets, early-bird tickets, bursaries and prizes. The models were developed and assessed by a panel against the Singapore AI governance framework and the data sharing agreement.

We were going to present a chart on that, but we've been having some issues with it, and now we've got to change things, especially as the Singapore AI framework is now regulation; we've got to look at re-engineering some of this work. So here are some results. We divided the community into five groups, normalized attendance against the 2019 figures and tracked it from there; we've had to go back through the data to do that. What we've seen is that some groups have been impacted while others have gone up in numbers, and now we've got to go through the data and work out why. We may be discovering proxies. Had we been looking for proxies earlier, maybe we could have avoided some of this bias and given fairer access. We can't go back and correct it now, but we can review the strategy going forward. So, thanks, FOSSASIA 2025. I'd like to take a picture, but maybe we shouldn't, because we don't know what kind of bias we'd be introducing.
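For the curious, here is a minimal sketch, in Python, of the normalisation-and-drift check just described: express each group's attendance as a ratio of its 2019 baseline and flag the groups that have fallen furthest behind. Every group name, figure and threshold below is invented for illustration; a real pipeline would run over the actual (encrypted) ticketing data.

```python
# Minimal sketch: flag groups whose normalised attendance has drifted
# from the 2019 baseline. All names and figures are invented for
# illustration; the shape of the check is the point, not the numbers.

BASELINE_2019 = {"students": 420, "academics": 180, "industry": 610,
                 "government": 90, "hobbyists": 250}
CURRENT = {"students": 310, "academics": 195, "industry": 640,
           "government": 88, "hobbyists": 140}

DRIFT_THRESHOLD = 0.8  # flag any group below 80% of its 2019 baseline

def normalised_attendance(current, baseline):
    """Attendance per group, as a ratio of that group's 2019 baseline."""
    return {group: current[group] / baseline[group] for group in baseline}

def flag_for_proxy_review(ratios, threshold=DRIFT_THRESHOLD):
    """Groups whose ratio falls below the threshold deserve a proxy review."""
    return sorted(group for group, ratio in ratios.items() if ratio < threshold)

if __name__ == "__main__":
    ratios = normalised_attendance(CURRENT, BASELINE_2019)
    for group, ratio in sorted(ratios.items()):
        print(f"{group:>12}: {ratio:.2f}")
    print("review for proxies:", flag_for_proxy_review(ratios))
```

A flag here proves nothing by itself; it only tells you which groups to investigate for proxy features in the model.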
So let's do a bit of a revisit. Early in 2019, the PDPC published its AI governance framework and invited comments. Looking at it, and especially at the list of contributors, we thought there wasn't enough involvement or protection of civil society. So DataKind volunteers, people from Effective Altruism and other interested individuals got together and wrote a response, paragraph by paragraph, suggesting improvements. It didn't take away from the substance of the framework, but we thought it added dimensions working towards protections and awareness, and for the PDPC these improvements could potentially head off litigation. We organized ourselves as a non-profit working group for AI. Some of our suggestions made it into version two of the framework, and we were credited for that.

But before we get to AI and responsible AI, I think a lot of this starts with data. This is just the latest graphic indicating, for some of the major platforms, how much data flows from all our interactions. We've moved increasingly from the physical to the digital, and we generate vast amounts of data with every action we take. It pours onto servers at greater volume and greater speed than ever before, and we give this data up mostly for free. It is this data resource, or fire hose, whatever you want to call it, that feeds the machines generating revenue through ever more accurate recommendation engines. In and of itself this may not really be a problem, more an irritation of constant interruptions and pop-up adverts. And to some it seems slightly creepy when one device appears to know what you were looking at on a different one. But everything is data these days. To my mind, one of the emerging battlegrounds is voice: the more we speak to things, the more is recorded, and the more data we give up to make the engines more accurate. That's a good thing, isn't it? We want the models to become more accurate. Isn't it even more irritating to be recommended hair products when we're bald, and mistakes like that? It seems to me that we live in an age of constant cognitive dissonance: we know we're giving away data, and that may be a bad idea, it's invasive, but there's so much cool stuff out there and so much of it for free.
But it is how this data is taken, how we are persuaded to give it up, and what is done with it that can be part of the problem. Shoshana Zuboff, in her book The Age of Surveillance Capitalism, talks about data being expropriated, since in many cases there is no apparent law against it: data is taken without permission, worked into product and sold back to us. The companies seem to work on the idea that it's better to ask forgiveness than permission. Look at a product like Street View, which was developed in this way: Google drove around taking lots of pictures, since photographing public spaces is unrestricted. And it's been brilliant, a boon to many people. But we are being habituated into giving away a lot and then paying to get it back, and if we object, it feels like a losing battle. Things like the right to be forgotten, to have potentially damaging data removed, are being rolled back across the world, and that can cause problems.

The case of Neda Soltani in Iran shows what can happen when mistakes are made. Her name was similar to that of Neda Agha-Soltan, who was shot during the protests around the 2009 election. A link was made after searches for the similar name, and Soltani's face was then circulated as the face of the martyr. The authorities approached her, asking her to appear in person and debunk the killing as fake news. She wouldn't do that. She was then told she could be charged with treason, threatened with imprisonment, even death, and eventually she became a refugee.

And many people think this is just the beginning. A lot of this will get worse as IoT and wearables become more ubiquitous, gathering data about how we react to things, and mistakes will be made. You could say this has been done for a long time, but with the models used in today's AI systems it is being done at scale and at speed, and I presume that goes for the mistakes as well. The Five Factor personality model has been applied to what we post on social media for quite a while, using not just what we post but how we post it: the language used, the length of words and other parameters. This has met with some success in predicting which demographic people fall into. Now imagine how much more emotion is transmitted beyond the words themselves, in our manner and deportment in a video stream; tie that up with directly measurable parameters from wearables, and we are setting ourselves up to be manipulated big time. Watching someone react to a video, reading micro-expressions of surprise, anger, confusion and sympathy, gathers far more information than a post-event survey that simply says "I liked it". Couple that with heartbeat, blood pressure, breathing rate, arousal and anxiety states, and you have something that can manipulate far more successfully, applying micro-stimuli of images, text and sound to herd and to nudge with a longer-term goal in mind. And we're not always sure what those goals are.
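To make the "not just what we post but how" point concrete, here is a toy sketch of the kind of surface features such profiling starts from. The feature set is an invented placeholder, not a validated Five Factor model, which would be trained on large labelled corpora.

```python
# Toy sketch of "how we post" features of the kind fed to personality
# or demographic models. Feature choices here are illustrative
# placeholders, not a validated Five Factor model.
import re
from statistics import mean

def surface_features(post: str) -> dict:
    """Extract simple surface-level features from a single post."""
    words = re.findall(r"[A-Za-z']+", post)
    sentences = [s for s in re.split(r"[.!?]+", post) if s.strip()]
    return {
        "word_count": len(words),
        "avg_word_length": round(mean(len(w) for w in words), 2) if words else 0,
        "words_per_sentence": round(len(words) / len(sentences), 2) if sentences else 0,
        "exclamations": post.count("!"),
        "first_person_words": sum(w.lower() in {"i", "me", "my", "mine"} for w in words),
    }

if __name__ == "__main__":
    post = "I loved the keynote! My favourite talk so far. Can't wait for more."
    for feature, value in surface_features(post).items():
        print(f"{feature:>20}: {value}")
```

Each feature is trivial on its own; aggregated across thousands of posts, and joined with wearable and video signals, they become a profile.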
Maybe it's not an issue when something pops up that's really cool, something we didn't know we wanted. But when democracy is at stake and money buys more airtime and social media reach than the next guy has, the world's inequalities are amplified. It might be okay if we're nudged towards a healthier lifestyle: we like bad sugary food and drink, so our health can be nudged towards better things. But if I'm then sold remedies to cure my bad behaviour, or my insurance premiums go up, we create socio-economic inequalities. There are people who can't afford to get better and might be trapped in a policy that determines the food they can get, and that can trap them further, as poor diet can lead to poor academic achievement and no way out of the trap. On social media, experiments have been run on millions of people and published in academic journals, showing marked shifts in voting behaviour from A/B-type testing of social media streams. And these experiments do not have to follow the same ethical standards that academia does; in the US, I believe that's the Common Rule, introduced after the manipulative experiments of the 1960s. But academics are now knocking on the doors of these companies to do this kind of research. Do they live with the cognitive dissonance and pretend they're doing good, or are they simply unethical researchers? That's a question.

And with respect to our privacy: giving our data away could be a violation of privacy, but what does it feel like to have privacy violated? There's research suggesting we behave as if our personal physical space were being violated. How do we deal with this? Physically, there are cultural norms that we learn, adhere to and respect in different cultural settings, and somehow we need those online as well. To help with part of that, we need control of our data: who has it, who takes it, and what they do with it.

In the voice battleground, companies have been taken to task for snooping on our daily interactions through televisions, digital assistants, even toys. Some of this data is transcribed by humans, so people are reading it. For those who say they've got nothing to hide: do you really want your private conversations listened to, transcribed and potentially sold on? How would you react if somebody followed you around all the time, making notes on everything you do? I don't think you would react very well, so why allow it? Would you have all your phone records, bank records and location records published publicly? This is not the same as the PII that's usually talked about in privacy terms: we all have something to hide, something we want to keep private. Smart devices are mapping out our living spaces and passing the data on; smart beds record and analyse our sleep patterns remotely. Children's toys have been co-opted, and worse: not only are they recording and passing on your child's conversations, they're also asking personal questions, questions about where the child lives, which can't be right. I believe some of these toys have been banned in certain places. And strange conditions are put into the Ts and Cs that accompany these devices: no upgrades, and the device may not work as expected, unless we can take your data. I'm not sure of the validity of some of these terms; they don't seem to make sense to me.

So once our privacy is violated, once our data is expropriated, do we have any say? Are we then manipulated by increasingly accurate AI that nudges us towards herd behaviour? Is that the kind of society we want? Some of this is reflected in the governance frameworks being published around the world, and my colleague Raymond will take that up now.

Thank you, Jeremy, for that excellent introduction to AI governance.
For my part of the talk, I will focus on the big-picture trends in AI governance frameworks proposed by governments, nonprofits and universities all over the world to mitigate the negative impacts of AI on people and society. We use the paper "The global landscape of AI ethics guidelines", published in 2019, which surveys 84 published AI governance frameworks. These frameworks cover the following broad themes: transparency; justice and fairness; non-maleficence; responsibility; privacy; freedom and autonomy; trust; beneficence; sustainability; dignity; and solidarity. I will cover only the first seven in this talk, as the last four are less common. For each principle, I give an example where it led to either a positive or a negative outcome. Spoiler alert: they are mostly negative. Framework development has mostly occurred in Europe and North America, but a small number of frameworks have been developed in Asia, including in Singapore, and since the paper was published, two more important frameworks have emerged from China.

The first theme I'm going to cover is transparency, which also covers explainability, interpretability and disclosure. This generally boils down to using machine learning or statistical models that can explain the reasons behind the decisions they make. A positive example is credit scores in the US, which are used to determine eligibility for loans and credit cards. Everyone has the right to receive a credit report that explains the information going into the score, such as previous mortgage and utility payments, total debt and so on. This transparency reduces the likelihood of miscommunication and gives customers an idea of how to improve their scores.

The next theme is justice and fairness, which also covers inclusion, diversity and accessibility. This one is a negative example. Some years back, a large e-commerce company built a tool to automate recruiting decisions: give the system 100 resumes and it would return the top five candidates. Unfortunately, the algorithm was heavily biased against women, and the root cause was that men were overrepresented among hires in the historical data set used to train the model. Fortunately, the system was decommissioned after a year, once the issue was recognized. As Jeremy mentioned previously, doing this would be really bad for your company.
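As a hedged sketch of the kind of audit that might have caught this earlier, here is the standard "four-fifths rule" check on selection rates by group. The candidate records below are invented; only the shape of the check matters.

```python
# Sketch of a selection-rate audit using the "four-fifths rule" of
# thumb from US employment-discrimination practice. The decisions
# below are invented; plug in your model's real outputs instead.
from collections import defaultdict

# (group, was_shortlisted) pairs, e.g. taken straight from model output.
decisions = [
    ("men", True), ("men", True), ("men", False), ("men", True),
    ("women", False), ("women", True), ("women", False), ("women", False),
]

def selection_rates(decisions):
    """Fraction of candidates shortlisted, per group."""
    totals, selected = defaultdict(int), defaultdict(int)
    for group, shortlisted in decisions:
        totals[group] += 1
        selected[group] += shortlisted
    return {group: selected[group] / totals[group] for group in totals}

def adverse_impact(rates, threshold=0.8):
    """True for any group selected at under 80% of the best group's rate."""
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

if __name__ == "__main__":
    rates = selection_rates(decisions)
    print("selection rates:", rates)   # men: 0.75, women: 0.25
    print("adverse impact:", adverse_impact(rates))
```

Run against its training-era decisions, the recruiting tool above would have failed this check on day one.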
The next theme is non-maleficence, which covers prevention of harm and security. Like the Hippocratic oath taken by medical doctors, it is the principle of "first, do no harm". Just this year, dozens of women in Singapore had their images taken from social media sites and doctored using a face-swapping app that lets the user replace the faces in pornographic images with the faces of targeted women. The doctored images were subsequently uploaded to sex forums. Note that creating, forwarding and possessing such images is a form of sexual harassment and is covered under the recently passed Protection from Harassment Act provisions. Following the outcry, the app was removed from the app store, but the harm to these women had already been done.

The theme of responsibility also covers accountability, liability and acting with integrity. In 2017, around 650,000 Rohingya had to flee Myanmar for Bangladesh, fleeing persecution, and a lot of that violence was fuelled by hate speech spread on social media. In Myanmar, social media is effectively synonymous with the internet, owing to its high penetration rate. What level of responsibility does the social media platform bear for hate speech propagated through its content recommendation system? In many legal jurisdictions the answer is "quite a bit", especially as anti-fake-news laws have been passed in the last couple of years. To its credit, the platform in question has significantly increased the number of Burmese-speaking content moderators to deal with such issues, but this still may not be enough, especially considering the current COVID-19 pandemic and all the fake news and fake cures being passed around the internet.

Privacy is an important part of AI governance, as well as of data governance. In 2018, news broke that Cambridge Analytica had harvested data from millions of users to feed its political ad campaigns, very much against the terms of service of the social media platform the data was taken from. Not only did this spark a political firestorm, but Cambridge Analytica was eventually shuttered in a hail of controversy over its abuse of this personal data.

Finally, we cover freedom and autonomy, which includes things like consent, self-determination and empowerment. Make sure your customers know what they are getting into and have the freedom to decide what's in their best interest. A positive example would be making your terms of service easily understandable. The picture shows the terms of service of a number of very well-known online platforms; as you can see, they are mind-numbingly long. Don't do this. Ask for consent as and when you need it, instead of getting it up front for everything possible. And, as Jeremy mentioned previously, don't let organisations push people around just to maximise profits.

In summary, whichever part of the world you are in, there are frameworks coming into force, and many of them will become laws and regulations in the near term. The sooner we start considering them in our AI architectures, the less likely we are to run afoul of them when they eventually become the law of the land. Thank you very much; I'll hand it back to Jeremy to close out.

Thank you very much, Raymond. Some really good stuff in there; hopefully it has given people a lot to think about. I think the people here can help, so this can be a bit of a call to action. Free and open source software is a responsible movement, out of which I think will come protections and ways of working that help mitigate bias, prejudice and wrong outcomes. As individuals, you can question whether the outcomes and motivations in applying AI align with the values of the organisation. Does the organisation even have values? Build monitors and checks at regular intervals in the pipeline, including over time and perhaps in specific areas, so that outcomes stay within expected boundaries. I know that's not always possible with deep neural nets and the like, but build alerts for when bias is found, so that an investigation can be done. Perhaps these investigations can be carried out with internal and external ethical review boards involving legal and social scholars, not just the technical people.
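As a minimal sketch of what such a monitor could look like, assuming a recurring batch job that can see the model's recent per-group outcomes (the group names, outcomes and threshold below are placeholders):

```python
# Minimal sketch of a recurring bias monitor for a deployed model.
# Group names, outcomes and the threshold are placeholders; wire this
# into your own batch pipeline and alerting channel.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("bias-monitor")

MAX_DISPARITY = 0.2  # alert if positive-outcome rates differ by > 20 points

def positive_rates(outcomes_by_group):
    """outcomes_by_group maps each group to a list of 0/1 model outcomes."""
    return {group: sum(v) / len(v) for group, v in outcomes_by_group.items() if v}

def check_and_alert(outcomes_by_group, max_disparity=MAX_DISPARITY):
    """Return True if within bounds; log a warning (page a human) if not."""
    rates = positive_rates(outcomes_by_group)
    spread = max(rates.values()) - min(rates.values())
    if spread > max_disparity:
        # In production this would open a ticket for an ethics review.
        log.warning("disparity %.2f exceeds limit %.2f: %s",
                    spread, max_disparity, rates)
        return False
    log.info("disparity %.2f within bounds", spread)
    return True

if __name__ == "__main__":
    this_week = {"group_a": [1, 1, 0, 1], "group_b": [0, 0, 1, 0]}
    check_and_alert(this_week)
```

The point is less the arithmetic than the habit: run it on a schedule, and make a human investigate every alert.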
Maybe we can also set up citizens' juries where AI is going to be implemented by public bodies, to gauge the temperature of the people on whom these systems will operate. Do your work well, do what you do best, and do it responsibly. Be brave, and be counted. Thank you very much. We're from DataKind. We love data, but we need to be careful with it. Thank you very much. Thank you. See ya. Bye.