It's my pleasure to introduce my colleague Dr. Rachel Adams, a senior research specialist at the HSRC. As our CEO, Professor Soudien, has mentioned, there has been a close collaboration with the University of Pretoria and with the CSIR, and Rachel sits on the independent expert panel (I'm trying to read the small writing here) of the Department of Science and Innovation. She also works with the CSIR's Centre for Artificial Intelligence Research, and her work on gender and AI has been cited by the journal Nature, by the New York Times and the Guardian newspapers, and by the United Nations. Please join me in welcoming Rachel to the podium. I want to start off by talking about Vladimir Putin. In 2017, China issued its New Generation Artificial Intelligence Development Plan, which calls for China to lead the world in artificial intelligence development. Vladimir Putin was probably not happy about this, and a few weeks later he said that whichever nation leads in the development and deployment of artificial intelligence will rule the world. In the years since Putin's comments, the UK, Denmark, Germany, Finland, Italy and Japan have all issued AI policies, and every single one of these policies mentions becoming a global leader, or THE global leader, in artificial intelligence. Last year, the US released its executive order on Maintaining American Leadership in Artificial Intelligence. And at the same time, South Africa was establishing its Presidential Commission on the Fourth Industrial Revolution, a commission tasked with the specific objective of positioning South Africa as one of the leading countries in the evolution and development of the 4IR, of which one of the key components is artificial intelligence. 
So on the one hand, this kind of attitude is vital to resisting what Ian Hogarth calls AI nationalism, which he warns could result in a new kind of colonialism, where those countries that are not at the forefront of the development of AI depend on those that are. But on the other hand, the drive to be globally competitive may result in a failure to prioritize the country's national objectives in favor of global relevance, or at worst in the sacrificing of individual rights to corporate power and profitability. Indeed, we are surrounded by enormous talk about the promise and potential of AI and the Fourth Industrial Revolution to radically change our societies for the better. But we are not considering enough what we stand to lose as we replace social interactions and government services with dehumanized digitalizations. These conversations about what we stand to lose are conversations that we can be having here, and that we should be bringing into discussions around the development of policies in this area. So at the moment, I think it's fair to say that there is a lot of talk in this space. There's a lot of talk and there's a lot of politics. But how much governance is going on? This is precisely the question we are looking to answer in this series of topical guides. What is the state of the governance of these technologies in particular fields, whether policing, education, health, migration or city planning, and in what ways can it be strengthened to better protect against exploitation and discrimination and to promote just and equitable societies? So that was our core purpose. Now, every time I speak about AI, I'm asked: what does AI mean? Please define it. 
And I hate defining it, because I think we need a dynamic definition that encompasses the histories of all the kinds of technologies that came before it and all the different ways in which it becomes represented in cultural products and discussions. The material is just as important as the imaginary in this space. But if we must define it, the European Commission has spoken of a system that displays intelligent behavior by analyzing its environment and taking actions, with some degree of autonomy, to achieve specific goals. So AI is part of a broader field of algorithmic and automated decision making, where computing devices collect and analyze data to support or make decisions on behalf of humans. And the main reason for automating decision making, as Professor Soudien told us this morning, is that it is thought to minimize human error, to operate more efficiently and faster, and to produce better results in less time. But this drive for efficiency and the elimination of human contingency has resulted in biases that cannot be questioned, precisely because they have been produced by a supposedly neutral and objective machine. In the US, we have seen the profoundly discriminatory results of AI systems used to determine bail applications or to rank CVs and resumes in hiring. In both situations the algorithms, like all algorithms, are trained on historical data, and so they can only ever reproduce what is already going on in society: the societal discrimination that people of color face in the criminal justice system, or that women face in hiring practices. Closer to home, we have seen how in South Africa, Cash Paymaster Services was awarded a contract by the South African Social Security Agency (SASSA) to provide social grants to 17 million beneficiaries across South Africa. 
And what happened was that its parent company, Net1, took all this highly personal data, data about people's income levels, their gender, their race, their age, their number of dependents, profiled them, and on-sold predatory financial products. Many questions were raised at the time about what was happening here, whether consent was given, and whether we would ever be able to retrieve this data from Cash Paymaster Services and Net1. And all this time, Net1 was saying that it was offering financial services to an underserved segment of the population. But more critically, for those people who had signed on to these financial products, their loan agreements would take the money out of their social grant before they even received it. So there were reports of people going to collect their 350-rand child grant and having most of it taken away under these loan agreement terms, and being left with nothing. So how do we ensure that there is accountability when a machine has generated a discriminatory outcome? How do we protect our communities against exploitation and the commercialization of their data, which has become so profound here in South Africa? And how do we equip our communities with an understanding of their data rights, and of how to exercise and claim these rights? Amid these very pressing questions about the prevalence of artificial intelligence in South Africa, there has been a lot of talk about artificial intelligence operating in a kind of unregulated space. Michael and I have done work in various industries where time and time again we are told that AI operates in its own kind of space. There's no regulation here; it's doing what it likes, whether at a social media company or a financial services provider. 
But if this were true, if it were really true that AI is operating in an unregulated space, then this would pose a massive threat to the rule of law and to the capacity of the South African sovereign state to protect the rights of its people. So we need to interrogate those claims, and we need to interrogate them very seriously. And this is part of what we've done here. Part of what we're trying to show is that this is not an unregulated space. It is a space where the regulation and the policy must develop further, and where we must do a lot more to consolidate the policies and laws that we currently have and interpret them in ways that strengthen people's rights. There is a lot of work to be done, and this series of topical guides aims to do it. But we are not entirely without laws and policies. We're going to hear from Varsha about the Protection of Personal Information Act, which will hopefully shortly come into effect. We're going to hear from Fadler about the relevant regional and international human rights frameworks to which South Africa is a party. From another speaker, who unfortunately couldn't make it because of an issue with his flight this morning, we would have learned more about the work of the DSI and the CSIR in promoting the inclusive use and development of AI. From Kelly, we're going to hear about the applicable provisions within the NDP, the white paper on policing and the white paper on safety and security, which seek to promote the responsible use of ICTs within SAPS. Mamaki will speak to South Africa's white paper on e-education, the imperative to use ICTs in schools, and the potential fallout for the right to education in this context. Yamkela is going to talk about the 2017 white paper on international migration and the limitations of its provisions relating to biometrics and the securitisation of migration management. 
Vadanto is going to speak about the national digital health strategy and how far it supports patient-centricity and promotes access to healthcare. And in the introduction to the series, which some of you may already have and which will certainly be available here today, we look at the provisions of the African Union Convention on Cyber Security and Personal Data Protection, which calls for the limited reuse of personal data and for the limited use of automated processing. Given that this convention was drafted in 2011, it was really ahead of its time. We note, too, that various laws and policies are in the process of being developed; we are at different stages of legislative drafting in this space. The Copyright Act is currently being amended. The Cybercrimes and Cybersecurity Bill, with all its flaws, is being considered. The Electronic Communications Amendment Bill is also on the table. But alongside all of this, we have a constitution that sets out what we mean by an inclusive society and what the transformational objectives of South Africa are. And that needs to be at the fore. I think this is really how South Africa could lead in this space: by centering our constitutional values, by centering human rights. This is new to the space of artificial intelligence. We have ethics in the EU; we have massive competitive commercialization in the US and China. But no country is really promoting a human rights-based approach to artificial intelligence. And our constitution has done amazing things, not only in setting out this broad scope of human rights, but also in holding the private sector to account for the realization of rights, where it has committed to do so and where the state is unable to fund them. This was precisely the issue with SASSA, and it is a really remarkable achievement of our constitutional court. 
So as we face these questions about how the law apparently cannot keep up with the technology, we must remember that we do have applicable laws, and where we do not, the tech needs to slow down and wait for the deliberative, consultative process of lawmaking to take place, which might include things like social impact assessments of new and proposed technologies. In addition, and as Crain said, we really recognize and support the role of our institutions and our supervisory authorities: the Competition Commission, for example, in dealing with the monopoly of social media companies; the Independent Electoral Commission in dealing with the threat to democracy and elections that social media may pose; the South African Human Rights Commission and all the work it has done on online hate speech and its removal from social media platforms; and the Information Regulator in developing codes of conduct around the processing of personal data in different sectors. We need to equip our courts and our judges to deal with these complex matters, to set precedents, and to interpret the law in ways that protect people's rights and develop a more stringent application of some of these laws. And then we need to strengthen the understanding of our communities, not only of the potential benefits of these technologies but also of their potential problems and the paths to redress and remedy that are available. And as we think about all of this, we need to ask what values we have here in our society that we can draw on as first principles from which to develop a national response to artificial intelligence, one that speaks to the needs of our communities, to their wellbeing and to their rights. And these values could include things like non-extraction and non-exploitation of people's data, and non-discrimination in a way that recognizes the particular histories of South Africa. 
Localization, which is why we are here today: so we can remember the local context to which these technologies need to respond and speak. Vulnerability: this was a really interesting one that Nadine Moonsam raised with me last week. We need to recognize the vulnerability of the human species; it is not just an all-perfecting creation that can build machines that will eventually outrun us. And we need to recognize the vulnerability of the digital archive, which tends to be presented as something entirely concrete and formidable. We can think about group rights: in the African context we have community rights, which set us apart from the human rights frameworks of Europe, of the American region and of the international system. And a lot of questions around privacy now concern group privacy, and how harm comes to people because they are discriminated against at the group level. We also need to think about values like hope: hope in new futures and new possibilities. And as researchers, we must recognize our own role here, not only to describe what is going on but to begin to imagine and constitute new realities and new futures that are perhaps more just, more equitable and more inclusive. We need to remember multilingualism: here in this context, so much of AI is in the English language, and we do not want it to override the exciting multilingual dimensions of our societies. We need to think about privacy within an African setting, which may go beyond the idea of privacy to dignity, to equality, to autonomy and all that that encompasses. We need to remember knowledge plurality: data is not truth, and we have different ideas and different ways of knowing. And we need to remember access as something that is really important for our society, which comes back to the understanding and ability to use and to claim our rights. 
And we need to recognize the labor involved in artificial intelligence and where that labor comes from, because it is mostly the global south. We so often see these massive systems operating in unseen ways, but the work behind them is done by those whose work is already precarious and rendered vulnerable, and we need to recognize that. So in closing, I want to read a quote from the DSI's 2019 white paper on science, technology and innovation. It speaks of the success of South Africa's response to the 4IR, which will include ensuring that people are not left behind as society and the economy become more technologically driven, and says that this success will depend on how well we exploit the pivotal role of information and communication technologies and harness the potential of big data. I'm not entirely convinced. I think we can interrogate that, and we must continue to ask these difficult questions in these kinds of spaces, and to insist that these developments do not happen at the cost of the rights of those living in South Africa, at the cost of our indigenous knowledge systems, at the cost of those whose livelihoods are already rendered precarious, or at the cost of the sovereignty of the South African state to protect the rights of those that live within its borders. We must work against the kind of technological determinism that characterizes the discourse around artificial intelligence and the 4IR and that describes these developments as inevitable: catch up or be left behind. We must instead show that technology is a sociological fact. It is a product of the values and the worldviews of those who design and create it, and as such it can be designed and developed in different ways, ways that perhaps align more closely with the transformational objectives of our own country. Thank you.