Ladies and gentlemen, we now have just under an hour and a half until tea time, and that time is going to be filled by a very exciting panel. Some of them have been introduced to you before; I'll just mention them again and invite them to join us up on the stage. They'll be joined by the discussant and also by Rachel, who will be chairing in the absence of our designated chair, who unfortunately has had some issues that have prevented him from being here. So I'd like to invite Varsha Sewlal from the Information Regulator, Fadila Adams from the Human Rights Commission and Alex Comninos from Research ICT Africa to please join us on the stage, and also Michael Gastrow from the HSRC, who is the discussant for the panel. Thank you. The protection of personal information is not just the Information Regulator's core business. I really believe, if I'm going to utilize the terminology of the Protection of Personal Information Act, that it is the responsibility of data subjects, the responsibility of responsible parties and the regulator, and that a multi-stakeholder model can do much to advance the protection of our information. Here I've set out the legislative provisions which have created the protection of personal information, and you are fully aware of the right to privacy and the right of access to information. Section 40 of the Act enumerates the powers of the Information Regulator. However, as you know, the substantive portions of the Act are not yet in force, and this is problematic for us citizens, as we are all data subjects, and it's problematic for responsible parties that are processing our information.
The PAIA function at the moment still vests with the Human Rights Commission. An important point for us to ponder is balancing the right to privacy with the public interest and the right to information, particularly now with the COVID virus, which has been running rampant throughout the world. You find that China and South Korea have been collating information on where a citizen has been, using your digital footprint to establish whether or not you've been in an area where other infected people have been. So there are benefits, and I think it's an important balance for us to strike, and we must bear in mind that we need a balance between the public interest and the right to privacy. As we all know, the internet is the very foundation of how we live our lives. So the regulator is responsible for ensuring that there are codes of conduct, and I know one of my colleagues mentioned how important sectoral determinations relating to privacy are. The codes of conduct that the Information Regulator is working on at the moment will give a guideline; however, they have to be sector-specific guidelines on how the protection of personal information should be dealt with. I've mentioned that we are tasked with developing norms and standards relating to the protection of personal information, and I think it's important for us to bear this in mind. Very often we look at the protection of personal information as having our information respected mostly by private organizations. However, the state happens to be one of the largest holders of personal information. So the Information Regulator has a mammoth task ahead of it, because it has to look at how both private entities and public entities manage personal information. I think it's also important for us to look at data governance policies. The issue of surveillance has been covered.
However, digital rights management, where content owners have copyright over digital content, also has privacy implications, and these are factors which need to be considered moving forward as we start developing policies and codes of conduct. Ethical and moral implications relating to the protection of personal information have to be foremost in our minds when policies are being developed, particularly as AI is being developed. And I think the issue of the protection of minors' rights relating to their online activity is something that must be steered particularly by the Information Regulator in this environment. As I mentioned before, the protection of personal information is more than just the regulator's mandate; it's something that we need to be aware of in our own homes, at work and wherever we interact. Then, the issue of technology and development. We were at a conference last week on 4IR, and the concept of the fourth industrial revolution, which was coined by Klaus Schwab, holds that the essence of it is untold development through technology. However, if I have to reflect on South Africa and where we are in terms of the so-called 4IR, there are many components, and it's a term that's bandied about, particularly by government agencies and politicians. I think it's important for us to temper this with the question: have we addressed literacy, let alone digital literacy? We need to look at the cost of access and the cost of devices. Are we ensuring that we have the foundational elements that are necessary for you to participate in the information age? Once we address that, I think we're ready to talk about 4IR. But for now, I think we are still at the infancy stage, and it's all well and good to bandy about the term, but there's a lot more to it, because technology can advance development, but without the necessary elements it actually means nothing to the man in the street. I think we have covered how technologies can infringe on your personal information.
However, technologies can also protect your personal information: blockchain technology, identity management systems, anonymity and cryptography. These are technologies which can assist you in protecting your personal information, and yet again, I believe there should be policy guidance on how these technologies are utilized. This slide has been taken from the Information and Privacy Commissioner of Ontario, and it was developed in the 1990s. I think it's an important one because it talks about embedding privacy in artificial intelligence from the design phase. I'm going to read the principles out. They include: being proactive and not reactive; privacy as the default setting; privacy embedded into design; full functionality; end-to-end security; visibility and transparency; and respect for user privacy. I really believe that these design principles should be embedded into the engineering and design process from the get-go, before AI is developed. Data ethics: it's an area where I believe the Information Regulator can make a huge difference. They have a material role in promoting data ethics, and as we are faced with mass abuse of personal information, data ethics is something that we need to ensure is promoted as a culture. I'm going to read you a quote from Stephen Kai-yi Wong, which says that data ethics require a cultural change in data protection and seek to assist organizations in fully reaping the benefits of the digital economy while protecting and respecting the fundamental rights, including the right to privacy, of individuals. So I believe that this is a unitary process. It's obviously going to be a huge benefit as soon as the Act is put into operation, but this is a responsibility that we all need to be vigilant and mindful of. Thank you very much. Thank you, Varsha. You covered so much.
And thank you for bringing up the issue around children's rights and what this means for children and minors, because part of what we want to do here today is think about what other guides are needed, who can help be involved in this and give us their expertise. So I think something around children would be really important. I'd like to hand over to Fadila Adams, who is the senior researcher for Parliamentary and International Affairs at the South African Human Rights Commission, and has been working in the area of internet rights and responsibilities for quite some time. Thank you, Fadila. Hi, good morning, everybody. Thank you, Rachel, for that introduction. As you mentioned, I'm with the South African Human Rights Commission, and for those of you that might have bumped into me last week, some of what I will say will be a repetition, so I apologize to some of the colleagues in advance. Last week, we hosted our very first 4IR and Human Rights Dialogue, or conference, and it was quite a momentous occasion for the Commission, because notwithstanding my individual personal lobbying for the last seven years, and I think Alex would even remember from IGF days, of trying to get a human rights narrative going, even within the Human Rights Commission, it took a long time for the lights to go on. So now, finally, we've kind of caught up. But allow me to start off and say that I didn't prepare a presentation, because I only have 10 minutes. So I'm going to speed through it and then leave the rest for a dialogue. In high school, and I don't know how many of you had this, and maybe it will give away some of our ages, many of us had a setwork book by one George Orwell, 1984. I remember reading that book and being blown away by the concept that Big Brother is watching you, because in my mind, that meant that my older brother was constantly going to be checking me out and knowing where I go. And there were so many thought concepts in there.
I mean, the thought police were monitoring the way you think, what you do, and I couldn't capture that as a high school student. I just couldn't absorb what it was all about. But one quote I do remember, and I wrote it down as I was sitting there, is: if you want to keep a secret, you must also hide it from yourself. And I think that underpins the discussions around privacy, particularly since we're moving into an era where George Orwell's book might not be too far away, this idea that Big Brother is indeed watching us, albeit in cyberspace. So I'm going to start off very briefly with the mandate of the South African Human Rights Commission, just to give context to our discussion. We are a constitutional body, established by the Constitution, section 184, as an institution supporting constitutional democracy. We have a further mandate in terms of the South African Human Rights Commission Act, which gives us extensive powers and, for the benefit of today, powers which relate to advising government on any human rights policies or laws that need reform and approaching Parliament on any issue where we feel there is a human rights concern. That's just a very basic interpretation. At the same time, we have an international mandate under the Paris Principles, which is essentially a set of principles established around 1993, when the internet was just booming, which sets out, very basically, the powers, privileges, functions, et cetera, of what national human rights institutions should be doing. The Commission is regarded as the NHRI, the National Human Rights Institution, of South Africa, and we have A status, which means we can speak to any human rights issue at the international level, at the UN, at the Human Rights Council. Currently, we're in the process of lobbying for further recognition at the UN General Assembly, as well as the high-level political forum, so in relation to the sustainable development goals.
On the SDGs, I think we also need to note that we have 10 years left, and this is now the decade of action, 2020 to 2030. There's been a lot of talk around what we should be doing, not just as an NHRI, but generally as a society, in the sectors, as academia, as civil society, and what we can do to push for the delivery of the SDGs and all of the indicators and goals, noting that the issue that we're speaking about today is, in fact, one of the SDGs. But I'm going to go a little bit back in terms of the role that we've played as a commission. I reflected on this and thought: a couple of years ago, where were we at when we were talking about privacy? We were speaking very much in terms of privacy as we understand it constitutionally speaking, not in the digital space. We had written submissions as part of our parallel NHRI reports to the UN under the International Covenant on Civil and Political Rights, where we noted concerns around privacy for persons in detention. So basically, persons who were incarcerated or had been subjected to severe overcrowding and who lacked the enjoyment, if I can put it that way, of the right to privacy. And with that, of course, came a lot of brutality and inmate violence, and then, of course, the officials as well. We'd written our concerns to the committee, out of which there were several recommendations. At the same time, we noted concerns around the use of bucket toilets as a violation of the right to privacy, where we see in many parts of the country there is still no access to running water. And on that, let me just do a side note: we also found that more people have access to a digital, possibly smart, phone now than to running water or a toilet, which is quite remarkable in this day and age. But that was essentially where we were at as a commission, reflecting on how we push for the right to privacy. And this was before the 4IR days.
And how do we influence the narrative internationally as well as at the parliamentary level, which is part of our mandate? Over time, we were increasingly getting more and more complaints which shifted away from the traditional complaints that we were familiar with to ones where people were complaining about things happening online. We got complaints, and this was even before we had all the apps available, of people coming in saying: I don't know, every day there's money coming off my phone, the airtime is disappearing so quickly. So what did you do? I don't know, I saw an ad on TV, and they said, SMS your name to 33 so-and-so, and you will get updates every day. But what that actually meant was that it was also taking a lot of information, albeit very limited then, that people had on their phone, as well as billing them every day for having signed up unknowingly. So we had complaints like this, and quite honestly, we didn't know how to address it and what to do, because it was so new age. At the same time, there were issues around employment: persons' emails being intercepted, someone having gone into the files of someone else on their computer, and then companies saying, well, you know, you work for us, it's our computer, so therefore it's not really private; whatever you do on your computer is our business as well. And we found ourselves having to look at international and regional standards and actually engage with other NHRIs on the continent and globally to get advice around what exactly the response of a national human rights institution should be.
On that note, I also wanted to say that last week during our discussions, I think it was Alison Tilley who mentioned how privacy is increasingly becoming quite the commodity, or rather, trading your privacy is becoming quite a commodity. You spoke earlier about medical records, I think it was the Professor who spoke about medical records, and how we are actually willingly trading our information, and I myself am a victim of that, in exchange for a lesser gym fee. So I'm willing to give all my info up so I only need to swipe three times a month at the Virgin Active, which is a bit problematic for me because I don't actually work out all the time; I just run in to swipe and then hope that they'll get it on the system, okay, she did go to gym, not knowing that I was not actually there. But yeah, back to the human rights issues. Increasingly we were finding complaints around people expressing concern that their information is being used. In fact, we had an example of a young girl, a South African woman, whose photo was being used internationally for everything from selling sanitary towels to makeup to just the face of sadness. And how did we respond to that? Or how should we respond? Trying to engage with the internet service providers to have this woman's photo taken off was a mission in and of itself. Trying to speak with big businesses and explain the human rights implication is difficult. And I think it's precisely for engagements like this that we need to ensure, in all the topical guides that you're establishing, and even from our end, that we have a human rights-based approach to the fourth industrial revolution, that we look at it from a lens of people first.
A human rights approach, not the detached humanoid, or working on a system where there's a bot on the other side that doesn't understand content, or the context, the way that you're saying things, or your tone. We need to bring the human element into this revolution as data is getting more and more out there. We have concerns also at the human rights level around the right to privacy, in so far as Rachel already mentioned the credit bureau, or rather SASSA, not the credit bureau, that's also another issue, but the SASSA issue with Net1 and the selling of information. The Commission was quite involved in that process. I won't get into the details, but it was of grave concern to us, because I think at the full national level it was our first wake-up call: that there was this big data being sold on the most vulnerable in society, who had nothing, and of course businesses didn't think anything of taking the money off the account of a grandmother who went to draw her monthly pension. And so it brought us to business and human rights, responsible business practices, the role of business in technology, the role of these big companies, our Googles and Facebooks, and what exactly it is that they should be doing. Sorry, I'm just flipping through so I stay on track. Okay, Rachel mentioned earlier, and I think it was you, about the children. Just as a side note as well, the African Charter on the Rights and Welfare of the Child is the only African instrument that actually recognizes the child's right to privacy; the African Charter itself doesn't do so. But there have been recommendations, including to South Africa during its inaugural appearance before the African Committee of Experts on the Rights and Welfare of the Child, where they issued recommendations noting concern around the infrastructures that we have relating to privacy.
One of which was also to please expedite the establishment of the Information Regulator. We are in the process as a commission of handing over the full mandate to the Information Regulator, but there are just some challenges and technicalities. They already share office space with us, the third floor of our office in Johannesburg. So we're getting there, but we haven't done a full handover as yet. We've also seen this internationally in terms of the regional framework. So we have the African Charter, and there have been several resolutions passed as well. We see it at the international level under the International Covenant on Civil and Political Rights and its associated treaty body committee, which has come out with a lot more guidelines, advice and recommendations on what states should be doing to enhance the right to privacy, particularly now as we're moving into the digital era. And I think South Africa, more so, is trying to come up to speed with that. Am I doing okay for time? Still okay, okay. Since we're speaking of international law, I also want to add, constitutionally speaking, for those lawyers in the room, that when you do take matters to court around this, I often find that among my colleagues there is the forgotten notion that there is a constitutional obligation for the courts to consider international law. That's very, very important, particularly if we are not where we should be as a state in terms of our legislative and policy framework. Section 39(1)(b) specifically states that when interpreting the Bill of Rights, noting that section 14 is the right to privacy in the Constitution, courts must consider international law.
Similarly, under section 233 of the Constitution, when interpreting legislation, courts must prefer a reasonable interpretation that is consistent with international and regional law. As I mentioned, we'd look at things like the African Charter on the Rights and Welfare of the Child in so far as privacy, Article 17 under the ICCPR, and the recommendations made to the South African state in 2016 by the African Commission on Human and Peoples' Rights, where, as I mentioned, they said that we should accelerate the establishment of the Information Regulator and amend our legislation, which still hasn't happened, as well as finalize our cybersecurity and cybercrimes legislation, still in progress, I mean, five years later. And in so far as digital literacy, part of our campaign at the commission level, after last week's seminar in particular, was that we realized the need for digital literacy across the board, not just for kids, who know more than adults, let's be honest, but within our institution and within government as well. There's a definite need amongst officials to come to the party and be taught, even judges for that matter. Can you imagine going to court and appearing before a judge and explaining, for example, and I use the example of the humanoid as well, that you had purchased a humanoid, essentially a sex robot, and it has been the cause of the breakdown of your marriage? And you take that to a judge and you explain that it was technology that came between you and your wife. I can't imagine the responses that the judge would have, but I do think that there's definitely a need for judicial training, and maybe that's something you could actually put down as part of an action point.
Similarly, there are issues around vulnerable persons or persons with disabilities: how far have we come to make the internet not just safe but accessible, in the true meaning of the word accessibility within a human rights framework, for persons with disabilities, with visual and audio impairments, et cetera? I think there's a lot to be done, particularly for us to influence the human rights dialogue and ensure that it comes through and carries through all the work that we're going to be doing around data and AI. But I'm going to pause at that point, I think my time is up, and I'm open to any further discussion, thank you. Thank you, Fadila. I hope a judge doesn't have to deal with a humanoid coming between a man and a wife, but my work has been on gender and AI, and there was this report from UNESCO last year that said that by 2020 we will be having more conversations with our AI assistants than we will with our spouses. So perhaps these... I hope not. Perhaps these are very real questions that we need to think about. And thank you for bringing us back to the intersectionality of the right to privacy in this country and how it intersects with other rights; I know Kelly's going to be speaking to that a little bit more this afternoon with regards to the right of freedom of movement. I'd like to welcome Alex Comninos, who is a researcher with the Research ICT Africa group, who have been doing enormous work across the continent in this area. Alex, thank you. Thanks very much. It's a pleasure to be here today. I'd like to start off by firstly reiterating some points from Varsha and Fadila. But firstly, why do we talk about the fourth industrial revolution if we're having challenges with electricity, if we're having challenges with literacy, if we're having challenges with the state statistical agencies? Yeah, maybe we can focus on the third industrial revolution first. I would even posit that the fourth doesn't exist. So the second was denoted...
The second was marked by the transatlantic telegraph, and then the third was marked by the advent of the internet. So yeah, the first captured the world getting smaller and global, transatlantic, intercontinental communications. So yeah, the fourth industrial revolution, artificial intelligence, can be a bit of a distraction if you've been focusing on getting ICT policies right for the last 10, 20 years and implementing those policies, if you've been looking at ICT for development. So I like the sectoral approach of the HSRC, and I would beg policy makers to ask themselves: why do we need an AI policy? Is this not addressed in privacy policy? Is this not addressed in skills policy? So yeah, why do we need it, and is it possibly a distraction? Just for a reality check about ICT access and usage in South Africa: at Research ICT Africa, we do a household survey. We did 2008, 2012 and 2017, in seven African countries and a number of Latin American countries. It's a representative household survey. We found 53% of South Africans are online. Less than half of the rural population is connected. You're more likely to have connectivity if you are earning over 7,000 rand a month, if you're male and if you're living in an urban area. Now, laptop usage: 17% of households surveyed in South Africa have a laptop, 9% a desktop, 15% a tablet, and 11% internet connectivity in the household. Interestingly, all of those figures, excluding mobile use, have gone down since the last survey. This makes sense, because devices are getting cheaper, mobile phones are proliferating, and I guess ADSL connections are dying to a certain extent while people change to mobile phones. On top of this, we have 47% of individuals with a smartphone, and 72% access the internet by mobile phone. So, has anyone here ever made their CV on a mobile phone?
Done any data science on a mobile phone? Okay, so there's a qualitative difference in access as well. Mobile phones, yeah, they make us more consumers; sure, we can share and we can create audio-visual content, but it has to be augmented by something. And they also lend us more to our private data being exploited, as has been mentioned by Professor Kanthachi. So yeah, that's the framework, just a grounding in reality. Now I'll move on to issues related to AI in general. We have a network of researchers, the Regional Academic Network on IT Policy, and we looked at AI issues on the continent. The first obstacle to AI research in African countries is that there are lots of examples coming from outside of the continent, and while there are increasing examples of AI applications on the continent, there's a little bit of a lack of them. My fear is that in the next two years there'll be more than enough, so we have to keep ahead of that. And sometimes we discuss a lot of the things that are happening, and it's not actually AI. There's a tendency to focus on having an AI policy, on having AI ethics and guidelines for the use of AI, and so now we have over a hundred documents with ethics norms and principles on artificial intelligence. I think it's useful to step back and remind ourselves that we already have regimes and human rights frameworks that give us ethical direction and give us ethics norms and principles to apply to AI. The danger of this growing body of AI-specific principles is that it's often led by the private sector, and it's often voluntary principles, and we can lose focus where we already have attempts at regulation. We have to build our existing capacities. So yeah, I like the fact that Professor Kanthachi had a very old slide about surveillance and personal information, and I like the fact that we've been reminded that we have an information regulator that is nascent and needs support.
And then there's also been an issue about data. Coming up in constant discussions are people saying that data is the new oil, that the Western companies in the Global North have all this data to work with; like, where is our data? If we can only magically have access to this data, then we can springboard ourselves into AI. So yeah, I worry about that on a number of levels. The first is, yeah, sure, data can be the new oil. Have you ever tried to put crude oil in your car? It's not very good for the car. So the data has to be prepared in a certain way to actually be useful for an AI system. And then, when you're saying, oh well, Western corporations have more data, we would like more data: what are you actually asking for? If you're primarily talking about the data that we're giving up by using social networks and going on our mobile phones, I'm going to have to have a long think about whether I want to give that data to Facebook for free or give that data to a government. So yeah, I think we have to be imaginative, and we have to be realistic, about where this data we're asking for is actually going to take us. So, I've mentioned 4IR being a bit of a distraction. We have a number of top officials at Statistics South Africa threatening to resign. They have a council, I believe. There are unfilled posts. We also have huge challenges in implementing our spatial data infrastructures: we have appointed data custodians who are meant to facilitate the sharing of spatial data. Spatial data offers huge opportunities for artificial intelligence; some of our big AI startups that are successful overseas are actually working with spatial data and aerial photography, GIS data. So if data is the lifeblood of AI, how are you going to generate data that helps AI applications? Is this going to be open data? Where does personal data stand here? And how is data sharing going to look?
And then also, what about our well-organized small data that helps us to interpret the world, like statistical data? What are those capacities? So again, AI becomes a distraction here. You're going to have initiatives for doing stuff with data for the sake of it, generating data. You want to train kids in 4IR skills when we lack reading comprehension. So yeah, I'm putting the question of data there. And yeah, I'm running out of time here, but the last point is that there also seems to be, on the continent, this idea that AI is a race, that we have to keep up with China, that we have to keep up with other African countries. Actually, AI is not a race. I think the machine learning community is one of the most vehemently open-science communities I have encountered. If you publish something in machine learning, you drop it on a site called arXiv. The technologies to create AI applications are open source. Sure, they've been created by large companies, but PyTorch has been open-sourced by Facebook, TensorFlow has been open-sourced by Google. So yeah, I don't really understand the race mentality in such an open ecosystem. I think we also need to think about that and what it means to be a leader in AI. Thank you. Thank you so much. It's incredibly sobering to hear some of those statistics around ICT access, and I think it's really important to bear them in mind. The debate about human rights versus ethics is something that's coming up again and again, and you're right when you say that ethics gives corporations free rein, whereas human rights has a history of jurisprudence and of binding obligations on powerful actors. So let's think about what might be more powerful. I'd like to welcome Dr. Michael Gastrow, who's a chief research specialist here at the HSRC, and is on the Presidential Commission on the Fourth Industrial Revolution, to act as our discussant, to respond to some of the presentations that have come before.
And then afterwards we're gonna open up for questions and comments, and hopefully we'll have a good, just under half an hour to do so. Thank you. Thanks, Rachel, and thanks very much to our presenters for their lively and engaging presentations that have added a lot, I think, to the discussion so far. I'm just gonna pop a few questions to our speakers before we hand over to the floor. Firstly, to Varsha, thanks very much for going over the role of the Information Regulator. This is a critical role, whether we're talking about data privacy or we're talking about AI, and I think it's important for all interested parties to understand what this role is and how it works. My first question is a practical one. Given how fast things are changing, how do you see the timeline for the public access to information function to be shifted over from the Human Rights Commission to the Information Regulator? The second one is that you mentioned that a mechanism for the protection of personal information is gonna emerge over time, and I was wondering what the considerations are, if there are any, with respect to artificial intelligence, and whether there are specific parameters that you're thinking about putting in place. My third question is about engagement, because the AI policy that's under discussion is multifaceted. It can't possibly be developed by one party, because it's got legal aspects, it's got policy aspects, it's got aspects in terms of the public, and it's got aspects in terms of capabilities for research and development and in terms of trade and industry. So there are a lot of parties in this discussion. And I was wondering what your thinking is in terms of the Information Regulator engaging with these parties so that the policy that comes out at the end aligns with the mandate of the Information Regulator and with South African law in that area.
And finally, we've had this discussion about what I call information predators, people or firms that pull information out of South Africa, monetize it and sell it back to us, essentially. And I was wondering if any mechanisms are under consideration to protect us from that, or if not protect us, at least somehow even the playing field. Then for Fadila, thanks very much for your overview of the work and the thinking of the Human Rights Commission in this area. And I absolutely support the idea of having this discussion about artificial intelligence and data privacy being human rights centric. I mean, it's a cornerstone of our constitution and our moral code. So I think that's really important, because some of the discussions that exist in this area are centered around profitability and economic competitiveness, and those are two fundamentally different things that can take policy and discourses in very different directions. So I think it's really valuable that we can center this around human rights. And I was just wondering, from your talk, which was mostly about data privacy and your analysis of that in terms of human rights, if you had any further thoughts specific to artificial intelligence, because that raises a whole bunch of distinct data privacy issues. And then also, in terms of section 39(1)(b), whether you had looked at any international laws in relation to AI and maybe brought that into your thinking, if you could share that with us. And then to Alex, thanks for bringing a counter-narrative to our discussion. It's always valuable, and it's good to critique and to apply some pressure to the kinds of discourses that are circulating now, and to try and separate the hype from the reality is always a good thing. So my first question is about this kind of focus on AI versus looking at existing mechanisms that are not labeled AI. We've got data privacy mechanisms, information regulation mechanisms, market mechanisms, policies.
But I'd like to just put to you that there are some guidelines which are useful. There are the OECD principles on artificial intelligence, which were largely adopted by the G20. Other multilateral institutions have set out basic principles of fairness and privacy and prevention of bias and so on. And I'd like you to reflect on the usefulness of those kinds of principles in terms of providing some guidance. Then also, artificial intelligence uses data in new ways. So there are existing laws and policies and mechanisms in place, but I was wondering if we don't need to check whether the existing frameworks are sufficient. Are existing frameworks sufficient to prevent machine learning systems from creating bias from real-world data? Are they sufficient for safeguarding our data privacy and security? And if they're not, then we need to take steps. And finally, the question of AI being a race. I agree that it's perhaps a questionable term. A race is a kind of zero-sum game. There are other aspects of a race which I think we can maybe question, but the dynamic I'd place a question mark on is: what kind of race could it be? And it's true that it's not necessarily a race for intellectual property. I mean, we have off-the-shelf machine learning applications that you don't have to develop. You can just pull them off the shelf and use them. So in that sense, it's not an IP race, but maybe it's a race for capabilities: to develop the skills needed to implement AI applications and use cases and solutions, and capabilities within government to actually understand what AI is and what it has the potential to do, and implement its uses for the public good. And my conceptualization of the AI race is then a race to develop the capabilities to use it for social good. And I was wondering if that conceptualization of a race would be helpful? Can you hear me? Yes, I'm good. Okay, great.
With regards to the handing over of PAIA, that would happen as soon as the substantive portions of POPIA come into effect, and I think it's section 114 which will enable the handover to occur. And in the interim, I think it's important for me to share this information, because behind the scenes we're trying to work at ensuring that there is a capable handover and that we can transition the responsibilities with ease. So there is a memorandum of agreement that has been signed between the Human Rights Commission and ourselves, wherein the issues relating to the systems are addressed. And I think there has been a lot of development on the complaints handling processes, and there have been manuals which have been prepared. There also have to be a lot of updates which need to happen with regards to PAIA. So we're looking at those conversations and trying to map a way forward before the act comes into force. And we're really hoping that this could happen sooner rather than later, for obvious reasons. On policy development relating to AI, we have drafted guidelines on developing codes of conduct. And I mentioned in my presentation that I think it's important that there are sectoral determinations relating to codes of conduct, because the health sector and the telecommunications sector, their codes of conduct are going to be different. Their needs are going to be different. Some of the checks and balances would be different. And I think that's the way in which we're going to foster some sort of policy guidance on how we would look at managing AI. At the moment we haven't developed any specific policy, but that, together with the issue of children's rights, I think these are critical issues that we have to look at. For instance, direct marketing is something that we're looking at and developing a code of conduct relating to that.
And you very rightly pointed out that it has to be a collaborative process. I also alluded to that in my presentation: there has to be a multi-stakeholder approach when developing guidelines, and also in developing ethics around how personal information is handled and dealt with. And I think there was another question on transborder information flows. Was it relating to that? There is provision for how transborder information flows will be dealt with in POPIA. And there is a challenge, particularly in Africa, because there are countries which don't have data protection regulation. And in that instance, there is going to be an issue, because we try to look at some sort of equivalency so that you could get equal protection, but sadly there are many jurisdictions within the African continent which don't have data protection laws. However, I think it's also necessary for the Information Regulator to look at developing some sort of way forward relating to transborder information flows, and that has to happen as we develop as an institution. Thank you, Varsha. I think that issue is really, really important, because both the General Data Protection Regulation in the EU and POPIA set this provision where you cannot share personal data with a country that does not have as strong a data protection framework, which means that it creates really large inhibitors to cross-border trade and economic growth. And for countries like those in the African region, it creates a huge barrier to how we can engage with the European Union, as it means that companies here cannot process the data of any European data subject, in ways that can be really, really detrimental. And it takes the data protection authorities about two or three years to decide whether a country or a jurisdiction has an adequate data protection framework. Fadila, do you want to comment? Thanks so much, Rachel.
Funny, on that point, if I can add: last week, or two weeks ago rather, I attended the African Regional Forum on Sustainable Development. And one of the delegates in my group was a woman from Denmark. Well, she's of British nationality, but from Denmark. And she had to send an email, and we were a group of about 25. And she stopped halfway through and said, actually, I can't send this to all of you. And we were like, what do you mean? And she said, I'm gonna break it up, because if I don't, this will infringe the EU laws around data and privacy. So I'm gonna send it five at a time. And I found it quite bizarre that she was so mindful, and she said, I don't want to be picked on or have an issue when I get back home. But to your question, thank you, Dr. Gastrow. The short answer is, as a commission, no. We haven't done anything in terms of operationalizing the work in terms of our recognition of the role that technology plays and how it interfaces with human rights, particularly within the 4IR space. Late last year, I circulated the report by the Special Rapporteur on extreme poverty, Professor Philip Alston. And the report spoke to the widening digital divide and what we call the digital welfare state. And circulating that internally within the commission really made a lot of colleagues sit up and realize that this is dawning upon us and we need to do something. So precisely, the conference last week was our first, in hopefully many to come, and there was recognition at the closing of the conference that the intention is, with the new financial year, to set up possibly a unit looking particularly at technology and human rights. We found, particularly around issues like automated decision making, that just having that discussion with colleagues means we have to change the way we think, the way we see things, the way we as practitioners, legal or human rights practitioners, actually work and operate in this space.
So even on section 39(1)(b), at least in so far as my monitoring and work goes, I haven't seen any direct linkages in terms of international best practice relating to AI or technology being applied in our courts. Yes, there have been other examples where we've drawn from international and regional recommendations and law, and there's a lot of case law on that, but we are yet to see, for me at least in my monitoring, a matter that's been taken to court to such an extent that all these issues have come through, where AI, or the human rights implications of AI, have been brought in. It would be great if someone could do so and challenge it in court, take it all the way up to the Constitutional Court, and look even then at the amendment of PAIA to bring it more in line with the digital age. You're giving me time, I will stop there. Capabilities to use AI for social good. So yeah, I think it should be reframed like that. I'm not a fan of competitive sports, but still, even the urgency can be a bit dangerous. There have been two, perhaps three AI winters. AI is a 60-year-old technology, and there have been cycles in which lots of money has been put into AI. There have been expectations that don't meet reality. Money gets burned, and funders and states get particularly sour about this. It happened in the UK, and they didn't wanna fund AI for the next 10 years. So when we heard words like semantic web about a decade ago, about the new way Google was doing search, that would have been called AI if people didn't have negative memories about the term. So I think there is definitely an urgency to develop the capacities for social good, and also to develop the policy framework whereby we can use AI for social good. But urgency for the sake of it is dangerous. There have been a couple of moratoria in cities and regions around the US on AI and facial recognition. So we simply aren't ready to use facial recognition for what people believe it can do. There are arguments against it. Are we ready to use facial recognition for social good?
Are we ready to have AI teach our kids in schools? People are afraid of that, and pushing back is good, and questioning the urgency of these things is also good. And then, if you have a race, let's make the goals modular and do it one small step at a time. We could get electricity right. We could sort out the curriculum. And then also, I can encourage people in this room to perhaps attend the Deep Learning Indaba. We have the talent. We have a lot of dynamism in AI startups, a lot of South Africans putting themselves out there, and those skills are there. But unless you have better mathematical skills throughout society and more widespread literacy, then there are also people being left behind in the race. So yeah, I'd push back on a race just for the sake of it, as a matter of principle. And secondly, at Research ICT Africa we've observed that all new technologies bring with them new digital inequalities. They have a tendency to create new inequalities and to exacerbate existing inequalities. And yes, I think the mechanisms and principles have been useful, and part of a multi-stakeholder discussion is also to raise awareness and to set agendas. I think the danger is that there are too many principles at the moment. The OECD principles are quite good. The EU principles have just come out. But which principles are you going to pick from? And also, what about more rights-based principles? There's a very interesting paper called A Mulching Proposal. The biggest conference on fairness, accountability and transparency in AI is called FAT, Fairness, Accountability and Transparency, and two years ago there was a paper called A Mulching Proposal. What the mulching proposal did is it analysed a proposed system whereby old people with bad credit ratings were identified and then turned into mulch, which could then be put on plant beds and help the plants grow.
And what was actually done was an analysis of the fairness of the algorithm in selecting the old people: who's accountable, is it transparent? So there's this discourse created that is actually so strong that, and I think this was the point of the paper, people are losing sight of the purpose of the deployment of the AI. So if it's never a good idea to do facial recognition before kids go into school, then no amount of fairness, accountability and transparency is gonna solve that. So I think the danger is also that there are completely new discourses for how to address these ethical issues, and I don't know if they're helpful. That was very helpful. Thank you. Can we please see who has questions, because I'm slightly nervous we're running out of time. So my name's Andrew Rens. I'm also at Research ICT Africa. Alex is not responsible for anything I say. I particularly wanna ask Varsha. You spoke about privacy by design. So would you support the repeal of section 30 of the Electronic Communications and Transactions Act? I'll remind you what it says, since we haven't all read it since a while back, 2002. Currently it requires every cryptography provider in South Africa to register with government. Obviously we want every internet of things product, every camera, every cell phone that is sold in South Africa to include encryption. We want every software program that is used in South Africa, including open source, and I don't know how somebody who provides open source software registers, to include encryption to give us privacy. So surely this section is horribly outdated. It's a criminal offence, two years, if you don't register as an encryption provider. I'm pretty sure Signal, which lots of people use on their phones, and WhatsApp have not registered. So it's a dead letter, but at the same time I'm sure it's intimidating for South African entrepreneurs. Thank you.
From a privacy perspective, I must stand with you on that, because I feel that in as much as there is a need to ensure that certain standards are adhered to, when we're looking at innovation, how much does this over-regulatory stance stymie innovation? I think we need to balance that, though. Okay. So my next question is for all the panelists. There was a suggestion that PAIA needs to be updated, and it seems to me the fundamental problem with PAIA for the digital era is that it refers to records and not data. Do you agree, or do we want it to keep on talking about records? Do we want it to talk about data? Would it create an onerous burden on the respondents if it's data? I think there have to be enhancements. PAIA is quite an old piece of legislation, and I think it needs to be updated to ensure that it's reflective of where we are technologically, from a policy perspective. So I believe that there should be amendments and enhancements to the legislation to accommodate that. Thanks so much. And then also, just to add on that, in terms of the PAIA amendment, particularly from our side as the Human Rights Commission: part of our mandate is also to push for legislative reform or the need for amendments. So that would speak to the Electronic Communications and Transactions Act and the provision you mentioned, as well as PAIA, if the public approaches us and says that these are outdated pieces of legislation that need to be brought in line with the constitution, or whatever the case may be. Definitely from our end, these are initiatives that we would then push in the spaces that we operate in. But from the PAIA amendment perspective, I know that the Committee on Economic, Social and Cultural Rights also raised concerns around the legislative framework relating to PAIA, particularly as it interfaces with socioeconomic rights.
I do know that for the next reporting cycle, civil society is already mobilizing to ensure that when South Africa, the government, appears before that UN treaty body, issues around PAIA come in if it hasn't been amended. Chances are it probably won't be within the next four years. So there's a lot happening, and definitely a need to bring it up to speed with the digital era. From the commission's side, even though we are one foot in, one foot out and kind of handing over to the IR, we still have privacy on our radar. So particularly in so far as legislative reform goes, it's something that has been flagged for the new financial year, based on our engagements with civil society actors and the level of interest in pushing for the same. Alex, do you want to add? You're cool, okay. I think just to add on, Alison, I'm very glad, because there's the question of how we understand records under PAIA; some people have argued that data is included under the definition of records. So, Alison Tilley and Colin Darch, both people who have worked on this Act for many years. So, Alison. The real criterion that troubles people is that often what they want is information: not necessarily what has been recorded, but what people know. And what people know has not been written down in any form, whether electronic or any other way, and it's difficult to access because of that. I would like to just perhaps direct us towards the politics, which I think we perhaps haven't looked at head-on. And there's a reason for the policy stasis over the last 10 years. We were fighting off the Secrecy Bill. Then we were fighting off the Cybercrimes Bill. Then we were dealing with surveillance. So I think we've been dealing with a state that has a very particular attitude towards information and the management of it, which has frankly not been that of most of us in civil society.
And probably not, I would venture to suggest, in the institutions that are respecting and protecting and upholding the constitution. So in that hiatus, many, many issues have advanced and we have not kept up. We do now have an opportunity, I think, to re-look at many of these issues and try to get the system of policy reform moving again. What I would like, and perhaps this will only be dealt with after tea, is the question of how these issues raised here are going to be turned around into policy processes in government and parliament. We tend to rely on the courts too heavily, because the courts have been functioning, but I don't think that's an excuse for us to evade the fact that there should have been processes in parliament, that the executive should have been running with many of these issues and has not been. I think what I would want to ask is what we are going to do, in the various capacities that we're in, to make sure that these issues are addressed, and there are many of them. Are you able to filter those out into discussions, whether formally or informally, so that we're not falling too far behind, not missing the opportunity to keep up and influence policies as they're being developed at the moment, for example, by private sector actors? Thanks. Now, Michael, you're the discussant on the panel, but I thought, given your involvement in the 4IR commission, I would ask if there was any sense that you had that it was bringing in, I suppose, more social and perhaps more specifically privacy-related topics as part of those discussions, and actually if either of you from the Information Regulator or Human Rights Commission have been involved in any of those processes at all. And Paul, I wonder whether we can ask you about Alison's question. How are we going to take this to the policymakers? How are you going to ensure that it gets into law and policy? Well, I was going to ask you the same thing, and I think we discussed this yesterday.
So I don't want to pick on anyone in the audience, and we'll chat during the break, but there are some specific people that it would be nice to engage with on these things. We didn't mention the relationship to the DSI, but that is obviously one piece of a much bigger puzzle, and then there's lots of other intersecting legislation in the space: the ECA, the bill that went back around electronic communications, a whole range of legislation that has been in stasis and potentially needs to be reformed and looked into. So that's why I asked the 4IR commission question, because in theory it has a slightly broader multi-sector approach, and I think we all understand where it's also coming from, so we need to be realistic about it. But then also, I suppose, at lower levels in the bureaucracy, I think there's a need for us to work more constructively with specific groups, for example the DCDT, working on data policy and data strategy, where they're doing a lot of groundwork on the kinds of things that go up and down in the policy process. So I think we need to look at those avenues where we can be quite practical. There are lots of entry points and opportunities for us, and that's something for us to discuss, and hopefully this gives us some of the direction that we need. The commission's draft report is not yet published, so there are some constraints on what I can say. What I can say is that we've spoken to a very broad range of stakeholders, private sector, public sector, research, civil society, all kinds, on these exact issues. And at present, AI is one of the central areas of attention for the commission. The debates around AI have fallen into two broad areas. One is essentially about capability building, and the other is about ethics. And both of those have legal implications, but more so the ethics. And even the discussion about capability building has to do with human development, ultimately.
So there the focus is on building skills, building R&D capabilities, which is critical. If you can't develop the applications, then you stop at that point. And then working those into different sectors, because different sectors have fundamentally different dynamics. So building that into the health sector or the agricultural sector or military or manufacturing. So the discussion is really about how to accomplish that. On the human rights side, all the topics which have been under discussion here today are on the table. And the question of bias is perhaps of the most serious concern. I think because of the type of discourse we have in South Africa, where AI has racial bias and gender bias, we have to counter that. It has to take a very high priority, and the discussions are about how to accomplish that, the obstacles to it, and the way around the obstacles. And obviously the other topics are there under discussion: data privacy, conforming to the POPI Act, consent for use of data, concerns about facial recognition. And in all the debates, there's a kind of synthesis that needs to happen. There needs to be a balance. And that's where the discussions take a long time and go over into your tea break and you come back the next time and you start again. Facial recognition might help security services prevent crime and save lives. But facial recognition can also violate privacy. And then the kind of conceptual discussion is: how do you balance these two apples and oranges to find some kind of equation or some kind of resolution? They're very intricate discussions. And I think the final outcome will be reflected in the report, which will be disseminated by the commission. Just a closing comment? I don't think so. The word bias was mentioned, and I was asked the question of whether we're gonna have sufficient mechanisms for preventing bias. And yeah, I think humans are biased, and some biases are arbitrarily created by machines.
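The question above of mechanisms for preventing bias can be made concrete with a small sketch of what a bias check actually computes. Demographic parity, comparing a system's positive-outcome rate across groups, is one widely used metric; the decisions, group labels, and the 0.1 threshold below are invented purely for illustration.

```python
# Hypothetical automated decisions: (group, was the outcome positive?).
decisions = [
    ("A", True), ("A", True), ("A", False), ("A", True),
    ("B", True), ("B", False), ("B", False), ("B", False),
]

def approval_rate(group):
    """Fraction of positive outcomes for one group."""
    outcomes = [ok for g, ok in decisions if g == group]
    return sum(outcomes) / len(outcomes)

# Demographic parity gap: the difference in positive-outcome rates.
gap = abs(approval_rate("A") - approval_rate("B"))

# Here group A is approved 75% of the time and group B only 25%, a gap of
# 0.5, far above the illustrative 0.1 threshold, so this toy system would
# fail a demographic-parity audit.
```

As the earlier "mulching" discussion warned, a check like this only measures how evenly a system treats groups; it says nothing about whether the system should be deployed at all, which is the point the panelists keep returning to.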
But yeah, I'm encouraged that the major issues being discussed by the presidential commission are the ethical, bias and policy issues. I think there have to be ethical considerations when you're looking at how you're gonna be dealing with these issues, particularly around the privacy issues. And I think so too for the Human Rights Commission. And maybe some of the questions are: have all sectors been approached, have views been ventilated, and what is the legislation saying, even if it's proposed and not fully in effect? For instance POPIA, because the conditions for the lawful processing of personal information have to be considered. The fact that the act is not yet in effect is just a question of when it will be in effect, so all of our policy development processes should be mindful, for instance, of the conditions for lawful processing of personal information. Sorry, just a point on that. But to answer Gary, I think that what we are doing at the moment is that we are completely at an infant stage with the development of our guidelines. So we have published guidelines on the development of codes of conduct for industry, and what we have done thus far is we've had a consultative process where we engaged with stakeholders, both from the public and private sector, on the guidelines, and we are at the moment deliberating the comments that we've received. And once that deliberation has occurred, and it will take us a few months because it's about 200-odd pages of comments that we are working through, then as we proceed, we will be looking at sector-specific codes of conduct, and that will definitely involve stakeholders. So we would go through the normal consultative process: look at how we develop, look at comments, look at public engagement. So there will definitely be engagement, sorry, was that me?
There will definitely be engagement from as many sectors as possible for us to have a refined document that incorporates as many comments as possible. Thank you. Fadila, do you want to give a quick response? Okay. I think I did respond, but if anyone else didn't hear: as far as I'm aware, generally any engagements, particularly with the executive and at the parliamentary level, my office is privy to. But to my knowledge, there have been no engagements between the Human Rights Commission and the Presidential Commission. Unless you know something I don't. But that's obviously something that we would then take forward at a very senior level within the commission. Either way, we do have quarterly engagements, well, on paper we have quarterly engagements, with relevant government departments and the ministers. And we are due to appear before the Portfolio Committee on Justice in April. We don't have the date yet, but sometime in April. And we often raise key issues where we feel that we needed to have been included in engagements, which was perhaps an oversight on the part of the executive where they didn't pull us in. So definitely we'd use that opportunity. Rachel, if I may, I know Kelly's up next, but I just want to say I forgot to mention in our intro our expanded mandate under the National Preventive Mechanism, which is under the Optional Protocol to the Convention Against Torture. We, as the commission, have been designated as the National Preventive Mechanism to monitor places where persons are deprived of their liberty. So not just prisons, any place where you're deprived of your liberty. And AI is coming up more and more, around cameras and facial recognition and biometrics and police, or rather correctional services, surveilling places of detention. I just wanted to add that because I didn't say it in my intro, and then I'm done at that point. Thanks.