Okay, welcome our next speaker, Christina Rosa.

Thank you for the introduction. I'm Christina, and I'm going to speak to you about artificial intelligence and the right to be forgotten.

First, I'll start with a reference from Cicero regarding Themistocles' choice. Back in antiquity, he made an interesting choice: given the chance to master either the capacity of remembering things or the capacity of forgetting things, he chose to master the capacity of forgetting, because remembering things, in his opinion, was not so important, and being able to forget mattered more. His reasons back then were that everything he had ever seen or heard stuck in his memory, and nothing that flowed into his mind ever flowed out again. Even though this dates back to antiquity, I believe those two reasons still apply to our current situation.

Now, regarding the paradox of forgetting: if I tell you, for example, right now, please don't think about a pink elephant, what are you going to do? Are you not going to think about a pink elephant? I think you're actually going to think about that elephant. This is exactly how our psychological mind works, and it is a major issue when we deal with these questions about technology and the GDPR, as was said before.

This paradox of forgetting led Umberto Eco to state that an ars oblivionalis, an art of forgetting, does not exist. In the status quo we have many techniques for remembering things: we buy different tools, we try to memorize courses at university, and so on. But for forgetting, we don't really have a clear technique. How exactly do we get something out of our head once it has entered?
I mean, if we're thinking about traumas and things like that, it gets even more problematic. So Umberto Eco stated in one of his works that an ars oblivionalis does not exist. Then why exactly are we talking about a right to be forgotten? What is it?

Let me give you a legal definition first; I'll just throw it out there, since you have more of an IT background. This is what a legal definition looks like: it comprises the substantive right of oblivion and the rather procedural right to erasure from data protection law. The right of an individual to be forgotten means that he has the legal possibility of obtaining, automatically or by request, the deletion of personal information that is no longer relevant or useful, posted online by himself or by a third party, even if it was initially legally posted.

I would say that, in the spirit of the right to be forgotten, we should look more to antiquity, at how people forget psychologically and how we forget as a society, than at the technique of deleting something, or at how exactly we are going to make an AI system forget something, which I will actually argue is not really possible.

First, let's look at the European perspective on the right to be forgotten. Three major things happened in the past. We have Directive 95/46/EC, from 1995, on the protection of individuals with regard to the processing of personal data. This directive was not used very much at the time. It started to become more important in 2012, with the case of Google Spain and Google Inc. versus Mr. Mario Costeja González. He practically used this directive to make the European Court of Justice give us the definition of the right to be forgotten, which prior to that didn't actually exist very clearly.
And then we have the famous GDPR that my colleague talked about earlier, where Article 17 actually writes down a right to erasure and a right to be forgotten.

Now, let me speak first about what happened in 2012 with that case. Mr. González noticed that every time he Googled his name, he appeared next to two articles from La Vanguardia from 1998, when he had some money problems and his house had to be put up for auction. He believed that information was no longer relevant, since the debt had long been settled, so it shouldn't be there; it was a problem that every time somebody Googled his name, the information came up.

So first he asked the newspaper to delete those two articles. The newspaper denied his request. Then he went further, to the AEPD, the Spanish agency that protects privacy, and this agency also said it was not possible to delete those two articles, because they had initially been legally posted online; there was no problem, this was freedom of expression. Then he went to the Spanish National Court. In 2012, the Spanish National Court had trouble confronting this request, because it was actually quite strange that somebody would want information deleted; it was the first time this had happened. Confronted with this issue, the Spanish court referred three questions to the European Court of Justice.

The answer to the first question was that, between suing Google Spain and suing Google Incorporated, the claim should actually be directed at Google Incorporated, the corporation in the US. The second answer was that the search engine qualifies as a data controller, as defined in the 1995 directive I was telling you about.
And the third aspect was the definition of the right to be forgotten that exists today; you can see it on the screen. As for the GDPR definition, that is going to be the legal definition starting from May this year. I'm not going to read it out, sorry, I'll just move on.

Now, the notion of the right to be forgotten, even though we speak about it now, is actually quite old; it dates back to 19th-century France, and we can also find it, in different forms of course, in Italy, in Spain, and in Germany.

Here are some statistics on deletion for GDPR compliance. The survey was conducted on 750 IT companies, and one of their answers I believe is very relevant. If your company receives a request based on the right to be forgotten, what method would you use to delete the content? One of the answers was "basic deletion", and basic deletion actually means taking the file and dragging it to the recycle bin. That would not actually delete anything, and it would not offer proof of erasure in an audit, for example.

This is another statistic about the methods they are planning to use, the methods that currently exist for deleting something. For AI systems, the only thing on this list that could work, in my opinion at least, and I'm not a tech person, is to burn it. Or, I don't know, use acid or something like that. That is basically physical destruction, because we cannot use cryptographic erasure and we cannot use data erasure to make an AI system actually stop using certain information that is inside it.

Now, I was telling you that back in 2012 the Spanish National Court had trouble dealing with Mr. González's request. After 2012, something miraculous happened.
Suddenly, a massive flow of requests was made to Google in the name of the right to be forgotten. The source of this statistic is Google's transparency reports. The most popular targets for the right to be forgotten are Facebook search results, followed by YouTube and Twitter. This was basically Google's answer to the 2012 case: they hired dozens of lawyers and asked them to deal with requests made in the name of the right to be forgotten.

At the national level, one thing I would like to mention is France, whose regulatory agency, the CNIL, was the first to request that Google extend removals globally, not just on Google France but also on Google.com. And, which is very interesting, society's answer was that people actually held demonstrations against this, because they were advocates of freedom of expression.

These are some criteria for evaluating deletion requests. We can see here the data subject's role in public life, which shows the general tendency to strike a balance between public life and private life. Then the nature of the information: biased towards an individual's strong privacy interest, or biased towards a public interest. Then the source of the information: if you are unlucky enough to get posted in a very, very important newspaper, your request will probably not be granted, because the New York Times is, well, cooler than you. And finally time, which is practically the passage of time, the argument Mr. González invoked: too much time has passed, the information is no longer relevant. This is particularly relevant for criminal records, because if somebody committed a crime in the past, that crime is no longer legally relevant after a certain time. The problem is that if an article was written about it online, online it remains. And this is a problem.
So we have requests on this matter. Now, here are some articles that have been erased from Google search results, which I found on Google. And here are the people that were "forgotten". Let's see what we have forgotten so far. We forgot that Tim Blackstone, a former PR man and brother of Baroness Blackstone, was found guilty of insider trading in 2003. And this is the pink elephant, basically; this is what I'm telling you: it's not really consistent. We don't really forget something when we see something like this; we see the name, we see the source, we see practically everything. A Spanish court ordered an investigation into allegations that a Saudi billionaire prince raped a woman on a yacht in Ibiza in 2008. An article detailing in 2003 how the Roman Catholic Church reached an out-of-court settlement regarding... yeah, I mean, you can read it, there it is.

Okay, so practically this is an illustration of why the right to be forgotten should be connected with the psychological way of forgetting, and of how the paradox of forgetting works. Reputation-management sites talk about the boomerang effect: if we draw attention to a piece of information, in reality we get everybody to speak about that pink elephant I mentioned at the beginning. Also, if a request is accepted by Google, a notification that the article was delisted in the name of the right to be forgotten gets posted, so that draws even more attention. And another thing that happens is that there are sites that actually collect these kinds of links, like a collection; Wikipedia also kept such a list. So we don't really forget. I mean, it would be hypocritical to say that what they are doing is actually consistent.
Now, let's go deeper into the technological side of things and see what happens when we try to delete something from a database. This is valid not just for MySQL; it's valid for most database technology. In Figure 1 we can see the database before deletion. We have five records, C1 to C5; I marks the start of the data and S the end. Record C3 is marked for deletion and is also linked to a garbage offset, which is a collection of deleted records and currently available free space.

Now, what happens with the database when we want a record deleted? The database searches for the data, starting from I and going to S, the end. Since C3 is no longer found there, the search reports that the information is gone. But in reality the information remains, and it stays there until more free space is needed. Only after a certain amount of time can we speak of a real deletion of that information. So, in reality, it is not a real deletion. Viviane Reding, Vice-President of the European Commission, also said that it is clear that the right to be forgotten cannot amount to a right of the total erasure of history.

Now, here we have another interesting statistic on the major investors in AI technology and the amounts of information they collect about us. As you can see, Facebook, Google, Google Drive, Google Photos and so on collect a massive amount of information, and they also invest in AI. Now let's think about what goes on with those two pieces of data together. In reality, they take our data, which we offer so voluntarily, and use it to improve their AI systems, to make them better for production.

Let's see what AI is. I've made a really long story short; this, very briefly, is what AI is.
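Going back to the database deletion for a moment: the mark-for-deletion behaviour I described can be sketched in a few lines of Python. This is a toy model with made-up class and field names, not MySQL's actual storage engine, but it shows the same effect: a "delete" only flags the record and adds its offset to the garbage list, and the bytes physically survive until the space is reclaimed.

```python
class TinyTable:
    def __init__(self):
        self.records = []   # physical storage: the bytes live here
        self.garbage = []   # offsets of "deleted" records (reusable free space)

    def insert(self, value):
        self.records.append({"value": value, "deleted": False})

    def delete(self, offset):
        # "Deletion" only flags the record and remembers its offset as free.
        self.records[offset]["deleted"] = True
        self.garbage.append(offset)

    def search(self, value):
        # Scan from the start (I) to the end (S), skipping flagged records.
        return [i for i, r in enumerate(self.records)
                if not r["deleted"] and r["value"] == value]

    def compact(self):
        # Only when the space is reclaimed is the data physically gone.
        self.records = [r for r in self.records if not r["deleted"]]
        self.garbage = []

t = TinyTable()
for v in ["C1", "C2", "C3", "C4", "C5"]:
    t.insert(v)
t.delete(2)                      # "delete" record C3
print(t.search("C3"))            # → [] : the search says it is gone
print(t.records[2]["value"])     # → C3 : but the bytes are still there
t.compact()                      # only now is C3 really gone
```

The gap between `search` saying "gone" and `records` still holding the value is exactly the gap between a legal "deleted" and a physical "deleted".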
So, we have here reinforcement learning, but I would like to speak more about neural networks. As you can see in the drawing, a neural network has an input layer, a number of hidden layers, which can be one or two or more, and an output layer; there can be just one hidden layer, but I selected an image with three. The interesting thing happens in the middle: once the information gets inside, no matter whether we anonymize it or not, it circulates through these layers, and in the end we can get an output that is similar to the input. Think, for example, of an AI that works like a dictionary; or the output can be a totally different idea.

Now, this is the current status of AI systems and their performance. As you can see, they don't perform that well yet, and this is Apple's most advanced Face ID technology, which, again, doesn't perform that well. This is an actual article: these two women were colleagues, and one was recognized as the other, even though Apple said the probability of that happening is one in a million.

This is a recital from the EU GDPR related to your right to be forgotten: we have the right not to be subject to a decision based solely on automated processing which produces legal effects concerning us, without any human intervention. So this applies to AI.

Now, erasure of one's data. Researchers have tried to erase data from AI systems in order to respect the right to be forgotten, and they reached the conclusion that it may work, though the research to date was conducted on randomly selected information. When we speak about the right to be forgotten, we speak about removing a specific piece of information from the AI, and that has not been tested so far. So we don't know exactly what happens if we say that Mr. González, specifically, shouldn't be in there; we only know what happens when some random record shouldn't be there. And if Mr. González has a characteristic that makes the model more valuable, something related to the classes the AI has learned, then the AI will lose performance. So erasure will impact the AI; we cannot really erase the data from such a system.

Now, another practical issue is that, according to the GDPR, a data analyst should obtain the consent of the person every time before analyzing their data. This is problematic again, because we cannot realistically get consent for every analysis, and if we add shared environments or cloud computing, used to improve performance, the problem gets even bigger.

Functional encryption algorithms don't work on big data. Pseudonymization does not solve compliance either, because the GDPR says the same rules apply to pseudonymized data as to ordinary personal data. And data anonymization, which means blurring the sensitive data into a derivative form, has no real practicality for them.

Now, possible approaches. The strategy of obfuscation: this is a strategy that dates from the Second World War, and it rests on the idea that if I bombard you with lots and lots of information, then the relevant information will no longer be visible. So when we speak about lots of data, this is one strategy they thought of: bombard situations like this with so much information that, in the end, people no longer focus on the relevant data. Of course, this strategy does not work so well with AI, but it is the main strategy being tried in order to comply with the right to be forgotten. Data minimization is actually the only approach that can truly work, because if we don't share our information, if companies never get our information, then there is no such information to speak about in the first place.
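To make the point concrete that a specific record cannot simply be subtracted from a trained model, here is a minimal sketch. It is not from the talk and not a real unlearning algorithm; the data is made up, and instead of a neural network it uses a tiny least-squares model, which is enough to show the mechanism: every training record leaves a trace in the learned parameters, so honouring an erasure request means retraining, and removing an influential record changes the whole model.

```python
import numpy as np

def fit(X, y):
    # Ordinary least squares: the learned weights depend on every record.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=50)

# Add one unusual, influential record (our hypothetical "Mr. González").
X = np.vstack([X, [5.0, 5.0, 5.0]])
y = np.append(y, 40.0)           # deliberately off-trend

w_with = fit(X, y)
w_without = fit(X[:-1], y[:-1])  # "unlearning" him = retraining without him

# His record left a trace in every learned parameter.
print(np.allclose(w_with, w_without))   # → False
```

There is no operation on `w_with` alone that recovers `w_without`; you have to go back to the data, which is why erasing one person from a deployed AI system is so much harder than deleting a row.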
So we should advocate for the right to privacy, and this is what I would like you to remember from this talk: we should be more careful about our privacy, because in reality, if something goes online, it cannot really be deleted; it will stay there. This is a major problem. Yes, we have the right to be forgotten; yes, we have the GDPR, and it's a wonderful thing that Europe will have the GDPR; but in reality there are technological impediments that will not allow us to delete, and we have also seen what has happened so far to people who became very particularly interesting through what Google returns for their searches.

Okay, so I would like to advocate, practically, for the right to privacy. Take back control over your data. Remember that we have the GDPR; let's make Europe state of the art in terms of respecting privacy. Also think about new technological approaches for dealing with AI: from the open source community, you can invent something that deals with deletion properly. And think about the fact that if, in the future, you manage to build something that prevents a system from recognizing a person, the user, such strategies of non-recognition would still comply with the right to be forgotten. If you're thinking about something like that, that would be amazing from the technological side. Also, use ethical products that respect your privacy as a user; I selected some of them here: Mastodon, Matrix, and CryptPad. If you want to hear about CryptPad, there's a talk at 4 o'clock. Thank you.