And I will start by talking about my 2.5-year journey of bugging the face search engine Clearview AI. Probably most of us leave photos somewhere on the internet because we want to share them with friends and family. Some of us appear in the media, or work at a company where our face appears on the website, and those websites leak information about you, because they sometimes reveal your political opinion, your sexual orientation, or who your employer is, things like that. Face search engines crawl the internet, take all those photos, and make them searchable based on biometric data, and that is a problem, because you cannot really hide from it. Even if you change your face a little, or even if you delete your photo after a while, the face search engine might still know about it.

In early 2020, we learned about the company Clearview AI. There was this wonderful New York Times article about this company that might end privacy as we know it. It told the world about Clearview AI, the face search engine for law enforcement, but at that time also for private customers. At that time, they had about 3 billion photos in their database. One product they wanted to build was, for example, smart glasses: I could put on these glasses, look into your face, and the glasses would tell me who you are, where you work, and what you like, for example. Today, Clearview AI has about 20 billion photos in its database, and they are aiming for 100 billion by the end of the year. So their goal is to be able to identify every single one of us.

Two days after this New York Times article, I sent a data subject access request to Clearview AI, because I had reason to believe that they had data about me: there are some photos of my face on the internet, for example this photo. I used this photo to ask Clearview AI for my data. I simply wrote an email. After about a week, they asked me to confirm my identity. I gladly ignored this request; I do not recall why, but I know that I never replied. Still, about a month later, in February 2020, they completed my data access request and provided the search results for my face. They also told me that they had deleted the image I shared beforehand to initiate the request.

This is the search report they prepared for the search image I provided. They cropped the search image a little, and they found two photos that a photographer had uploaded, without my consent, to a stock photo website. Here you can see the photos that were shared online: I participated in a student project, and there were media reports on it, which is why those photos were on the internet.

In May, without asking, I got another search report from Clearview AI. So in fact, they had not deleted the search image I provided. This time, they included eight more faces in the search report, and those pictures did not show me. So Clearview AI is far from perfect. Here they included lots of images from a Russian social network, but also from Instagram. In addition to the photos, they provide the URL and the title of the web page, so you can often learn, for example, the name of the person. So I learned that Clearview AI does in fact have data about me, biometric data, and that they were using this biometric data without my explicit consent. Because of that, I filed a complaint with the Hamburg Data Protection Authority.
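To make "searchable based on biometric data" concrete: a face search engine does not compare pixels, it maps every face to a numeric embedding (the "biometric hash value" mentioned later) and then looks for nearest neighbors. Below is a minimal sketch of that idea using the open-source face_recognition library as a stand-in; this illustrates the general technique, not Clearview's actual proprietary pipeline, and all file names and URLs are made-up placeholders.

```python
# Minimal sketch of how a face search engine indexes and matches faces.
# Uses the open-source `face_recognition` library (pip install face_recognition);
# Clearview's real pipeline is proprietary, so this only illustrates the concept.
import face_recognition

# "Crawl": compute a 128-dimensional embedding for every scraped photo.
# The photo paths and URLs here are made-up placeholders.
index = {}  # source URL -> face embedding
for url, path in [
    ("https://example.com/profile1.jpg", "scraped/profile1.jpg"),
    ("https://example.com/party_photo.jpg", "scraped/party_photo.jpg"),
]:
    image = face_recognition.load_image_file(path)
    encodings = face_recognition.face_encodings(image)  # one entry per detected face
    if encodings:
        index[url] = encodings[0]

# "Search": embed the probe photo and rank the index by embedding distance.
probe = face_recognition.load_image_file("probe.jpg")
probe_encoding = face_recognition.face_encodings(probe)[0]

urls = list(index.keys())
distances = face_recognition.face_distance([index[u] for u in urls], probe_encoding)
for url, dist in sorted(zip(urls, distances), key=lambda x: x[1]):
    # ~0.6 is the library's default match threshold
    print(f"{dist:.3f}  {'MATCH' if dist < 0.6 else 'no match'}  {url}")
```

Note that the embedding, not the photo, is what gets stored and searched; that is why "deleting the hash value" and "deleting the photo" are two different promises, a point that comes up again in the Q&A.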
A second, minor issue was that the answer Clearview AI provided to me was not complete, but the main issue is that they are processing my biometric data without my consent. And to my big, really big surprise, the Hamburg DPA simply told me: the GDPR is not applicable, we can do nothing.

So let's have a look at this. If we look at Clearview AI, we notice that there is automated processing, because they are crawling the internet. It might be true that Clearview AI has no office within the Union, but there may be customers within the European Union; we will learn about that later. And they are monitoring the behavior of European citizens, because if you regularly share photos of yourself on the internet and Clearview AI crawls the internet over and over again, they will have a very good profile of the things you do. Then there is Article 9, which covers more sensitive data: biometric data is sensitive data under the GDPR, it is under special protection, and Clearview AI is not allowed to use this data without your explicit consent. So the GDPR is applicable, and there could have been some exceptions under Article 6, but Clearview AI runs its service only to earn money, so this processing of biometric data is not necessary in the sense of Article 6 of the GDPR.

That is more or less what I answered to the Hamburg Data Protection Authority. I am not a lawyer; I just read some articles and wrote an email in reply to the Hamburg DPA, and they changed their mind. They then agreed that the GDPR is applicable. It might have helped that I talked to a journalist from the German magazine Der Spiegel. After a while, the Hamburg DPA sent multiple questionnaires to Clearview AI. In the beginning, Clearview did not really answer those requests, but after some time they were ordered, under threat of penalties, to finally answer; otherwise they would have had to pay up to 170,000 euros. So in the end Clearview AI replied, and after about one and a half years they deleted my biometric hash value. But they did not care about other Hamburg residents or other German residents, and they did not say what they would do if they find new photos of my face in the future. To this day, this complaint at the Hamburg DPA is open; there is no final decision. We will talk to them again in August. And since the Spiegel article appeared, I have been supported by noyb (None Of Your Business), which I am very happy about, and together we are trying to find a way to get a final decision soon.

It might be interesting to know who uses Clearview AI in the European Union. Leaked documents, which were sadly leaked only to one news organization and not to the public, showed that law enforcement agencies, companies, but also some rich individuals were able to use the search engine. With the help of freedom of information requests in lots of European countries, and also with the help of members of parliament in different countries, we learned that most countries do not know much about the use of Clearview AI by their own officials. They often said there are no live or historic contracts, or that they are not aware of any use of Clearview AI. That is something the European Commission said as well, in July 2020.
Members of the European Parliament were not so happy about this answer, because at this point in time there had already been lots of media reports about European companies that had used Clearview AI, and not just companies: the police in Sweden, for example, used Clearview AI at a workshop that was organized by Europol. So while Hamburg had a good start, I would say other countries moved faster. Clearview AI has in the meantime been deemed illegal in Australia, Canada, and the UK, but also in countries of the European Union, including France, Greece, and Italy. There have been at least two fines of up to 20 million euros, and there have been orders to delete the data of residents of the corresponding countries. A good question is how to enforce those, but it is a good start to finally decide that Clearview AI is illegal within the European Union.

Clearview AI is of course not the only face search engine. There is also PimEyes. PimEyes might be more dangerous than Clearview AI, because PimEyes is public: it can be used by every one of you. It is basically a different search engine but the same story, because Hamburg is doing nothing again. PimEyes was originally based in Poland but moved to the Seychelles to be better able to ignore the GDPR. So we learned that DPAs, data protection authorities, are too slow, and that the GDPR does not help against the Clearview AIs of this world. And now Epunk will continue by telling you about the initiative Reclaim Your Face, which might help to improve the situation in the future.

Hello, hi. Yeah, I always wanted to perform in a circus tent; I never knew it would be about biometric surveillance. So I am talking about Reclaim Your Face, an initiative that turned into a network over the last one and a half years, run by EDRi, European Digital Rights, which is basically an umbrella organization for the European digital rights NGOs. It is about biometric mass surveillance, and we demand that it be completely banned. We have heard about photos now, mainly photos, but of course biometric data is much more: it is your fingerprints, your voice pattern, your typing pattern, the way you walk, the pattern of your veins, whatever. And the problem is that you keep this data for your whole life. So once it is in a database, it will stay there.

These slides are from EDRi, and I am basically preaching to the choir here, so I will do it fast. Of course we have the problem of general monitoring in parking lots, on public transport; on every corner of the city you have cameras. You can do predictive policing with it. We all know it does not work, but the research goes on anyway. The problem is that all of it is kept in databases, I do not have to tell you about that, and at the borders you can get into trouble if they compare your face in real time to, say, lists of unwanted persons. And as Kantorkel told us, there are a lot of pictures of us on the internet, and right now you basically cannot do anything about it.

So Reclaim Your Face is a civil society initiative, a European Citizens' Initiative: if we collect a million signatures, the European Parliament has to hear our complaints. We made demands, and there is a new law coming, the AI Act, the Artificial Intelligence Act. The first proposal came out in April, and yes, some of our demands were heard.
So, yeah, real-time recognition is not allowed, and remote biometric identification should be prohibited. But of course there are too many exceptions, and there is very vague wording, as we unfortunately often have in European Union laws. Too many exceptions, especially for law enforcement. And just because real-time recognition is not allowed does not mean they cannot record all the material and go through it afterwards, like they did in Hamburg at the G20 summit: they collected terabytes of material and analyzed it afterwards. It is still pending whether that was legal or not. So, basically, if you are interested in these problems, please join the network, act on a local level, and sign the initiative; it is still running until the end of the month. Reclaim your face! It helps to have a strong voice if you have a lot of signatures. And yes, we have stickers. Thank you very much. I am giving the microphone to Lutter.

Well, hello everyone. Since we are in the Netherlands, I am going to tell you a little bit about the Dutch situation regarding facial recognition, and especially about Clearview AI. I myself also did a data subject access request at Clearview AI, and my face is also in this database. I had contact with the Dutch DPA about filing a complaint, because we now see in a lot of countries in Europe that this is quite successful and that the DPAs are critical. But the Dutch DPA had a kind of different perspective: they told us you should not file a complaint directly against Clearview AI, but against a client or customer of Clearview AI. We thought this was kind of weird, because, as Kantorkel just argued, the GDPR is applicable to Clearview AI directly. And it is quite hard to find out who these customers exactly are.

So we worked together with a Dutch journalist to get some insight into which governmental organizations were making use of Clearview. We did FOIA requests, we asked around, but we found nothing, except the numbers that some of you might know, which were published by BuzzFeed. And those told us it was the Dutch police. So we even asked the Dutch police directly to explain these numbers. But there we just got vague answers, also from the minister, kind of obfuscating the whole situation by saying something like "we are not using this as an organization", leaving all the space for individual cops to use Clearview AI.

Now, this shows several things. It makes it very hard for us to file a complaint at the Dutch DPA, since they have this different perspective. It also shows that we have one GDPR that should be applied equally, and that is not the case, which is a bit weird. And it is also why we would like to talk with you in a more interactive way, because we are here together with a lot of people from different countries, people who might have tried different strategies, who might have to deal with different DPAs, who are in different situations. That would be very nice.

These are some things you could do if you are worried about facial recognition, or other biometric surveillance for that matter, in your country; they are meant as a kind of suggestion or inspiration. Epunk told you all about the Reclaim Your Face petition. You can sign it; it does help. You can also file your own access request and see whether you are in the database. It is also a good starting point for a complaint at your national DPA. And there are other ways to put pressure, especially if you are Dutch.
I would really like people to do this, because all the fines in the other European countries really show that the Dutch DPA has a perspective that is not common, and they choose to have this perspective. So I think it really helps if people put pressure on the Dutch DPA and say: hey, look at these other countries, there the DPAs are protecting the citizens. Why are you not doing anything? We want you to do something and stop this.

Well, we would like to organize this struggle a bit. You can see this talk as consisting of two parts. This is the end of part one, where we talk to you. And we would really like to have a part two where you talk to us, and we will talk back: a more interactive session where we can share ideas, share experiences, learn from each other, get organized, and see if together we can stop the use of biometric mass surveillance. Thank you.

One final remark: that meeting will be at the c-base. And we still have a few minutes for Q&A, if you don't mind. So if anyone has a question, please walk up to the microphone. As there are no questions from the internet, let's start at the back microphone. Please step close to the microphone.

Yeah, hi. Great talk, first of all. For the first speaker: what does it mean when they remove your hash value? Because if they just mark in their database "this is that hash value", they still have data on you. So I am curious if you can explain a bit more how that works, because they can re-fetch your original image and start all over again. Do you have to send in a request every year to get removed? What is your experience with this?

Well, that is a very good question, because that is exactly the point why we are not happy with the state of this complaint.

Second question, for the last speaker, about the Dutch situation: what was their response when you referred to Italy and the other countries that were ordering or even fining Clearview AI? Those countries are part of the EU, so the Dutch should be committed to the same regulations; it is the same law for the whole European Union, as far as I know.

Yeah, you would say that. So it is also not really clear to us why they take this stance specifically. We had conversations with them about maybe filing a complaint, and they just gave us their perspective and said: yeah, you can come to us, but then file a complaint against the users. And these were the reasons. And after other DPAs issued the fines, we asked them again, and they said: yeah, yeah, that is also possible. It is really vague, but it still does not look like they really changed their mind. It looks like they don't have the hacker's mentality. That's for sure. Yeah, thank you. Maybe you can help us in giving them that mentality.

Do we have another question? Please come up to the microphone.

Yes, thank you. I was wondering if there is maybe a chance to turn their system against them in an adversarial manner, with things like This Person Does Not Exist: generating a lot of faces in a way that would not match your hash value, whatever that means. Or is that something that is too easy to counter today? Is there any potential in there?

There might be some potential, but you would help only the very few people who do that, so it is not a final solution. Maybe you could protect yourself a little better that way, but I don't think it is something you could do at scale.

Okay, thank you.
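For the curious: the adversarial idea from this question is real research territory. Tools like Fawkes add tiny perturbations to your photos so that the embedding a recognition model computes no longer matches clean photos of you. Below is a minimal sketch of the underlying trick, using facenet-pytorch as a stand-in model; it illustrates the general technique under those assumptions, not any vetted privacy tool.

```python
# Sketch of embedding "cloaking": nudge an image so a face-recognition model's
# embedding drifts away from the original, while the pixels barely change.
# Stand-in model: facenet-pytorch (pip install facenet-pytorch); real cloaking
# tools such as Fawkes are considerably more sophisticated than this.
import torch
import torch.nn.functional as F
from facenet_pytorch import InceptionResnetV1

model = InceptionResnetV1(pretrained="vggface2").eval()
for p in model.parameters():
    p.requires_grad_(False)

# Placeholder input: a 160x160 face crop, normalized to [-1, 1] as the model expects.
face = torch.rand(1, 3, 160, 160) * 2 - 1

original_embedding = model(face)
cloaked = face.clone().requires_grad_(True)
optimizer = torch.optim.Adam([cloaked], lr=0.01)

for step in range(100):
    optimizer.zero_grad()
    drift = F.mse_loss(model(cloaked), original_embedding)  # want this LARGE
    visibility = F.mse_loss(cloaked, face)                  # want this SMALL
    loss = -drift + 25.0 * visibility
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        cloaked.clamp_(-1.0, 1.0)  # keep it a valid image

print(f"embedding moved by {F.mse_loss(model(cloaked), original_embedding).item():.4f}")
```

As the answer notes, this only protects photos you control: it does nothing about photos others upload, and models retrained on cloaked images tend to catch up, the same arms-race dynamic described for masks in the next question.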
Hello. Thanks for the talk. Some things are quite ironic: you have to stand there with your faces, someone is taking pictures, there is video everywhere. If you do this access request, you have to send in your face. If you want to sign the petition, you have to enter your data. It seems like the whole system is set up to block us from doing anything about it. It's not really a question, but first of all, thanks for being here, actually showing your face and doing this. But yeah, do you have any general thoughts on what we can do to change this whole thing?

Yeah, that is a very good comment, because I am here for a specific reason: I am here to tell you about Clearview AI, not to share my biometric data. In the future, I still want to be able to attend a protest without the fear that every police officer could identify me by simply taking a picture, because that might hinder me, or you, from attending a protest. So yes, I think it is necessary to show my face here, but you have to think about the reason why people are sharing their pictures, online or here. The bigger picture.

Thank you. And one last question from the microphone at the back.

Yes. I've heard that a lot of people are using artistic measures to obscure their facial features, the form of the face, and I've heard that it doesn't work that well.

You mean that people, for example, mask up to steer the AI in the wrong direction, with dots and lines? Yeah. What happens is that in the beginning it works, because you are covering data points, so you make it harder for the technology to detect your face. But then, during COVID for example, a lot of people had to wear masks a lot, so a lot of pictures were taken of people with masks on. So they created databases of masked faces to train the AI, and now it is better at recognizing faces that are masked. And I think you get the same dynamic with other things: you could now think of something else to cover a different part of your face, or make recognition harder in some other way, and it will help temporarily, and then the AI will probably be trained to make it possible again. Sorry, this is very pessimistic.

That's awful. Thank you for the answer, and for the talk.

I do have to cut you short, I'm very sorry, we are out of time. If you want to talk to these wonderful people, come by at the c-base, which is currently on tour here, I think, correct? Yeah, the pointy black tent. There are stickers there, and these people, and probably Mate and other stuff. So please drop by, and please give Kantorkel, Lutter, and Epunk another round of applause. Thank you very much.