The topic of my talk is SyRI: profiling citizens in a welfare state. I'm Ronald Huissen. I represent the Platform for the Protection of Civil Rights in Holland, and I work together with a group of NGOs. We sued the Dutch state over an algorithmic profiling system that was being used on Dutch citizens in the social security sector.

First I will give a short introduction to SyRI. Then I will go into the problems we raised with SyRI when we sued the state, and then briefly into the changing public debate on SyRI. After that I will give you the most important points of our court ruling and say a little about the positive developments that have come out of this court case.

About me: I am a journalist and researcher. I work together with NGOs on privacy and civil rights. I focus on digital technology in the social domain, as opposed to digital technology in policing: the domain of people getting help from the government and receiving social benefits.

To introduce SyRI, I want to show you a YouTube video, an animation we made to introduce the topic: As a citizen, you share all kinds of personal information with the government. When you file your taxes, apply for a benefit, ask for a parking permit or register a dog, you give this information in good faith and do not expect it to be used against you. But recently that has changed. The government can now link your information with all kinds of other sources and feed it into a risk model to determine whether something about you is suspicious. For our government, you are suspected in advance. The system, in use since 2014, is called System Risk Indication, or SyRI. It is used to scan citizens for fraud risks. That happens without your knowledge and without your consent. If you match enough of the risk model SyRI uses, a flag is automatically attached to your name. Maybe you haven't done anything wrong yet, but government agencies will keep a sharp eye on you. You don't know that; you are not told.
But how does SyRI actually determine whether you are a risk? That too is secret. Are you already a suspect? Look at www.bijvoorbaatverdacht.nl.

Well, let's first talk about SyRI's goals. SyRI, which stands for System Risk Indication, is a risk profiling system that the Dutch government implemented in 2014. It aims to produce so-called risk notifications on citizens. According to SyRI's data analysis, these citizens have a heightened risk of not complying with a very broad collection of laws in the domain of social security. Citizens who are flagged by SyRI end up in a so-called risk register, a registry of risk notifications kept for two years, which government authorities can access in order to use the notifications to investigate someone further.

The goal of SyRI, according to the law, is quite a mouthful: "the prevention and combating of unlawful use of public funds and provisions in the field of social security and income-related schemes, the prevention and combating of tax and premium fraud, and non-compliance with labor laws." When I try to explain this to my mother-in-law, I tell her it's there to scan every citizen for every penny they receive from the government, or any penny the government expects to receive from them.

So what sources does SyRI use to assess citizens? Seventeen openly formulated categories, which are all summed up here. Notice that these categories are described quite broadly and vaguely, so each of them can contain dozens or even hundreds of types of personal data. The highest advisory council of the Dutch government advised against the implementation of SyRI, saying that this list is so broad that it is hard to imagine a type of personal data that is not covered by it.

And where does SyRI get this data? From the big public authorities that basically make up the welfare state in Holland: the tax service, the unemployment agency, the social insurance bank, the labor inspectorate, the immigration service and the local municipality.
It works by de-siloing government databases. These databases are filled with all types of personal data that citizens provided for a very specific purpose: verification of whether they have the right to a certain social benefit or a tax cut. But all of these data are centralized and combined for a new, very broad purpose, namely risk indication.

How does SyRI make use of this data? We were very curious about that. What combinations does SyRI make? What defines someone as a risk? What logic does SyRI use to assess people? So we filed a freedom of information request, and the answer proved very disappointing. Almost every question about SyRI was declared confidential by the Ministry of Social Affairs. According to the ministry, disclosure would play into the hands of anticipating fraudsters: if people knew what SyRI was looking for, they would be able to game the system. So everything was declared classified.

However, we did find out a few things about how SyRI functions and how it is deployed. It functions as what we call a neighborhood trawl net, referring to the large amount of data that is collected and the number of people that are analyzed. It analyzes every citizen in one or more adjacent postal code areas. And every single neighborhood where SyRI was used was relatively poor, with many inhabitants who depend on the social services SyRI scans for, and with a very large share of people from migrant backgrounds. That is one of the problems we have with SyRI, which I will go into now.

We sued the Dutch state in March 2018 with a very broad coalition of several NGOs and two Dutch writers. Our first goal was to have SyRI removed from the law, never to return. We also wanted to change the discourse on this topic and mobilize public opinion on instruments like SyRI. Our main problems with SyRI can be divided into two main topics: SyRI is a carte blanche in a black box.
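To make the mechanism just described more concrete, here is a minimal sketch of a trawl-net-style pipeline: merge per-agency records on a citizen identifier within selected postal codes, apply a risk model, and flag those above a threshold. To be clear, SyRI's actual model, data combinations and thresholds are classified, so every field name, rule and number below is purely hypothetical.

```python
# Purely illustrative sketch of the pipeline described above. SyRI's real
# risk model is secret; all attributes, rules and thresholds here are
# hypothetical stand-ins.
from dataclasses import dataclass, field

@dataclass
class CitizenRecord:
    citizen_id: str
    postal_code: str
    attributes: dict = field(default_factory=dict)  # merged across agencies

def link_databases(agency_tables, selected_postal_codes):
    """Merge per-agency rows on the citizen identifier, restricted to the
    selected (adjacent) postal-code areas: the 'trawl net' step."""
    merged = {}
    for table in agency_tables:
        for row in table:
            if row["postal_code"] not in selected_postal_codes:
                continue
            rec = merged.setdefault(
                row["citizen_id"],
                CitizenRecord(row["citizen_id"], row["postal_code"]),
            )
            rec.attributes.update(row["data"])
    return list(merged.values())

def risk_model(record):
    """Hypothetical stand-in for the classified model: any opaque scoring
    function over the merged attributes would fit here."""
    score = 0.0
    if record.attributes.get("benefit_recipient"):
        score += 0.4
    if record.attributes.get("address_mismatch"):
        score += 0.5
    return score

def run_trawl_net_scan(agency_tables, postal_codes, threshold=0.7):
    """Return citizen IDs whose risk score exceeds the threshold; in SyRI
    these flags would sit in a risk register for two years."""
    linked = link_databases(agency_tables, postal_codes)
    return [r.citizen_id for r in linked if risk_model(r) >= threshold]
```

Note how the structure itself reproduces the two objections raised in the talk: the merge step accepts any attribute from any table (carte blanche), and nothing in the output tells a flagged citizen which rule fired (black box).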
The carte blanche part: the government is free to roam through the seventeen categories I just summed up, and basically all the data you ever gave to the government is fair game for this new, very vague and broad purpose. The black box part: no transparency whatsoever is given about how you are assessed, what data is used, or what combinations are made.

In our view this creates a paradox of transparency. The government should be as open as possible to citizens, and citizens should have a right to a private life. In SyRI this is completely turned around: the government is completely intransparent, while citizens are forced into openness about almost every aspect of their personal lives, considering the seventeen categories SyRI is free to browse through. This, in our view, is a recipe for distrust, which our co-claimant, the author Maxim Februari, formulated as follows: "If you are no longer assessed on the basis of violating a norm, but on the basis of secret risk profiles, the fear of repercussions is present in every contact with the government." I think that sums up our main problem with SyRI pretty well: it creates a climate of distrust which shouldn't exist in a democratic state under the rule of law.

Since we announced our court case, we have seen the public debate on SyRI and instruments like it shift. How did this happen? First of all, and most importantly, it appears that the citizens being analyzed with SyRI are not too keen on it. SyRI was used in two districts in Rotterdam South, the poor part of Rotterdam. When the labor union that is part of our coalition spread the word about SyRI in these two districts and organized a public event to inform citizens, SyRI turned out to be very controversial among the inhabitants. They felt they were treated like criminals, with the city specifically picking their neighborhood to scan. A short interview from the Dutch daily Trouw is very telling in my opinion.
Trouw interviewed an attendee at the information event we organized: "Deeply offended is Majo Boekeu. The Rotterdammer has been living in the Netherlands for almost half a century, 50 years in which he worked hard as a carpenter and provided for his three children. They all have a job, he says proudly. And now this. With a sweeping arm he points to the apartment blocks in his Hillesluis neighborhood. 'They live there, the criminals. Apparently I am one too. Apparently I've worked for nothing all those years.'" I think this is pretty telling of how it feels when the government has decided to pick your neighborhood and turn everybody inside out, looking for reasons to investigate citizens.

Another development in the shifting public debate on SyRI: a week after the protests in Rotterdam arose, Mayor Aboutaleb announced that he would stop the SyRI investigation because of a conflict with the ministry about also using police and medical data, the only two types of data not yet within SyRI's scope. The ministry wanted to use these data too in the SyRI investigation in Rotterdam; the mayor could not agree to that.

In the same week, de Volkskrant revealed that SyRI had not caught a single fraudster since its adoption. There were many different reasons for that; the main ones were technical and capacity problems, but also a lack of added value. I can go into that after the presentation, but in my view it really showed that SyRI has no extra value on top of the less far-reaching instruments the government already has.

And last but not least, a week before the court session was held, the UN Special Rapporteur on extreme poverty and human rights condemned SyRI in a letter to the court, calling it the digital equivalent of inspectors knocking on every door in a district.
Now I want to go to the verdict, which came on February 5th, two months ago. It was a big win for our coalition. The SyRI legislation was declared non-binding, which is the most rigorous way for a court to deem something illegitimate. We won on the most important and fundamental points that exist in privacy and data protection law: things like transparency, which the court called the leading principle of data and privacy protection, verifiability, and data minimization.

We had somewhat feared that the verdict would present a repair option, so that a few cosmetic changes could be made to SyRI and the government could just go on using it. But that is not an option anymore. The verdict is very fundamental, and it really leaves no room for a system as far-reaching and intransparent as SyRI. So we are quite happy about that. The most important point of the victory: SyRI is insufficiently transparent and verifiable, and therefore citizens cannot defend themselves against the data analysis carried out. And this is based on fundamentals of privacy and data protection law.

One of the biggest wins, in our view, is that The Hague court confirmed that instruments like SyRI can have a chilling effect on the willingness to share information. That was one of our main problems with SyRI: it creates a climate of distrust between the state and its citizens. I don't want to tire you with long citations from the verdict, but I will give one now, because it is one of the best fragments of it: citizens should be able to defend themselves against the fact that risk reports have been submitted about them.
But also, even if no risk notification is submitted about a citizen, they still have the right to know how they are being analyzed and what data is being used; they must be able to track their personal data. This really makes it impossible for a system like SyRI ever to be built again.

Other positive developments: several public authorities and governments are already reviewing their own in-house data analysis instruments in light of the SyRI verdict. The UWV, the unemployment agency in Holland and a very big public authority, is checking its own instruments on its own initiative. And several laws have been put on hold; at least one big data-matching law has been put on hold because of this verdict, as it is simply too far-reaching in light of it. So that is also a good thing.

At the same time, we have the biggest scandal ever in the Dutch tax service, with hundreds and maybe thousands of people forced to pay back tens of thousands of euros without any sufficient legitimation from the tax service. The tax service cannot even prove that they were fraudsters, and the people being treated as fraudsters still don't know why they are being treated that way. It is a very bad thing for the people involved, but it did create a perfect storm for the SyRI verdict, because it is now clear to everyone that data analysis can have a very big impact on people's lives, and can actually destroy people's lives, as we have seen in Holland.

So the SyRI court case provides very valuable lessons and a strong legal precedent for current and future data analysis practices. But we still need to actively push these lessons, because not all governments and public authorities will take their own responsibility. That is something we need to keep an eye on in the future. That was it.
Thank you very much for listening to my talk. I hope it's been clear and it's been interesting.