And as Studium Generale, we organize all kinds of very interesting events on topics that matter — that's what we always say. You can see some of the upcoming ones here in the back. Also nice to know: we have a movie screening coming up on the 1st of November, which is always a lovely way to watch a film together with some of your peers, so definitely check that out. And for students of Tilburg University, there is a Studium Generale certificate: if you visit five of our lectures and write a short report about them, you can earn the certificate. More information can be found on our website.

And then to today's topic. You often hear the saying: if the system cannot do it, it cannot be done. This "computer says no" mentality can have really grave consequences — think of the Dutch benefits scandal, which I'm sure most of you have heard of. As we use AI more and more in our legal and democratic institutions, there are crucial steps we should not forget to take. And when it comes to our rights, even our human rights, in relation to AI technology, we should really think about how we want to protect them.

Someone who can tell us more about this: I'm very pleased to welcome Linnet Taylor. She is Professor of International Data Governance at the Tilburg Institute for Law, Technology, and Society — TILT — here at Tilburg University. She is also part of the ERC-funded Global Data Justice project, and her research interests include new sources of digital data, data governance, and human and economic development. Also nice to note is that she is part of the Gravitation programme on the Algorithmic Society, funded by the Dutch Research Council, NWO. So please give a big round of applause to Professor Linnet Taylor.

Hi, thanks everyone for coming today. Thank you so much for inviting me to give a lecture for Studium Generale — it's a privilege and an honor. Today I'd like to talk a little bit about how we're addressing AI in terms of government and in terms of governance: how it represents our interests, whether it can represent our interests, and what a just situation looks like with regard to AI technologies. So, if I can get this to work — there we go.

As Hannah introduced me, I'm Linnet. I come from a very varied background, including a degree in the humanities, so I come from neither a legal nor a technical background. I approach the question of technology governance from an interdisciplinary and societal perspective that includes questions of how our behavior and our regulations in the EU affect the rest of the world, how countries relate to each other, and how people relate to government.

The problem I'm going to talk about today is that digital technologies, and the problems they can cause us, have changed quite a lot over the last decades — I don't think that will surprise anybody in this room. But the way we guard against the harms these technologies create has not evolved with them. We're still reliant on framings that were current in the 1980s and 1990s for thinking about how to protect ourselves and how to assert our rights.
Ideas of responsibility, ethics and law really require updating, and a lot of us in the field are working on this at the moment. So I'm going to present some of our work on this to you today.

The way that harms are evolving is interesting. Does anyone know the Brainport project, next door to us here in Eindhoven? Put your hand up if you've ever heard of Brainport. Okay, so a bunch of people have. Brainport is a public-private partnership with the city of Eindhoven to create economic advancement through innovation and to transform the region around Eindhoven in line with the city's technological ambitions. Within that crew of Brainport projects, there is one being built, I think right now, in Helmond: a development where people will live in reduced-rent housing in return for their data.

What does that actually mean? Well, people will be giving up all kinds of data about themselves: data from personal smart devices, data from their houses, data about their social media usage, data from the local health clinic in this particular community, which is being built from the ground up to house the development. If you move into this development, you'll be living in a house that is specially enabled to capture all the data about you that you produce in the course of the day. There are certain limits to what people are allowed to sell about themselves — they are not allowed to put cameras in their bathrooms, for instance. But the stated aim of the project is to see whether people can conceptualize their private lives as something they can market, something they can produce as a commodity and sell. This is really the experiment going on in Helmond. These are the types of data they're interested in collecting. And I would say that living in a development like this shows a move on the part of city authorities and innovation authorities from experimenting on technology using people — which is what we do in living labs, for instance — to experimenting on people using technology.

Here are the organizations involved in the project. They include us — Tilburg University, here at the bottom — and a bunch of other universities as well. There are people at Tilburg involved in managing the project, but also taking part in it to study digital forms of democracy. Excuse me. There are also a lot of private-sector actors here: Siemens, Philips, KPMG, TomTom. You can also see that this is a 360-degree take on human life and the companies that interact with its different aspects — Albert Heijn, Jumbo, Blokker — so really everybody. And right up at the top, Talpa. Does anyone here know what Talpa does? Has anyone heard of Talpa? Exactly. It's a television company run by John de Mol, and it was the company behind Big Brother. Have you heard of the Big Brother television series? Okay. So there are uncomfortable similarities, I think, between Big Brother and this Brainport development over in Helmond, where people will be expected to live completely openly in exchange for reduced rent. It's pretty much signing up for Big Brother.

Why might this be problematic? Well, for one thing, you're not only signing away your own datafied life, your own digital existence — you're signing away that of your family. And as you saw in the picture I showed, there will be children living in this development, right?
People will come and visit the families who live in this development, and they will also be captured in terms of their data. So it's not just the person signing up on behalf of the household, or the adults signing up to live in these houses, whose opinion matters — it's really their communities. This is a networked problem; it's a family and a societal problem. Sorry, going forward rather than backward.

There's a wonderful legal philosopher called Julie Cohen who says that essentially we're being farmed for our data nowadays — that we live as part of what she calls a biopolitical public domain. Although we talk about privacy and data protection, a lot of the data that we care about, the data that really reflects who we are as people, has already left: the horse has left the stable. There's very little we can do about the forms of data we really care about, and so data protection sits rather uncomfortably on top of this structure of data extraction and marketing, as a way to make us feel better about not everything being available. Prevailing practices in the surveillance economy, she says, simply brush the claims of privacy and rights aside.

So what are our tools for addressing this kind of problem? Well, several of them really don't work very well. People think about anonymized data. If you click "I agree" on a product or service or website, you often find statements like "your privacy is important to us", or "we're compliant with data protection", or "none of this data will relate to you personally; none of the data is identifiable". Providers are borrowing the language of the law to convince us that their services and products will not violate our rights, whereas in fact the protections are really limited. Anonymization doesn't work very well anymore. The kind of location data that will be collected from people in Helmond, among other things, is really quite personal. And although data can be what's called aggregated up — you could, for instance, take the location traces of everyone in this black box today and look only at the group level, at where all of the people in here are — in fact, given that everyone walks out of here after this lecture and goes home to a separate place, we're all highly identifiable from our mobility traces. Scientists in London have studied how you can, or rather cannot, anonymize people through location data, and they say it is very, very hard not to make people completely unique in a dataset. In fact, the uniqueness of people's mobility traces decays only as roughly the one-tenth power of the data's resolution: even a very coarse dataset with a lot of people in it still doesn't provide those people with anonymity, which is kind of counterintuitive. And so a lot of those who use our data get away with it, because people don't understand how identifiable they are even when their data doesn't have their name on it.
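[Editor's note: to make the uniqueness point concrete, here is a minimal, illustrative sketch using synthetic, uniformly random traces — the numbers and setup are invented, and this is not the method or the findings of the study mentioned above. It shows that a handful of coarse observations of where someone was, and when, is usually enough to single them out of thousands.]

```python
# Minimal sketch (synthetic data): how few coarse location points does it
# take to single someone out of a crowd? Illustrative only.
import random

random.seed(0)
N_PEOPLE, N_HOURS, N_CELLS = 2_000, 24 * 7, 50  # a week of hourly, coarse "cell" locations

# Each person's trace: which coarse cell they were in at each hour.
traces = [[random.randrange(N_CELLS) for _ in range(N_HOURS)] for _ in range(N_PEOPLE)]

def fraction_unique(k: int, sample: int = 300) -> float:
    """Fraction of sampled people whose trace is the ONLY one in the
    dataset consistent with k random (hour, cell) observations of them."""
    unique = 0
    for person in random.sample(traces, sample):
        obs = [(h, person[h]) for h in random.sample(range(N_HOURS), k)]
        matches = sum(all(t[h] == c for h, c in obs) for t in traces)
        unique += (matches == 1)   # only the person themselves matches
    return unique / sample

for k in (1, 2, 3, 4):
    print(f"{k} observed points -> {fraction_unique(k):.0%} of people unique")
```

Real mobility data is far more clustered and habitual than this uniform toy, which makes real people easier to single out, not harder.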
So the protections of data protection are not what we would hope them to be. They really only deal with personal data, meaning data which is directly identifiable to you; if there is no individual identifiability, there are no rules about what can be done with the data. Second, there are a lot of loopholes in data protection. Anything done under the rubric of research, for instance — and a lot of the data collected in the Helmond development falls there — doesn't really fall under data protection rules. The actors involved in that project are in fact saying that they're pushing the envelope on data protection by doing this: they're planning to use data in ways that currently sit outside the law. Data protection is also based on a very old-fashioned idea of who we are with regard to legal protections. It assumes that we know exactly what is happening with our data, that we can make claims about it, and that we can isolate instances of violations of our rights and relate those to legal protections. And I, for one, cannot do that. I don't know if anyone in this room feels they have actual control over their digital data, but it's highly unlikely that we're even aware of the amount of data we're streaming to data brokers and the data market on a daily basis. So data protection is formed around an image of data use that no longer applies, and we need an update quite urgently.

In practice, here's an example of a limitation of data protection: it's being used as a shield by the UK Department of Health for its interactions with the private sector around health data. During the pandemic, Amazon decided to partner with the UK government, using its massive computational infrastructure — servers and cloud services — so that UK health information would come out of Amazon's smart speakers when you ask them a question about health. On the surface, this seems like a great idea: you don't necessarily want American health information coming out of UK smart speakers; you want something relevant to research done in the UK. So on the whole, we think this is probably a good thing. However, there was some litigation. Civil society organizations pushed back and said: Amazon apparently also has a clause in the contract allowing it to use this data for commercial purposes. We really want to know where our publicly funded health research is going to be sold and how we're going to benefit from that. Can you please make the contract underlying this agreement open, so that we can scrutinize it?

And this was the government's response to the freedom of information request. They said: the public interest in the disclosure of our agreement with Amazon is largely focused on the sharing of personal data. And here we see them using data protection as a loophole to get out of transparency. The redacted clauses in the agreement — the things hidden when the agreement was made public — cover unrelated commercial issues, and therefore do not advance the public understanding of the issue of sharing personal data. So what the UK government is saying, in response to a legal challenge about transparency, is: you don't get to see the contract, because it's not relevant to what happens to your personal data. The way we're selling your data is simply not relevant to you, and so you don't get to see the agreement. A lot of people have felt this is really problematic. But arguably data protection law is not designed to cover commercial secrecy; it's only designed to protect data which is entering the market.
And so if Amazon can make the argument that it's not selling individual personal bits and bytes of data that identify people directly, we have no way to scrutinize the involvement of big tech with our health infrastructure — which I find an interesting problem.

This relates to an issue that came up in work I did with colleagues quite a while ago on something we call group privacy — and whether group privacy is a good name for it or not, I'd be interested to hear from you in the Q&A that follows. It's based on the notion that what we're actually interested in is protecting privacy collectively. Traditionally, for privacy law and philosophy, a group is really a collection of individuals: either a corporate body, like a firm, a university or a state, or a collective — people who come together around shared interests, for instance in a political party or on a social network like Facebook or Twitter. But in terms of data analytics — in terms of how big data and AI see us — a group is actually fluid. It changes over time, because it's based on a definition, like "the people on the bus" or "the members of a university". Take the people at this lecture: if somebody decides they're really bored with this and gets up and leaves, and somebody else comes in and decides they're interested, there will still be a coherent group called "the people at the lecture, the people in the black box", but with different members in here. The group can change, while the definition of the group remains the same. This is how big data sees us: it cares about the group, the type, rather than the individual.

So today, personal identifiability is a really problematic way of cutting up and enforcing our rights. For instance, you may remember the scandal a couple of years ago around Strava, a running app — it talks to your fitness tracker when you go running and records where your run went, basically, so it creates maps of people's runs. Strava released all of these maps of runs all over the planet for data analysts, amateur researchers and academics to look at, thinking it provided an interesting geographic mapping of where people like to run. When researchers looked at it, it turned out also to provide an excellent mapping of U.S. Army bases overseas, because soldiers tend to come out of the base, go for a run around it where they're relatively safe, and go back in. So Strava's maps actually provided a way to identify the U.S. bases in Afghanistan with precise geographic clarity — bases that are officially blurred on satellite images because they're supposed to be relatively secret.

So the way these types get used is in profiling and intervening. You've probably heard of data profiling. Excuse me. Basically, as I've been saying, data analytics captures types — usually via proxies for a particular factor of interest — rather than tokens, particular individuals. An example: if we're interested in identifying who might commit benefit fraud, we could use data analytics to flag everybody who receives a welfare benefit, makes less than 20,000 euros a year, and has dual nationality. Does anyone recognize this? It should be familiar to anyone here who is Dutch.
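[Editor's note: in code, such a profile is nothing more than a predicate over attributes. Here is a deliberately minimal, hypothetical sketch — the field names mirror the example above and are invented, not taken from any real system.]

```python
# A profile defines a *type*, not a list of named people: whoever happens
# to match the predicate today is in the group, so membership is fluid.
from dataclasses import dataclass

@dataclass
class Record:                     # note: no name, no ID -- "non-personal" data
    receives_benefit: bool
    dual_nationality: bool
    income: int                   # euros per year

def fraud_risk_profile(r: Record) -> bool:
    """The kind of proxy-based rule described in the lecture."""
    return r.receives_benefit and r.dual_nationality and r.income < 20_000

population = [
    Record(True, True, 18_000),   # matches the profile today
    Record(True, True, 25_000),   # income too high -> not in the group
    Record(False, True, 18_000),  # no benefit -> not in the group
]
flagged = [r for r in population if fraud_risk_profile(r)]
print(len(flagged), "record(s) flagged")  # -> 1, without ever naming anyone
```

No record carries a name, yet the rule sorts people into a consequential group — which is exactly why individual identifiability is the wrong hinge for protection.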
Similarly, you could do risk scoring with non-personal data — data which is not protected by data protection regulations — by assuming, for instance, that everybody who matches a given mobility profile is a political activist: people who go to a particular place once a month, where that place is associated with the meetings of a particular political party, could be assumed to be members of that party. So non-identifiable data still provides a lot of ways to predict behavior — ours, and that of people who are like us.

It gets much more complicated with data from the internet. For instance, you could say that all of the people who have searched for a particular book on Amazon in the last month, who buy a particular type of shampoo, and who drive a particular car will buy a particular thing. This is how data analytics works in the data market. People who have seen a psychologist in the last ten years and who live in a particular area of town — these are the kinds of profiles you find online. They can seem really, really random, but the things that pop up when you search for such a grouping in data analytics mean that you can actually sell things to that grouping. These types of groupings have also started to be used for political marketing, as we know from the Cambridge Analytica affair, and for security groupings — risk analysis from a security perspective. So they go way beyond the commercial sphere now: things which used to be only commercial profiles are used in all kinds of ways we would not necessarily expect.

And this is problematic because this type of data is something you might call ontologically constitutive. What do I mean by that? The philosopher Luciano Floridi gives a great example. He says: the "my" in "my data" is not the same "my" as in "my car" — and that is how data protection sees us, as if you have data that belongs to you that you ought to protect. Rather, it is the same "my" as in "my hand", because personal information plays a constitutive role in who I am and who I can become. This is a very different view, and it goes along with the notion of the group as the constitutive body for protecting rights. Data which have been anonymized, or which never were individual, can still be constitutive: they can still form part of us, be important for identity-building, and matter for what we care to protect about ourselves. They can tell us about the lives of groups, networks, villages, cities, states.

Does anyone have an idea what this is? What is this photo of? It is indeed a kind of war zone. This is a photograph from the Harvard Humanitarian Initiative: a satellite photograph, taken from space obviously, of South Sudan — from before it was South Sudan, when there was a war going on between North and South about territory and separatism. The Harvard Humanitarian Initiative are human rights advocates, and they apply machine learning techniques to satellite images to try to understand where militias have burned villages and committed massacres. They do this in order to demonstrate to lawmakers that they should do something about it. So this is actually human rights data we're seeing here. However, the satellite images they receive are uploaded to their servers maybe three times a day, and they found that each time their servers updated, there were massive numbers of hits from foreign IP addresses. And they tried to figure out what this was.
And they discovered that the North Sudanese government, the aggressor in this conflict, was using their data on which settlements had been burned and destroyed and where massacres had taken place — using these data to target future attacks. It was, essentially, a nice map that the human rights advocates were providing to the aggressor, showing where they hadn't hit yet and where they should send their militias next. My point is that this is not personal data. These are images of settlements burned by people, used for training computers to distinguish whether settlements have been burned by militias, gutted by wildfires, or simply abandoned — for training computers to identify violence, basically, through satellite imagery. Nothing officially defines this as personal data. And yet it is some of the most personal data on the planet: it posed a very immediate risk to the people in the next village. I think this provides a very good example of why group privacy is just as important as individual privacy. We don't know who is aiming for our information, and we don't know what they're planning to do with it. So our existence as part of a type — as part of a group that can be identified with a profile — is as important as our existence as individual rights-holders.

My work looks at how we can take all of this in and try to make sense of it, for the law in particular, so we can get better protections. In order to do that, I take a social justice focus on these problems. There's a philosopher called Nancy Fraser who has written, in really interesting ways, about how justice doesn't work anymore. She says that thanks to globalization, we've seen huge disruption in the way that justice claims are made and understood — and I think some of the examples I've just been giving about groups versus individuals point in this direction. She says deviation becomes less the exception than the rule, because there are disruptions in geography. For instance, when we make a claim against big tech, most of the time we have to make it in the direction of Silicon Valley, not in the direction of our local government. So if Google or Facebook is experimenting on us, there's very little we can do locally to make a claim about it, because these relations now extend beyond our territorial states — technology is transnational. She says also that the subjects of justice claims are individuals and collectives, as I've just been explaining. For the South Sudanese to say "that's my personal data in that satellite photograph, and I'm going to make a claim based on it" is implausible for multiple reasons, one of which being that it's not their personal data: it's about their land, their territory, their neighborhood, and about the villages and the villagers that form their social network. Then: who gets to determine what is just? With regard to that satellite photo, to whom should those South Sudanese people actually make a claim, if they were to attempt such a thing? This doesn't align well with the way we exert human rights claims or data protection claims — it simply doesn't match up. And finally, the nature of redistribution and redress. Data protection in particular is conceptualized around quite simple forms of redress: "we will stop using your data in this way". But what if the harm is longer-term? What if the harm is not going to go away?
What if it's a cultural harm? What if it's a political harm? We're not well equipped to think about these in relation to traditional theories of justice, which really talk about the economic distribution of resources and how to do that more fairly. And finally — something we're all very aware of since Me Too and the Black Lives Matter movement — which social cleavages can be the site of injustice. Again, it's no longer just about economic distribution in justice theories: we have to take into account nationality, class, ethnicity, racialized characteristics, gender, sexuality, disability — the list goes on. People are making claims based on social cleavages that were not considered valid bases for claims before, and intersectional claims are coming up: I'm a woman; I may also be disabled; I may also be a woman of color. It's not currently possible under discrimination law to make a claim along two of these axes at the same time — you have to pick one — and this is causing real problems for people making legal claims.

Here's an example of the problem of geography that Nancy Fraser talks about. Large language models are systems that huge providers like Google use to interpret language automatically. Basically, Google will scrape the internet, use the words and sentences it finds there to train language models — to have computers learn how speech and language work — and those computers can then do things like translation for us, or create writing for us. So large language models are all around us every day; they're there every time we search the internet.

This is Timnit Gebru, and she was fired from Google back in 2020 for saying that large language models were problematic in a different way than Google was addressing. First, she said that the data centers Google uses create huge environmental costs, and that it is simply not fair that the people who suffer most, and most immediately, from climate change — people in small countries, in less well-off countries, people quite remote from Silicon Valley — should bear the costs of these computations. Second, that their outputs reflect structural discrimination. Google is trying to de-bias AI — this was broadly Timnit Gebru's job as an AI ethicist in Google's AI ethics group — but she said: if you scrape the internet, there is no de-biasing that. The internet is a continually replenishing source of prejudice and bias and hatred. It has good stuff too, but there is no de-biasing it: the internet is by its nature biased, there is no way to stop people's linguistic expressions reflecting our culture, and our culture is largely biased. And third, she said that the fact that she wasn't allowed to say any of these things as an AI ethicist was in itself problematic — that the territory for justice claims should be broader than it is, that she should be able to take into consideration the rights of people far away, and issues of bias that Google couldn't see from where it was standing. And she was fired for this.
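[Editor's note: to make the training idea sketched above concrete, here is a deliberately tiny illustration — a bigram word model, which is to a real large language model what a paper plane is to an aircraft, but which shows the same dependence on the corpus: the model can only echo the text it was trained on. The toy corpus is invented.]

```python
# Toy sketch of the idea behind language-model training: count which word
# tends to follow which, then generate by sampling those counts.
# Scrape biased text, and this is what comes back out.
import random
from collections import Counter, defaultdict

corpus = "the internet says what the internet says , again and again".split()

# "Training": next-word counts (a bigram model).
next_word = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    next_word[a][b] += 1

def generate(start: str, length: int = 8) -> str:
    words = [start]
    for _ in range(length):
        options = next_word.get(words[-1])
        if not options:
            break
        # Sample proportionally to how often each continuation was seen.
        words.append(random.choices(list(options), weights=options.values())[0])
    return " ".join(words)

random.seed(1)
print(generate("the"))   # reproduces patterns from the corpus, nothing else
```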
All around the planet, we see groups trying to relate AI to concrete, on-the-ground social justice concerns that have been running for a very long time: migrants' rights, civil rights in general, precarity in labour markets and the gig economy, criminal justice, housing, welfare. These groups are trying to make sure that when we talk about AI, we don't just talk about some very remote notion of bias and unfairness; we talk about the way AI can stop people getting a place to live, stop people claiming the benefits they're entitled to, stop people accessing their most basic rights. This, too, is a move in line with the notion of abnormal justice: we need to move out of the tech sphere and into the social justice sphere to understand why we care about tech and what we want to do with it.

These are South African benefit recipients queuing up to receive benefits at the post office, and this is an interesting example of who gets to decide what is just. A few years ago, South Africa decided to move from people receiving their welfare checks in hand at the post office to people receiving notifications and mobile money via their mobile phones. Initially this seems like a good idea: people shouldn't have to walk a really long way, particularly if they're elderly or disabled, to pick up their welfare checks. But what happened was that the distribution was farmed out to a commercial firm that would handle it via the mobile phone system, because the post office wasn't equipped to do so. So suddenly the state had an intermediary distributing welfare to the most vulnerable people in the country. People would get a text saying: okay, your welfare payment is available in your bank account, you can draw on it now, the money is there. PS: would you like to buy some insurance for your house this month? Or PS: would you like to take out a loan? They were getting these little messages from subsidiaries related to the intermediary distributor, saying: would you like to spend a little bit of your welfare payment on this service? And the welfare recipients quite reasonably believed these offers were coming from the government. They thought: fantastic, the public sector is offering us these really cheap, accessible services — that's great, I'm going to spend a few cents on this. After a little while, some people were receiving no money at all in their welfare payments, because it was all being deducted at source for these special offers from the subsidiaries of the commercial intermediary. They went to a civil rights organization, which sued the government and managed to get the intermediary kicked out of service provision. However, even though you would think justice had been done and people were getting their money back, the post office was no longer able to provide that service, so it couldn't go back to the post office. The state had to keep the intermediary — which by this point had essentially been bankrupted by fines, and whose CEO had quit — as its service provider, because the service had already been digitized. So the notion that we can clearly do justice on these kinds of claims, and that it's clear who should do it, is becoming complicated.

Also complicated: what kind of distributive problem justice is. This is an interesting problem, because we're seeing tech workers themselves rise up against their own firms' definition of justice. Excuse me.
We're seeing a movement called Tech Won't Build It rising up all over the planet, linking the work of the tech firms where these people work to social justice concerns across society. People are marching for women's rights, for climate justice, for gig labor rights. And it's becoming very complicated to understand how we should distribute justice, and to whom.

So the demands of abnormal justice are basically different in that they involve the recognition of new interests, and the representation of those interests, before we can move on to redistribution. Traditionally, justice theories have dealt only with redistribution. But as I'm trying to explain today, when we move into the sphere of technology and justice, redistribution is not our primary concern; recognition and representation are. And we have really very few avenues for those in a paradigm of justice which says: you can protect your own stuff, you can make claims about your own rights, and you can see everything that is going on.

Here is an alternative framing. This slide shows an image from India, where they have the biggest biometric ID system in the world. It's a bit like our civil registry here — when you go to the gemeente to register that you live there, they take various data about you and store it, and when you need a passport, a pension payment or a driver's license, you can draw on that data. In India, they've built a massive biometric database, which has now become the way you get a mobile phone, register for your exams at school, buy a plane ticket — the way you perform basically all the operations of citizenship. And people are protesting against this, because the system is also strongly biased against the poor. It's based on fingerprints and iris scans, which is problematic because people who have been working all their lives in low-income jobs in India very seldom have readable fingerprints, and many have suffered malnutrition, so their irises are not necessarily scannable in the way ours would be. It's often impossible to get an accurate read on their biometrics, and so the system serves the poorest of the poor very, very badly indeed. It's also almost impossible to make claims if you are wrongly treated by the system: if your data was entered incorrectly by the person who scanned you, if something changes in your family situation, or if something goes wrong and you need someone else to pick up your benefits or transact for you. The fact that it's all linked to your fingerprint and your iris, even when they've been read correctly, is really problematic. So people are protesting about this, because the burden of proving that things are wrong is placed on the poorest; the government doesn't take it on.

And what they're doing here is holding a people's parliament about this system, asking: what is the vision of the government and the design of these new technologies? We're going to have a public debate about this. Privacy is now a constitutional right — should we have to exercise privacy against the government? We hadn't thought we would have to do that. They're saying privacy is actually an issue of power and control. It's not just about where our data lives and who can access it; it's about whether you get to exploit us, and to exploit the resources of the country, in particular ways. It's about whether you can do harm to us that is lasting.
It's about whether you can exploit marginalized communities. How are you defining transparency? And this is interesting, because it goes way beyond the classic data protection idea of privacy, which is about bits and bytes and making sure they're corralled into the right space, and that you know what that space is. Instead, these people are politicizing what you might term digital rights, and they're saying: our relationship with the state, as soon as it's mediated by technology, becomes really problematic, and we need to reinvent that relationship so that we remain citizens and not users. We want to be citizens, not customers, they say — just like those people in South Africa who suddenly had a user agreement with their government.

There's a philosopher called Chantal Mouffe who calls this kind of citizenship agonistic. She says we need to shift from a system in which we seek consensus — in which we assume we will eventually all agree on, for instance, how our lives should be mediated by technology — to a system in which people hold radically differing views, some of which are actually irreconcilable. Mouffe suggests we may have to get beyond the notion that we can all agree, particularly about technology, and that instead we should aim for a spectrum on which some people are allowed to refuse certain technologies, some people are allowed to hold fundamental disagreements and keep contesting, and society can still move forward politically and technologically within that framing. We can see Timnit Gebru, for instance, as being agonistic about this — as saying: I will never agree with Google, and I still deserve a place on the spectrum. And what actually happened is that after she was fired by Google, a foundation asked whether she would like funding to start her own organization. So she now runs an organization which is agonistic to Google, basically, and which contests Google's foundational ideas of what is fair, what is just, and what can be encompassed by the notion of doing business.

In my group, we have worked some of these ideas into a theory of data and justice, and these are its three pieces. First, visibility: the notion that your visibility through your data — the extent to which the state and commercial firms can read you via the data you produce — should be beneficial to you. You shouldn't be forced into making yourself visible to those who mean to exploit you or do you harm. It seems so basic, and yet we don't really have protections framed like this right now. Second, that our engagement with technology should be voluntary, not coerced. So if we don't want to use DigiD, we should have a route around it: we should still be able to use paper systems, to go and visit someone at the gemeente and ask questions. Yeah — no, you're right, that's also changing. Claims were made about how we were not able simply to say no to cookies; we couldn't disengage, we only had the option of clicking yes, yes, yes. That's what I mean by engagement with technology: we should preserve our autonomy, we should have the right to say no in every situation. We shouldn't have the right to say no to being part of the state — that's the social contract. We have to allow the state to engage with us, and we have to engage with the state in return; and if we don't like things, we should vote out the government. That's how it works.
But we shouldn't be forced to do it digitally, because then we add other actors into the picture — actors who do not have our interests at heart and whom we cannot vote out. And lastly, non-discrimination, which is related to the issues we've just been discussing: we shouldn't be responsible for making claims about things we can't see and can't identify. If we're being discriminated against through financial technologies, state technologies, welfare technologies, it should not be up to us — and here we come back to the toeslagenaffaire, the benefits scandal — to identify that we've suffered discrimination and to push our case all the way through the system. The government has a responsibility to make sure we are not discriminated against, and it knows it. In the Netherlands this is a constitutional right: it's Article 1 of the Constitution that we are all equal under the law as citizens. So the government has a responsibility to obey the Constitution by ensuring that in our technological lives, too, we are treated fairly and equally. And it shouldn't be up to us to identify when unequal treatment is happening, because very often we won't know and we can't see it.

So if we can achieve these things together, we will move towards a state that I would term greater data justice. But it's also a moving target: new problems will come up every day, every year, every generation. So my group is trying to produce principles which we can carry through over time, and which we can develop into ways to triage legislation and governmental behaviour — to try to understand whether the direction we're travelling in is essentially exploitative and oppressive, or whether we're getting a grip on our digital lives. Thank you very much, and I really welcome your questions and frustrations.