Thank you so much for inviting me here. The talk is going to be about contextual integrity, as you can see. And I'm delighted to be here because I'm more and more convinced that the privacy problem we're all confronting today is not going to be solved by one discipline alone. The only way we will get a handle on it is if multiple disciplines, and even multiple fields within those disciplines, come together and engage. What I strive to do in my own work is to create some bridging between the philosophical and ethical discussion and law and regulation on the one hand, and technology on the other. I've had really the good luck of working with some fantastic computer scientists, both on the theory side and on the more applied systems side, so I really look forward to engagement. I should say I'm a philosopher, you heard I have a PhD in philosophy, and philosophers find it a failure if you present something and nobody disagrees with you. So the hope is that I will leave about 15 minutes at the end of the talk so that we can engage, and you should please feel free to ask questions and debate what I'm saying. [adjusting slides] The theory of contextual integrity. I just arrived, actually last night, from a small symposium, now our second symposium on applications of contextual integrity, where I was really delighted to see that people came from different disciplines, brought different methods, and continued a conversation, pushing out some research. In the talk I'll wave a hand at some of what we've been doing, some of which I'm involved in myself, but a lot of which, I'm pleased to say, does not involve me. So what was my way into privacy?
A lot of philosophers got into privacy in the 60s and onward because this word had been emerging, and many people engaged in the question of how you define privacy. My entry was not that. I was a PhD student at Stanford and I was seeing the emergence of many of these technologies, and of course this is all the more relevant today. I was interested in the technologies and some of their applications that were affecting our lives in all walks of life, and particularly interested in those occasions where people said, oh my god, my privacy is violated. Whatever was happening, the freak-out that people were having as a result of it, they labeled privacy. So I wasn't so much interested in a philosophical deep dive into the concept, as valuable as that is; that is what a lot of analytic philosophers do, and it is valuable to gain clarity on terms. Rather, I was trying to look at the family of episodes or systems or occurrences to which people responded with this particular freak-out, labeling it in this way. So what I've tried to do over the years is to ask: how do we achieve meaningful privacy?
That is to say, defining a concept that's true to that experience, and defining it in a way that lets us argue that whatever people were caring about when they said that was worth defending. Sometimes people freak out and you say, stop freaking out, that is not worth defending; but it seemed to me there was something there. So I wanted an approach, some definition of whatever this thing was, that was not only clear and hopefully rigorous but could also offer an account of why we should care about it, why we can advocate and protest and have a reason to defend it. And then the ability to give it formal expression, so that there could potentially be some kind of implementation and enforcement, was an added bonus, because then we might be able to embed it, to follow what has later been called privacy by design. I want to take a moment to acknowledge many of the people I have worked with over the years, and there are probably even more names, because, as I mentioned earlier, I don't think one discipline holds the crown jewels and can solve this problem by itself. These folks come from all walks of life, and you may recognize some of them. The way I want to introduce contextual integrity to you is to highlight four key ideas. I like to think it's a big and complicated theory; there's a book if you want to delve into the details, and now there are a bunch of articles where I realized I made mistakes in some of the previous work and have tried to correct and amplify and so forth. But what might be useful is to present these four key ideas, so that even if you're not bothered with the details you might still find it useful to know what these four ideas are.
And in addition to knowing the four key ideas, I want you to keep in mind what these compare with, how contextual integrity as a theory of privacy, an approach to privacy, is differentiated; in some ways it's subtly different and in some ways it's really different. So this is the main idea, and if you need to leave after this, this is the most important thing to remember and take away from this lecture: contextual integrity, or privacy, is the appropriate flow of information. There's both a positive claim and a negative one. The positive one is to say, as we all know, information and data flow is really important and there's a lot of benefit; it's the way society operates. If you want to live in society, then information about you is going to flow. So if you want to protect privacy, you don't want to stop the flow of information. You don't want privacy to mean the same as secrecy, which is the stoppage of flow; you want privacy, rather, to be the appropriate flow of information. Now this runs against some of the ways I've heard my computer science colleagues talk about privacy. I've just spent about six months at the Simons Institute, and I had really fabulous discussions with many of the theoretical computer scientists there. Often computer scientists will present a theory or a tool or a method and they'll talk about leakage. They'll say, how do we prevent leakage? Contextual integrity doesn't like that word if you use it on its own, as if leakage is in itself a bad thing. Leakage is not a bad thing; it's only bad if it's inappropriate. Even some of the privacy engineering community have a principle called data minimization, and contextual integrity says that data minimization on its own is not necessarily a good thing: you need to minimize appropriately. So if you start off with that idea, appropriate flow, that is where we begin with the definition of contextual integrity.
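To make the contrast with secrecy concrete, here is a minimal sketch, not from the talk; all the rule patterns and flows below are invented for illustration. It contrasts privacy-as-secrecy (block every flow) with privacy as appropriate flow (permit a flow when it matches a norm):

```python
# Illustrative sketch only: privacy as appropriate flow, not stoppage of flow.
# The patterns and flows here are hypothetical, not from any real system.

def secrecy_policy(flow):
    """Privacy-as-secrecy: stop every flow. Too blunt; society needs flow."""
    return False

def contextual_policy(flow, allowed_patterns):
    """Privacy as appropriate flow: permit a flow iff it matches some norm."""
    return any(all(flow.get(k) == v for k, v in pattern.items())
               for pattern in allowed_patterns)

# One hypothetical norm: patients sharing medical info with their physician.
allowed = [
    {"sender": "patient", "recipient": "physician", "info_type": "medical"},
]

flow_ok  = {"sender": "patient", "recipient": "physician", "info_type": "medical"}
flow_bad = {"sender": "physician", "recipient": "marketer", "info_type": "medical"}

assert secrecy_policy(flow_ok) is False          # secrecy blocks even good flows
assert contextual_policy(flow_ok, allowed)       # appropriate flow permitted
assert not contextual_policy(flow_bad, allowed)  # inappropriate flow blocked
```

The point of the sketch is only the shape of the decision: appropriateness is relative to a set of norms, not a blanket allow or deny.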
Now of course you're sitting there thinking, well, what's appropriate flow? And now we enter the next key ideas of the theory. One, and I haven't miscounted, there's an interesting connection among these four ideas: there are two definitions, or accounts, of appropriate flow. The first is that appropriate flow is flow that conforms with social norms, and in the legal domain we often hear of things like reasonable expectations of privacy. When we talk about a norm, I've come to understand that computer scientists and mathematicians use the term in a different way, and a lot of the disagreement and confusion was that I was using the term norm in one way and folks were understanding it in another. So if you prefer, it could just be rule; think about it as a rule. We live in society, and there are lots of norms governing our behavior, governing it even at this moment. Some of them are explicit; some are embodied in law, so you'll often have a legal rule that embodies a social norm; sometimes you'll have legal rules that are just legal rules; sometimes you'll simply have conventions, expected ways of behaving, and so on. One of the claims: in the early days of Google Maps Street View there was an outcry because people didn't like what the cameras showed, and they objected and said it's a privacy violation. The defenders of Google Maps Street View said, but we're only showing things that are public on public roadways. In fact the argument is, no, there actually is a difference in the flow of information caused by Street View, because you no longer have to be standing in front of someone's house, and so on. That particular technology had violated people's expectations, and often the expectation that's violated is a signal of a norm, a social norm. What I will come to later is the fact that appropriate flow has a normative
content, that is, an ethical or prescriptive content. You will often call something appropriate if you think that it's legitimate, so there needs to be an idea in there that also explores appropriateness as a normative concept. I'm going to start with this idea, which is that appropriate flow is flow that conforms with norms or rules, that meets our expectations. And you see this little word, entrenched, and it's quite an important word, because there are lots of norms, and I'm sure I could identify some of them, that are entrenched expectations. Nobody wrote the rule down, but we behave in ways that we learn, that we absorb from society; those are the norms I'm calling entrenched, and those are the ones that meet expectations. So we say appropriate flow conforms with entrenched norms. What this goes against, for those of you somewhat familiar with privacy policy, are the Fair Information Practice Principles, which were devised in 1973 and then ultimately embedded in the first Privacy Act of 1974, which was basically a tragedy: although the commission in 1973 said, here are the Fair Information Practice Principles, we need to embody them in an omnibus privacy law, what happened instead was that in 1974 the law was passed but it only covered governmental databases, leaving out the whole commercial domain. That was 1974, and people maybe didn't have a sense of what was to come. Anyway, this approach goes against the Fair Information Practice Principles and something we're all very familiar with, the informed consent mechanism, often operationalized as notice and choice. We call that a procedural approach to privacy: it says that if you follow these and these steps, then you're going to achieve privacy. The Fair Information Practice Principles are really the foundation of where we are today in privacy regulation. Now some people may say, oh, there's US law, and then there's GDPR, the European General
Data Protection Regulation; but GDPR is also still based on the Fair Information Practice Principles, although it has added some substantive prohibitions to those rules. This approach just says, if you present people with a policy and people agree, then privacy is protected; it says nothing about the substance of what the practices are. Contextual integrity rejects that approach: it says we need substantive rules about what flows are and are not allowable. So why contextual integrity? Here we're diving a little more into the theory. My own approach, and you can see it in the phrase entrenched contextual norms, is to think about society not as an undifferentiated social space but as constituted by a whole variety of different social domains. These domains are oriented around, or you could even think that historically and anthropologically they emerged around, certain purposes, goals, and values. We want to educate the young, and what has evolved around that is an educational domain. We have health, and what's emerged around that is a healthcare domain. There could be a political domain: how do we choose our rulers, our political leaders, what sorts of powers do they have over us, and so forth. Although I personally had done a lot of reading in social philosophy and social theory, I didn't hang the theory of contextual integrity on any one particular account; there's a lot of social theory that constructs society in this way, as constituted by these different domains. Now, this is number three. What we're saying is that these are contextual informational norms, and if information flows, or practices, or policies conform with entrenched social norms, we say these flows are appropriate. What contextual integrity goes on further to say is, here is the structure of the rule. In order to formulate a rule correctly, a well-formed rule for the purpose of knowing whether privacy is preserved, and I feel like this
community maybe understands this idea better than many communities I try to talk to about this, you need to provide values for five independent parameters, or three, depending on how you count them. For short, this is the CI tuple, someone told me that's how you say it: Actors, Information Type, Transmission Principle. So when you're describing a practice, and you need to figure out, or assure someone, that it's appropriate, that it conforms to contextual integrity, you need to specify the sender, the subject, the recipient, the type of the information, and the transmission principle. I'm going to say a little more about all these parameters, but I wanted you to see one quick example of what such a description or rule might look like with values filled in for all the parameters. So here are an assortment... now let me just say that often the subject and the sender are one and the same, so it might seem like only four parameters are listed, but in fact the two are identified, and so it's not a problem. These are contextual informational norms. The values for the parameters are expressed in terms of roles, so it's people, not just people, but people acting in certain capacities, and the capacities are very much constitutive of the domain itself. If I say physician and you ask what's the domain, what's the context, thank you, it'll be healthcare; if I say teacher, it's education. And sometimes it's less...
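The five-parameter structure lends itself to direct expression. Here is a minimal sketch in Python; the specific norm and flows are toy examples of my own invention, not taken from the talk's slides or from any real regulation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """The CI tuple: five parameters describing an information flow."""
    sender: str                  # role of the party sending the information
    subject: str                 # role of the party the information is about
    recipient: str               # role of the party receiving it
    info_type: str               # attribute from the context's ontology
    transmission_principle: str  # constraint under which the flow occurs

def conforms(flow, norms) -> bool:
    """A flow is appropriate iff it matches some entrenched contextual norm."""
    return flow in norms

# A hypothetical healthcare norm: patients may share their own medical
# history with their physician, with consent as the transmission principle.
norms = [Flow("patient", "patient", "physician", "medical_history", "consent")]

ok  = Flow("patient", "patient", "physician", "medical_history", "consent")
bad = Flow("physician", "patient", "advertiser", "medical_history", "consent")

assert conforms(ok, norms)
assert not conforms(bad, norms)
```

Note how, as in the talk, subject and sender coincide in the conforming flow, and how the second flow fails even though the transmission principle is consent: all five parameters matter, not just one.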
I'm not saying it's all neatly tied up, but it's not that the context implies the roles, or maybe the purposes imply the roles; it's that the roles are part of what constitutes a context. Information type is the ontology of attributes, and again, to some extent the attributes are connected to the identity of the context. And then finally the third element, which I have to say I think is a contribution, but is also the most troublesome of the parameters, because there's lots of work that still needs to be done on it: the transmission principle is the constraint under which the information flows. What you can see here is that consent is a transmission principle under this structure: when you share information with consent, that is the transmission principle. But when you are filing your tax returns and you have to fill in that field that says gross annual income, that is not with consent; you have to, there's coercion. Or you're standing in front of a judge as a witness: you must answer in a certain way. And there are sometimes more complicated transmission principles. For example, the police can't just barge into your house and go searching around if you're a suspect; they need to get a warrant from a judge, and then they can enter with a warrant, and that is a transmission principle. Often in the privacy domain people forget about the transmission principle, and as we can see with something like a warrant, it is a critical transmission principle; we depend on it to live in a system where law prevails. Okay, so all the parameters matter. Here was one of the excellent experiences I had: I was part of a large NSF grant, and I was presenting early ideas from contextual integrity many years ago. John Mitchell was in the audience, and he said, oh wow, we think we can provide a formal expression of these ideas. So what I've shown you is already digested through a lot of the help I got from the folks on this paper, and there was an
Oakland paper that was really successful. In sitting with these collaborators, they would just ask me questions, what about this, what about this, and I think the theory became much more rigorous and precise as a result. What we see here, which was rather satisfying, and this was due in large part to Anupam Datta, is that if you look at actual law or actual rulemaking in the United States, a lot of the rules take this form. This comes directly from HIPAA: a covered entity can disclose a patient's psychotherapy notes to the patient only with prior approval from the patient's psychiatrist. You can see all the parameters listed in this rule that Health and Human Services came up with. I like to present this rule because it shows that in some cases it's not actually the data subject that fully determines the information flows. So, to pause a moment, the way this all works is that if you're interested in analyzing a particular device or system according to the theory of contextual integrity, you need to describe data flows, and this is quite important and practically relevant. I don't know how many people in this room have developed apps, but we've used this in very low-key settings where people are developing mobile apps, and we say, okay, first you need to map out the data flows and make sure you're able to provide the values for these parameters. Once you have a description of the flows, you can do a comparison to check that the flows conform with entrenched norms. Now, to get back, and I'm going to say more about this, I wanted to elaborate a little more on what it means to subscribe to the idea that appropriate flow is flow that conforms with entrenched contextual rules. We've already said that the most universal approach that this country, and GDPR too, but that's a longer story, has taken to privacy protection is through this concept of subject control of
information about ourselves. I don't know how many of you have been following the New York Times Privacy Project. I think it's a great series of articles, but we started counting, and at least 20 of those essays, I stopped counting after 20, end by saying that what we need to do is give people control over information about themselves, and then the only way we seem to have to do that is notice and choice. That's a problem. I argue, on the basis of contextual integrity, that saying the whole arena of privacy is control over information about ourselves is to take control, which is one transmission principle, and make it into the entire domain of privacy protection. This is highly problematic, because I argue there are multiple transmission principles, and sometimes we actually don't have a right to control information about ourselves. Second, another approach to policymaking is to divide the world of information into sensitive and non-sensitive information and to say privacy regulation only applies to sensitive information; for all the rest, anything goes. Contextual integrity says no: any information, and here's my claim, any information is bound by norms that constrain its flow. It can be something as simple as your name, and we've now done empirical research that validates that idea. Finally, if you're building some kind of access control system, you need to be thinking about the values for all five parameters, not four, not three. To argue that we only need to worry about sensitive information is problematic, because first of all you're only worrying about one parameter, and second of all, instead of looking at the whole ontology, you're just dividing the world into two types of information. Now, when I say that access control rules have to govern all five parameters, the practical situation is that sometimes access control rules make
assumptions about the outside world, and it may be that you're not able to represent or model all five parameters within the system. You may need to make an assumption: for example, let's say you're building something that's only about men's health. Then you may not need to specify, well, I don't know, you may need to, but if you know the system is going to be used within a certain domain, and only certain parties are going to have access to it, maybe through some kind of authentication mechanism, then you can get away with letting that external mechanism already constrain, for example, who the recipients are, and the rules within the system don't need to specify all five. So when I say you need to specify all five, I'm realistic about the fact that sometimes you can't represent all five within the system and you need to control some of those parameters outside it; anybody who's modeled anything knows this kind of restriction isn't unique to this setting. This slide just says, here is the work we still need to do, having gotten this far with contextual integrity. I talked a lot about entrenched norms. Some of these entrenched norms are quite intuitive, and some are expressed in law, so we can figure out what they are. But in this domain, some of the controversy surrounding the platforms, for example, or mobile apps, is a mess, and some of the more predatory data practices take advantage of the fact that we as a society haven't done the research that teaches us what the robust implicit norms are. So here's the color code. Red is what I think is mainly in the domain of social scientists and communication scholars and so on, for us to learn more about what our expectations are, and there are tons of studies now, Mechanical Turk being the favorite platform, trying to extract and be more precise about what some of those norms are. The blue is what I
think is mainly in the domain of computer science, and it could be systems people, security folks, networking, theory, crypto: to think not only about the standard case of Alice, Bob, and Eve but to appreciate, and I'm sure you do, some of the much more complex circumstances. At the moment I'm working with some folks on smart or autonomous vehicles, and if you think about those vehicle communication systems, I remember, way back about a decade ago, working with Dan Boneh, who had a really clever cryptographic scheme whose point was: you don't really need to know who the driver is, but you need to be able to identify the car for purposes of coordination. That, I think, is what this community is really expert at: providing ways of constraining information flow that respect the norms, that respect the constraints, while allowing systems to continue functioning and not grind to a halt. And then the purple, which is the red and the blue together, is where I think it's really important for us to work together on a common problem. Now, if you're thinking just with the lens of privacy as control, your life is a little easier: when you operationalize control, there's just this one thing you have to operationalize, which is giving control. And of course, as we all know, the notice and choice regime is highly ineffective, and we're stuck with this policy because it's very convenient for the information platform providers and the mobile platform providers; it pretty much allows them to do anything. What we need is unfortunately more complicated, but I think it would give us much better privacy. The work we're doing within our group includes empirical studies that question some of the age-old assumptions about privacy; I don't have much time, so I'm not going to go into it. And here's another project, where we have a platform that tries to govern information flow within
an institutional setting, so we don't have to take care of a ton of parameter values, and there's just lots more to do; as I mentioned, other people are also doing this work now. It's hard to follow a particular talk the first time you encounter the ideas, but there's something that may really be bothering you, and that is that the whole reason I'm here today is technology, and the technology is disruptive. A lot of this technology that we all love, or love to hate, has disrupted information flows; that's number one. And number two, it may be that we have flows that we just don't have norms for, and they're troubling, but there doesn't yet seem to be common agreement about what the right thing to do is. So basically, whoever is more powerful, or whoever has the neat app, is going to be the one that defines the flows, and we just sit around passively hoping that nothing bad is going to happen to us. And there are a whole lot of other reasons why the entrenched approach, on its own, is problematic. In the early days I had not actually developed this part of the theory, because I was really in love with the idea that society has these entrenched norms: we look at the Hippocratic Oath, we've evolved these norms over centuries, there's the wisdom of the ages, and we should go with that. But you can't be conservative in that way and say no change is possible, that everything is a privacy violation if it violates an entrenched norm. We need some approach to march into this new domain and be able to evaluate not only our own entrenched norms, slavery is an extreme example, but also new practices, new institutions that are being thrust on us, where, let's say, Mark Zuckerberg says, oh, Facebook has changed the norms, and we're sitting there saying, no, you've changed the practice, but on what basis are you saying that the norm has changed? And so now, coming back to
our diagram, we're going to look at number four, because we now have number three: how do we look at some new practice, or even an entrenched norm, and evaluate whether it's okay? I use the philosophical concept of legitimacy, but it can be something like: is it worth defending, is it morally justifiable? Contextual integrity offers three steps for thinking about this, and since I have only a few minutes, I'm going to go a little quickly. First, interests: we need to do a kind of economic analysis when we look at these new practices and ask who's harmed, who benefits, standard economic policy analysis. Then there's a discussion of political principles: does this diminish freedom of speech, and we've seen a lot of analysis of this in the media, is it biased, does it create unfairness? That's the second line. But it's really the third line I want to emphasize, which comes directly out of the contextual approach, which, as I said at the beginning of the talk, holds that contexts are constituted around purposes and values. Imagine a Martian landing in a university and sending back a description of what a university or a school is that describes everything going on but doesn't say what we're doing in a university, what a school is trying to do, what the purpose is. I think we would all admit they're missing a major point about these basic institutions of society. So what contextual integrity argues is: yes, you discuss interests; yes, you talk about some of these highly abstract values; but what's also important is contextual ends and purposes. Here are just a few; I'm not going to go through them, but I wanted to show you that this is an intuitive idea. In 1925, tax returns, the tax information we provide to the IRS, went from being public information to being information held in enormous confidence, as we've been learning
increasingly in the past year and a half, by the IRS. Andrew Mellon's argument was not that people have a right to privacy in their tax returns; his argument was, we want people to say honestly what they earn and what they're worth, because the Treasury benefits from taxing them accurately, and one reason people might want to hide their earnings is that they're being blasted to the world. Imagine your large family learns that you're really quite wealthy, and how many people will come hit you up for loans and whatnot. That argument succeeded. We say the same about healthcare: someone with a contagious or infectious disease may not report for testing, and that's problematic not only for the person but for societal health, so we insist on very stringent privacy regulation in healthcare in order to promote health. Now, here's the contrast: this is different from some other approaches, which insist that privacy is only about the data subject. Actually, for those of you who stayed, there are two important points to take away. The first is that privacy is not secrecy; it's appropriate flow. The second is that privacy is valuable not only for the individual but for social integrity as well: privacy promotes contextual ends and purposes, and that's why it's important. So all those times you hear people say, oh, you've got to give up some of your privacy in order to improve health, now that you've sat in this room, you have to reject what they're saying, because an adequate approach to privacy will say that an appropriate flow of information has to be shaped so that it serves individual interests, but is also well designed if it promotes the contextual ends. We've been looking at examples of that in politics and education and so forth. And third, a lot of people who do differential privacy will say, you
know, privacy versus utility, and I hope we can get past that, because privacy is not instead of utility; privacy can serve utility. You just have to stop thinking about privacy as secrecy, as stoppage of flow, and instead as appropriate flow, and sometimes that will serve utility. I wanted to bring this up because many of us have seen these images. Through the lens of contextual integrity, the sin of Cambridge Analytica was not that information was shared without the consent of the data subjects; it was a problem because it destabilized democracy. And when we try to solve it, we don't say, hey Facebook, next time ask people, because you know that 95% of people are not even reading and are just going to click. We need to think about how this is going to affect these deeply valued purposes. Here are just a few examples, which I'm not going to go through; this is my last slide, and I can come back to them if you like, but I did want to leave some time for discussion. The heuristic we've come up with for practical purposes is: discover the relevant contextual norms of expectation; map flows in terms of the parameters; check conformance; and perform a legitimacy analysis. Now, having built the big picture, we still subscribe to what I said earlier about what contextual integrity needs in order to be a functioning theory. People like myself and legal scholars have to understand, for example with Facebook and Cambridge Analytica, the consequences of these flows for individuals, for social values, and for some of the constitutional commitments, in the case of the US, the values of the Bill of Rights to which we're committed; and presumably different countries and different cultures would have different norms. We need to move beyond this obsession, and I can say that the number of articles written about analyzing privacy policies and
compliance and so forth is kind of mind-boggling and I don't think in the end it's going to serve privacy very well we need in my opinion economic and game theoretic methods for figuring out what optimal policies or substantive rules because sometimes you know just very un-creative laws are not necessarily going to give us the appropriate flows that are legitimate and important and then of course I think back to what we can do in the computer science and crypto community is help us figure out smart ways to enforce flow policies and express them in technical systems and devices oh I guess that's the end thanks a lot is this on? so it's an interesting way of thinking about things and thank you for the talk a problem that we face is that we do need control because although you can talk about appropriate flow there's a problem of preventing inappropriate flow that's very hard some other complications are the lack of permanence the rules change the data is owned by new people and the interpretation of appropriateness disintegrates over time and then there's a monotonicity principle the amount of information that is disclosed about you simply increases over your lifetime so maybe it's a secondary part of your theory to address some of these other problems I'd like to know your views no those are all very good points when I talk about control and maybe I wasn't clear enough I mean control about information about ourselves by the data subject and what I was objecting to is that where we are today for the most part is that we think that a bilateral agreement between the data collector and processor with the data subject is what privacy amounts to and the argument here is first of all that is not going to be for the benefit of the individual especially for the reasons that you raise about how complicated the data environment has become and second of all because there's the societal impact and sometimes people don't have a right to control information about ourselves because 
it's not beneficial for promoting societal ends and you may disagree with me on that score because you may just think that on the issues of information about ourselves it's sort of like nobody's going to tell me who I can marry for example it's just fundamentally my choice then we just you know then we part company I mean I'm just responding to some of your points the question of change is precisely what that second part of the theory tries to address because norms also change over time circumstances, hurricanes we become smarter we become hopefully more ethical we have to have a system that allows this change to take place and this is an attempt to give some kind of systematic approach to that Thank you very much for your talk a very refreshing view and I think much more productive than a lot of the discussions I've seen so far I'm a bit confused you seem to focus very much on the flow of data and to me the use of data is a much bigger issue than the flow of data maybe you would encompass it in the flow but that wasn't it? 
Thank you for raising this very difficult, important question. It's a debate that started early. The theory describes these norms, which are norms of flow, and the critics, even friends, said, Helen, you've got to do something about use. And I said, I don't need to do anything about use, for two reasons. One is that a role like physician implies use: in saying physician, we have some implicit assumption about what the physician is doing with that data. And second, the outcomes of use we can capture when we look at the evaluation of the norms against interests, values, and contextual purposes. Which was fine until about five or ten years ago, when actors, large platforms and information services companies, really blossomed and became these global giants, accumulated data from all walks of life, and didn't think twice about just pulling all the data. They didn't have a fixed role. In nineteen... I don't know when it was, '25, '85, there was the Telecommunications Act in the United States, and it defined the role of a telecom provider: this is what it is, and by the way, here is how we're going to regulate your data flow; you can't record calls, you can listen to a few snatches for quality of service, you can't record metadata (I'm just calling it metadata). So they stipulated that, and we're living with it. The problem is that because we don't know these parties, they're just using the data in multiple different, and I'm thinking non-contextual, ways. So I now agree: yes, I think we now have to have a use parameter, and I'm working with people on how we're going to do it.

I actually think that a lot of your theory applies to both things, and maybe instead of talking about flow we should talk about processing of data, encompassing both the use and the flow. Or you can define flow within an organization, but that's very easy to misunderstand.

Yes, and how to do it is still something I'm thinking of, but because of where we are today, we need to have a use parameter. Thank you.

Thanks. So, actually, just to follow up on that: first, I very much appreciate your perspective. I think it's great to think about it as flow, as a contextual thing, and to think about data as something that doesn't necessarily belong to one person or another; I think that's the right way to think about it. But I think use has at least two different components that have to be addressed. One is the aggregation of data: the whole is much more than the sum of the parts, and no theory will be meaningful without addressing that, as practice has shown us. The other thing is that the flow of information should be thought of as circular, because it also comes back to us; it's not just what flows away from us but what comes to us. When I've been shown ads that are biased or whatever, that's not what goes out, it's what comes in, and the whole thing is one big ecosystem; you cannot talk about one without the other.

I won't address the coming-in part, but I do want to mention two things. I have an article, if anyone's interested, called "Contextual Integrity Up and Down the Data Food Chain," where I look at how we have adapted to this problem that you mention (everyone doing machine learning and so forth, this is really relevant): sometimes we can adjust norms by learning from what we see. I think we were very good about the social security number back in the day, because we saw that it accumulated a lot of information around it, with which one could do all sorts of things, and then suddenly everybody was saying, you're asking what's my social security number? Stop, because we realized how revealing it was. However, I think we're moving at such a speed that in cases where the norms can't catch up to the practices, precisely because of what I'm calling the data food chain, flow won't help; we need to go directly to purposes. We need to say: is it worth it that companies are surveilling every move we make online, and now increasingly, with smart cities, in physical space? Is it worth it, so that I can get an ad, even one I'm interested in, to expose myself to this entire system? That's the kind of question. Is it worth doing if it's going to stop plagues? Maybe. We need to do it responsibly; we need to have that discussion of what's worth it, and I don't think we are having it yet.

My question is related to the information-processing issue. I'm wondering how you deal with rules that might be contradictory to each other: you can imagine that we think it's okay for information X to be released to someone, and we might also really not want information Y to be released, without realizing that Y can be inferred from X. How would you address that kind of issue?

This is where having the additional... it's a big problem, and one of the issues about having these platforms gathering information 360 degrees. We have this political economy of information at the moment, which is basically: if you own it, you can do whatever the heck you like. So you can buy a blood-processing lab while being a consumer-products company, and there's no sense that there should actually be rules, even within single ownership, that stop that kind of processing and merging of the information. It's sad, because it would take a lot of rhetorical force to push the dial back a little. But there's a little help you can get from extra parameters, which is to say: I'm prepared to give X to A and Y to B, but I know that if they merge this data, like data brokers do, we have a problem. And if we think contextually, in terms of ends and purposes and values, I think we can protect against some of it, but not all of it. So, thank you.
I don't know who was next. Really interesting talk, thank you very much. One of my questions, though, and perhaps it's because I'm misunderstanding: when you talk about the attributes, the sender, the recipient, and the subject, especially with your example of the psychiatrist, you didn't talk about data origin. In that particular example the psychiatrist was the origin of the data and so, clearly, should have controlled the notes. So I'm wondering why data origin isn't part of those attributes, because in some cases it's the method of collection and other things that are being protected.

I do think there is a kind of misleading way in which we think about data as just stuff lying around that people go and gather, when in fact data is very much a construct. And it's strange you should say it, but a few of us had a small gathering and we started talking about origin. It may not satisfy you, but if you look for it, there is a publication, I think it's out already, where we did talk about this. It's not to say that the originator necessarily owns the data, but the origin of the data is a relevant question to ask and can affect the flows. I think you're right.

So, thank you for your talk. We were discussing how contextual integrity chooses not to see society as one cohesive whole, I think in your second point, and I was wondering how this interacts with the lawful side of things. Does it, for example, advocate that different groups who have different norms about appropriate data flow should be subject to different laws? Because it seems like that could be a bit of a Pandora's box.

I didn't hear every word you said, but I'll try to answer, and then you can tell me if I got it right. This is an interesting question, because many people will say that Europe is much more advanced in thinking about privacy than the US, because they have this kind of omnibus approach: one regulation, one size fits all. Actually, it also devolves to something called purpose binding, and purpose can sometimes map into what I call context. In the US we have what's known as the sectoral approach, where we do in fact have sectoral laws; they're just bad. The problem is that they're bad, but the general idea of having a slightly lower-level but more specific set of regulations for each of the major domains I think is important, and the reason is that someone like myself, a privacy theorist, shouldn't be asked what the rules should be for finance or healthcare or education; the people writing those rules need to be experts in those domains. So that's the positive spin on thinking of society that way. And law does it too: there's commercial law, there's private law, there's public law. Even though we have the Constitution, with some very high-level abstract values to which the other laws need to conform (you can bring challenges to laws based on conflicts with the Constitution), I think privacy rules are a little bit like that. I would like to see those sectoral rules improved a lot, and not always have this little switch that says: these rules hold, et cetera, except if the data subject gives permission for something else. If you look at all the laws we have on the books, they have this weird exception; probably not in some of the national security regulation.

I'm just curious about something which was on one of your slides: what's the problem with the number of spoons of sugar in your coffee? What's what? On one of your slides there was a problem of transmitting the number of spoons of sugar in your coffee. I did? What else did I do? It was just before the sexual orientation one.
And I thought to myself that I'm just doing this kind of thinking. In the work in which I challenged the dichotomy of public and private, I was arguing that there are actually many different dichotomies: there's information that's personal, and there's information that may be sensitive. That was before I went on to these contextual ontologies. Something like how many spoons of sugar you take in your coffee, you could say, is personal, but it's not sensitive, unless it's a health insurance company learning it, in the sense that if somebody learns it you could face harms. This was early argumentation against the claim that you can divide the entire world into the public and the private. I'm sorry, I just blanked on that one.

I think you're not addressing the real problem; this is the deck-chairs-on-the-Titanic problem. We're asking detailed questions while the boat is taking on water, and we need to address Zuckerberg.

How are we going to do it? We're a country of laws, so we need to provide the way. The reason I use the Cambridge Analytica case is that I want to excite people into realizing that it's not just a question of me personally controlling, but of what some of the companies are doing. Facebook is very visible, but I think data brokers are a case in point, and the mobile operating systems; there are quite a few companies in this space whose data practices can undermine democracy. Now, if our regulators cannot respond to that, I think that's a problem, and that's the sort of thing we need to communicate, because that will get us a foot in the door. Anyway, thanks. Yeah, I know, go for it. Thank you very much.
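The four-step heuristic described in the talk (discover contextual norms, map flows in terms of parameters, check conformance, then perform a legitimacy analysis) lends itself to a toy formalization. The sketch below, in Python, is only illustrative and is not part of the theory's formal apparatus: the `Flow` fields follow the talk's parameters (sender, recipient, subject, attribute, transmission principle), while the class, function, and norm names, and the example medical norm, are all assumptions made for the sake of the example.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """One information flow, mapped in terms of the five CI parameters."""
    sender: str
    recipient: str
    subject: str
    attribute: str
    transmission_principle: str

# A "norm" here is just a predicate over flows for a given context.
def medical_context_norm(flow: Flow) -> bool:
    # Illustrative norm: health records may flow to a physician
    # under a principle of confidentiality.
    return (
        flow.attribute == "health_record"
        and flow.recipient == "physician"
        and flow.transmission_principle == "confidentiality"
    )

def conforms(flow: Flow, norms) -> bool:
    """Step 3 of the heuristic: does the mapped flow match any entrenched norm?

    A non-conforming flow is not automatically wrong; it is flagged for the
    legitimacy analysis (step 4), which weighs contextual ends and purposes.
    """
    return any(norm(flow) for norm in norms)

ok = Flow("patient", "physician", "patient", "health_record", "confidentiality")
bad = Flow("physician", "advertiser", "patient", "health_record", "sale")

print(conforms(ok, [medical_context_norm]))   # True
print(conforms(bad, [medical_context_norm]))  # False
```

Note that the sketch deliberately stops at conformance: as the talk stresses, the legitimacy analysis of a novel or non-conforming flow is a substantive, domain-expert judgment about contextual ends and values, not something a predicate over parameters can decide.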