have a display. So I have a lot of beautiful slides, a lot of which I wrote on the ten-hour flight over here, and maybe I'll post them online or something so it doesn't all go to waste. So we're going to have to do this more verbally. Maybe I'll hit up this whiteboard if I have the courage to draw on the fly. But anyway, I'm Johnny, and also, excuse me, is a tech guy coming? Yeah. Okay, so we'll see what happens there.

So I'm Johnny. I work at uPort with these lucky people here. Today I wanted to talk about the UX of privacy and informed consent, and how exploiting users' irrationality can lead to a situation where we end up opting into a surveillance state that we all say we don't want, but accept voluntarily; something different from a state-run surveillance apparatus like the Chinese social credit system. For most countries in Europe and America, the bigger threat is run by what's been called surveillance capitalism: the idea that the underlying principles of the advertising model can produce a worst-of-all-outcomes situation where everybody is surveilled, and they opt into it voluntarily, even though we say we don't want it.

I'm actually going to look at these slides just so I follow the plot. I had a very clever intro where I showed you a fake application, a DeFi credit score, where you connect your Facebook to it, and then I go on to tell you how this is the exact flow, in a different context, that was exploited for Cambridge Analytica. What happened there was not a failure of any privacy technology. It was a failure on the part of product designers, or maybe not a failure, maybe they got exactly what they were designing for, which was getting users to give up more data than they were comfortable with, using what in UX you would call dark patterns.

There's also a slide here about what it would have looked like to try to convince a user not to opt into this surveillance. In the Cambridge Analytica example, if you're not familiar with it, users were taking an online personality quiz. You get that Facebook login screen that says "Connect to Facebook", with something like "this app will get your profile information", and it also said it would get access to your friends' profile information. But nobody reads this. It's the fine print, and it has done nothing to stem the flow of data into Facebook's system, because it's more of a cover-your-ass type of UX, right? It's implied consent rather than informed consent.

One of the interesting properties of decentralized identity and self-sovereign identity, one that raises the stakes for product design around these consent and disclosure choices, is that we've inverted the model of our relationship with, and control of, our identity. In the Facebook and Google examples, they own your identity. They take on a lot of responsibility for stewarding that identity, and they don't always act in our best interest. When we move to decentralized identity, we invert this model, and in a paradoxical, oxymoronic way, your identity becomes centralized on you, right?
You are in control of all of the data; you become a single point of failure. In IT security circles there's a common saying that the most vulnerable part of any security system is the human, right? Humans can be exploited in various ways, and that becomes an even bigger risk once we centralize that point of failure on the human by introducing decentralized identity. So this puts product designers in a precarious position, or rather a heightened, more important position in these flows. A lot of damage can now be done by getting people to disclose things, especially if you convince a user to disclose something and they don't realize it's going on chain, or what that even means. We really have to avoid that.

There's a common term you've probably all heard, the privacy paradox: users often say they want privacy, and they're really angry when you violate it, as with Equifax or Cambridge Analytica or any of these other high-profile situations, and then you watch their behavior and they keep doing the exact same thing over and over again, which is giving up a lot of their data. A couple of psychological effects cause this. One has been studied extensively in behavioral economics: the risk perception gap, the idea that humans are really bad at anticipating and measuring risk. Users think, oh, it'll never happen to me, or, even if there is a huge data leak, the chance that I'm targeted is pretty low; I'm just one small drop in a sea of data. And there are compounding effects when you make disclosures over and over again. Lots of studies have shown that with as few as four or five data points you can correlate records in a whole sea of anonymized data, and it's trivially easy to de-anonymize people that way. That's really hard for most people to conceptualize, and it's something we have to be aware of. And I'm sorry you don't get to see all my fancy animations.

The other thing we hear a lot in our research at uPort, and from anybody who has done anything related to privacy and talked to people about it, is "I've got nothing to hide." Lots of people say this. (I don't know if this is going to work. Okay.) That idea comes from an improper framing and an improper conceptualization of what privacy is. Privacy is often looked at as hiding bad things, and it's looked at at an individual level. It also comes from people thinking that privacy and anonymity are binary. But something we've learned is that privacy and anonymity are spectrums, and what matters is movement along those spectrums. (Gosh, you've got a whole team of people.) Privacy and anonymity are spectrums, and what privacy is really about is transparency and control: the ability to move along that spectrum in a given context with as little friction as possible. That's the kind of thing we want to enable, so people don't feel like they're either anonymous or not, either private or not, right?
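To make that de-anonymization point concrete, here is a minimal sketch with made-up field names and rows: it checks how many records in an "anonymized" dataset share a given combination of attributes, and a count of one means those few attributes already single a person out.

```typescript
// Hypothetical illustration (made-up fields and rows) of how few attributes it
// can take to single someone out in an "anonymized" dataset.
type Row = { zip: string; birthYear: number; gender: string };

const dataset: Row[] = [
  { zip: "2000", birthYear: 1987, gender: "f" },
  { zip: "2000", birthYear: 1987, gender: "m" },
  { zip: "2018", birthYear: 1990, gender: "f" },
  // ...imagine millions more rows here
];

// Count how many rows share each combination of the chosen quasi-identifiers.
// A count of 1 means that combination uniquely identifies one person, even
// though the dataset contains no names.
function uniquenessReport(rows: Row[], keys: (keyof Row)[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const row of rows) {
    const signature = keys.map((k) => String(row[k])).join("|");
    counts.set(signature, (counts.get(signature) ?? 0) + 1);
  }
  return counts;
}

const report = uniquenessReport(dataset, ["zip", "birthYear", "gender"]);
report.forEach((count, signature) => {
  if (count === 1) {
    console.log(`Unique on only three attributes: ${signature}`);
  }
});
```

With real population data, a handful of fields like these is often enough to re-identify most individuals, which is why repeated small disclosures compound.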
We want to contextualize privacy as much as possible. Okay, I need to get to a slide. Let's see here. Since you want to see them, I'll zoom through from the top now that we have the presentation up. So anyway, this is what I was going to show you up top: an application, DeFi Credit. We've all seen this login below, "Log in with Facebook", get a credit score that we can use for DeFi lending. And then I reveal that this is a joke, and you all go, oh wow, that was clever, Cambridge Analytica, blah, blah, blah. Is this what we would have to do to stop a user from sharing? I don't know. Dark patterns, surveillance, the inverted model. This is a fancy word I skipped over because I didn't want to explain it. Cool, right? Okay, now we're back to where we were going.

So, this is a quote from the research that Adia, the other designer on the team, and I did most recently. We heard from one of the users: if Facebook has my data, I'm okay with it, but if Facebook is selling it to someone else, some other third party, I'm uncomfortable, because I don't have control of it. This gets at the sense of control that we have to try to give users at the UX level, and it's what really throws them off. It's not necessarily some objective level of privacy exposure or risk; it's feeling helpless about managing and controlling it.

That brings me to an idea in UX called dark patterns. These are UX patterns that designers use, unethically, to get users to do things that aren't in their best interest. One that's relevant to this whole topic is called privacy Zuckering, which is kind of what I showed you at the beginning: a flow where a user doesn't realize how much privacy they're giving up, how much data they're giving up, and it's done against their interest. And it's because of these dark patterns, because you have this choice about how to design these flows, that design is an inherently moral act. There's no offloading the responsibility by saying you were just doing your job, right? So it's incumbent on designers, who are on the front lines of this battle for privacy and against surveillance, to have some ethics about how we design these things. We have to think carefully, not just adopt the norms of surveillance capitalism, not just rely on terms of service and privacy policies that nobody reads, or obfuscated disclosure requests like the Facebook example. We have to come up with a better model.

So, again, this is about moving through that spectrum; that's one way we need to think about privacy. The other is something called contextual integrity, which comes from research by an academic named Helen Nissenbaum. The way she frames contextual integrity, for privacy's sake, is with five parameters: a data subject, a sender of the data, a recipient of the data, an information type, and a transmission principle. When one of these five things changes, that's when a violation can occur. So it's about flows of information, and that's what our user was getting at when they said they're okay with Facebook having their data. We might argue that even that's bad, but what they're concerned about is when one of these things changes, when the data starts going to some party they didn't know about.
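As a rough illustration of those five parameters, here is a minimal sketch; the type and field names are my own, not a standard vocabulary. A flow that differs from the norm the user agreed to, in any one parameter, is a candidate violation, which is exactly the Facebook-sells-it-onward case.

```typescript
// A minimal sketch of Nissenbaum's contextual-integrity parameters.
// The interface and field names here are my own shorthand.
interface InformationFlow {
  dataSubject: string;           // whom the data is about
  sender: string;                // who sends it
  recipient: string;             // who receives it
  informationType: string;       // what kind of data
  transmissionPrinciple: string; // the terms under which it flows
}

// The norm the user originally agreed to...
const expectedNorm: InformationFlow = {
  dataSubject: "alice",
  sender: "alice",
  recipient: "facebook",
  informationType: "profile",
  transmissionPrinciple: "shared directly by the subject",
};

// ...and the flow that actually happened.
const actualFlow: InformationFlow = {
  dataSubject: "alice",
  sender: "facebook",
  recipient: "third-party data broker",
  informationType: "profile",
  transmissionPrinciple: "sold without notice",
};

// A change in any one of the five parameters is a potential violation.
function changedParameters(norm: InformationFlow, actual: InformationFlow): string[] {
  return (Object.keys(norm) as (keyof InformationFlow)[])
    .filter((k) => norm[k] !== actual[k]);
}

console.log(changedParameters(expectedNorm, actualFlow));
// -> ["sender", "recipient", "transmissionPrinciple"]
```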
There's also some interesting research on the difference between hypothetical and actual disclosures when you ask users about privacy and their information, and on framing privacy risk as relative versus objective. The privacy paradox is right there in that conflict: in hypothetical situations you tell users what their privacy risk is for a given disclosure and they say, I'm not going to do it, I would never give up all that data. Then an app says, give up all this data and we'll let you run this personality quiz, right? And they do it. But what the research shows is that users behave in a more privacy-preserving way when the risks are framed relatively, meaning you frame what a disclosure gives up relative to some norm. So instead of telling a user that doing this carries some objective risk, say an 85% chance of the information getting hacked, you tell them this thing you're about to do will make you more or less private than your previous behavior, or more or less private than your peers. Then users typically engage in more privacy-preserving behavior. That's one thing we can implement at the UX level.

So there's the model of consent that we have now, and a lot of it is implied consent. This is the terms of service and all of that, where nobody actually consents to any of it because nobody reads it, but it's implied that you've consented to whatever they want to do with your data because it's buried in the terms of service and the privacy policy. We've talked about moving to informed consent, but informing people is hard. You can give them all the information in the world and they won't read it; they'll ignore it. You can put the fine print under the "Log in with Facebook" button and it's not going to help.

So at uPort we operate on more of a progressive consent model. This is a twist on progressive disclosure, a common UX pattern, combined with just-in-time notices. You want to contextualize all of your disclosures rather than giving users one wholesale disclosure about their data; you do it in bite-sized chunks, in the moment, when it makes sense. I'll show you a couple of things we're thinking about at uPort at the interface level. If you've used uPort before, you know that one of the key interactions in the uPort app is selective disclosure: getting users to affirmatively disclose things just in time, in a progressive way, app by app and interaction by interaction. Defaulting information to not be shared, having users actively opt in in the moment, and giving them a sense of control over what they're doing addresses some of these problems. So maybe you come to DevCon and you'd be able to share your name and your ticket; this is something you could do. We did something similar at ETHDenver this past February.
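Here is a minimal sketch of what a just-in-time, opt-in disclosure with relative risk framing could look like; it is hypothetical, not uPort's actual API, and the peer-average figure is assumed data.

```typescript
// Hypothetical sketch (not uPort's actual API) of a just-in-time disclosure
// prompt: every field defaults to "not shared", the user opts in per field,
// and the risk is framed relative to peers instead of as an absolute number.
interface DisclosureField {
  name: string;        // e.g. "name", "ticket"
  requestedBy: string; // the app asking for it
  shared: boolean;     // privacy-preserving default: false
}

function newDisclosureRequest(app: string, fields: string[]): DisclosureField[] {
  // Nothing is shared until the user actively opts in, field by field.
  return fields.map((name) => ({ name, requestedBy: app, shared: false }));
}

// Relative framing: compare what this user is about to share with what their
// peers typically share with this app (the peer average is assumed data).
function relativeRiskMessage(optedInCount: number, peerAverage: number): string {
  if (optedInCount > peerAverage) {
    return `Sharing ${optedInCount} attributes is more than most attendees share with this app.`;
  }
  return `Sharing ${optedInCount} attributes is in line with what most attendees share.`;
}

// Usage: a conference check-in app asks for name and ticket.
const request = newDisclosureRequest("DevCon check-in", ["name", "ticket"]);
request[0].shared = true; // the user taps "share my name" in the moment
const optedIn = request.filter((f) => f.shared).length;
console.log(relativeRiskMessage(optedIn, 2));
```

The design point is simply that the defaults do the privacy-preserving work and the framing gives the user context, rather than relying on a wall of up-front terms.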
This is an idea we're working on and thinking about: after you share things in the moment, letting you layer preferences and defaults so you build up a robust, complex set of privacy preferences over time, rather than having the user try to set it all in one go up front. Users are usually on their way to do something else, and apps like Facebook and Google know that if you put all of that up front, users will just opt into everything because they're trying to get something else done. So we need to have users set up these privacy-preserving preferences in smaller chunks that are easier to digest.

This is how I'm thinking about displaying data inside the app. One thing you'd be able to do is see who you've shared any given piece of data with, along with the ability to revoke access to your data under GDPR. You can automate these types of requests. I won't go into details, but using signed messages we can log that you made a request, and then whoever you made the request to has to respond and fulfill it before time's up (there's a rough sketch of that idea at the end). So I'll go through the rest of this really quickly: giving you some control, allowing you to set some of these preferences.

To recap: progressive consent, give users a sense of control, frame privacy decisions relatively, privacy-preserving defaults, allow... I missed a "w" there... allow users to build preferences over time.

Finally, we just launched a new demo today so you can go try out some of this stuff. Not all of those screens are live in the app right now; they're still part of our ongoing research. But this is live right now: if you go to ecosystems.uport.me, you can try it out and see how uPort can enable a lot of these new sharing use cases and new applications. We'll be out with our shirts and swag in a bit at the DevCon park. So, thank you all.
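As mentioned above, here is a rough, hypothetical sketch of the signed-request idea for GDPR data requests; this is not uPort's actual protocol, the signing function is a stand-in, and the 30-day deadline is only an assumption about a reasonable GDPR response window.

```typescript
// Hypothetical sketch of the signed-request idea; not uPort's actual protocol,
// and sign() below is a stand-in for real key-based signing (e.g. a signed JWT).
interface DataRequest {
  requester: string;          // identifier (e.g. a DID) of the user making the request
  recipient: string;          // identifier of the party holding the data
  action: "erase" | "export"; // what the user is asking for under GDPR
  claim: string;              // which piece of shared data the request concerns
  issuedAt: number;           // unix timestamp when the request was logged
  deadline: number;           // timestamp by which the recipient must respond
}

function makeErasureRequest(requester: string, recipient: string, claim: string): DataRequest {
  const now = Math.floor(Date.now() / 1000);
  const oneMonth = 30 * 24 * 60 * 60; // assumption: roughly the GDPR response window
  return { requester, recipient, action: "erase", claim, issuedAt: now, deadline: now + oneMonth };
}

// Stand-in for signing the request with the user's key so it can be logged
// and later proven to have been made.
function sign(request: DataRequest, privateKey: string): string {
  return `signed(${JSON.stringify(request)}, key=${privateKey.slice(0, 6)})`;
}

// Because the request is signed and logged, anyone can check whether the
// recipient fulfilled it before time was up.
function isOverdue(request: DataRequest, respondedAt?: number): boolean {
  const now = Math.floor(Date.now() / 1000);
  return (respondedAt ?? now) > request.deadline;
}

const req = makeErasureRequest("did:ethr:0xabc", "did:ethr:0xdef", "email");
console.log(sign(req, "0xPRIVATEKEY"), isOverdue(req));
```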