I'm Nettie, and this is Ona; we both work at MetaMask. I'm the head of design, Ona's a designer and researcher, and today we're going to be talking to you about ethical design practices for web3. We're going to start with a map of the territory, with some terms and examples from the current landscape. We'll move to some things that we've been observing, and we'll talk about how we plan to move forward with what we've learned. The intention of this talk is to act as a conversation starter on how we can work with all of you towards designing a better future for the web.

So, the practices of product design are well established. We know that functionality, reliability, usability, and delight are the components of good UX. Importantly, these components build upon each other, so working on the usability of a non-functional product will not lead to the desired outcomes. It seems to me like there's a piece missing from this hierarchy, and it's ethics; specifically, user control. Without it, we believe there's a fundamental flaw in the process of creating human-centered products.

This is the era of surveillance capitalism. Basically, it feels like we're being recorded every moment. We have little control over what data we share, who we share it with, and how it's used. Our personal data, our age, address, phone number, browser history, credit card info, is all in the custody of companies that sometimes use this data for their own bottom lines. So, how did we get here? Dark patterns are tricks used in websites and applications that push you to do things you didn't mean to do. You've likely been frustrated by an experience that you got into because of these patterns; they prey on our vulnerabilities. So, the first pattern I'll mention is called Privacy Zuckering.
In this case, the user has no choice but to accept these terms in order to use the service, which includes posting tweets on your behalf and control over your mute and block settings, basically granting full, active user permissions. So, you may have chosen to trust this random application with the ability to post tweets on your behalf, possibly because you think they would never actually do it, but why did they even ask for this in the first place, right?

Another dark pattern example is called confirm shaming. Since when does not wanting a site to serve you ads make you a bad person? You've probably seen these dialogues before, and possibly been influenced by this emotional coercion, or just accidentally clicked that bigger button.

The Roach Motel is when you get into a situation and it becomes difficult to get out of. So, have any of you ever tried to delete your Amazon account? Yes? Okay, so you know how notoriously difficult it is: twelve steps, and then basically you just have to write a letter to customer service. The business interest in this case, not letting you delete your account because you pay a recurring yearly membership fee, is prioritized over your ability to remove yourself from the contract. So, applications are designed with frictionless onboarding but super difficult offboarding.

This is a quote from Jesse Weaver: we want empowerment, we want to be better people. We want technology to enhance our capabilities and increase our sense of agency without dictating the rhythm of our lives. Today our world is enriched with incredibly useful services that technology has enabled, but it's not enough, we're not done, and we deserve better. The good news is that ethical design practices are emerging. We're super happy to see that we're not the only ones thinking about this as we build a more user-centric, ethical digital world.
So, these are some resources for you all, Humane by Design and DesignEthically.com, if you want to dig deeper into some of these patterns and how to fight against them in your own applications. One practice we're seeing, linked to the pro-privacy movement, is Sign in with Apple, which allows you to hide your email address from the applications you're using. So, not all the big apps are necessarily equal here. The other is proactive support: surfacing features that help you use a product the way you want to use it. Here we see an example of a notification that is often dismissed, and now you have the ability, in the menu, to stop these notifications altogether.

Another part of ethical design is mindful data collection. At MetaMask specifically, we use a service called Matomo, which is an open-source alternative to Google Analytics. Using data, of course, is a great way to improve user experience, but it's important to think carefully about the things you do with that data, ideally not using it to trick people into using your services more than they need to.

So, I've touched on some web2 things that are leading into web3, and now I'm going to talk about some of the web3 foundations that we build upon that have been part of this transition to ethical design. First, user-owned accounts. Being your own bank is powerful, but it also means being your own risk manager, which admittedly can be super scary. Having the user in control, though, is the main principle here. Earlier today in this room, Johnny and Kate both gave talks addressing this topic, so you should check them out on the conference site; they're recorded, yeah, cool. Another principle we're building with is transparency. We're creating on top of open infrastructure, right? That's great for innovation and also for accountability within applications.
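The mindful, opt-in-only data collection described above can be sketched roughly like this. The class and method names are my own illustration, not MetaMask's or Matomo's actual API:

```javascript
// Sketch of consent-gated analytics: nothing is recorded unless the
// user has explicitly opted in, and opting out discards what was kept.
class OptInAnalytics {
  constructor() {
    this.optedIn = false; // default: collect nothing
    this.events = [];
  }
  optIn() {
    this.optedIn = true;
  }
  optOut() {
    this.optedIn = false;
    this.events = []; // drop anything collected so far
  }
  track(name, props = {}) {
    if (!this.optedIn) return false; // silently no-op without consent
    this.events.push({ name, props });
    return true;
  }
}
```

The key design choice is that the default state collects nothing, so forgetting to call anything still leaves the user un-tracked.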
At MetaMask, another principle we've adopted is explicit requests. User control is one of our highest priorities, and the idea here is that we try to show you everything, which is maybe not always what we need to do, because sometimes when you use MetaMask, your face might look like this. Seriously, this is how I picture users sometimes. And to back that up, these are some direct quotes from my user research: "What is this? I don't understand." "Why am I being asked for this? I just want to use the app." "When these pop-ups appear, I just mindlessly click through them." These are real things that users have said; some of you in this room may have said them before. So, obviously, we've heard this feedback and we're thinking about it. The main question we're asking now is: how might we design such that people can trade off control for convenience as they see fit? Ona's going to tell you more about how we're actually doing that at MetaMask today.

All right, so, am I on? All right. We're using three main principles for how we apply these ethical design practices in our product. One, informed consent: we want to ensure that people understand what they're signing up for. Two, granular control: we want to provide people with the right amount of control to manage what they've consented to later on. And the third one is treating trust as a spectrum. We'll go deep into each of them, explain them more clearly, and show some examples of how we apply them.

So, informed consent. Informed consent is when a person grants a permission to someone after carefully and properly understanding what they're signing up for and what its consequences are. The challenge here is how we design something that communicates the entirety of the permission without overloading people with information.
This is a screen people see today for a specific permission which is used very often in dapps: asking people for consent to gain access to their funds. Now, there are several problems with this screen today. This is one of the quotes from our users: "I don't want to be the next idiot accepting that." The first problem is that this is not very human-readable, so the ambiguity of the whole thing throws people off. The other is that it's a fundamentally risky permission: you're asking people whether you can access all their funds, and that is not acceptable.

To improve this, here is a design that we're working on, and it will be in production soon; Christian is the designer who worked on it. We tested this, and this permission tells people at a glance, in a few seconds, what the permission is, what they get into as a consequence of it, and why it's being asked. We also added one more layer of control, which is editing the allowance: you can set the allowance that you're comfortable giving a particular dapp access to.

Granular control. We want people to be able to manage the permissions they've given to different sites when they're using different web3 applications. So, by a show of hands, how many of you know which dapps you're connected to with your MetaMask wallet? Yeah, not many, right? That's because we really don't have that as an option today: you cannot view what you're connected to, let alone actually manage and delete those permissions. So, revocable permissions are one of the things we're adding, and we want to make it very easy for people to manage these permissions. We do this by intentionally providing this setting in the right place and at the right time.

The third one is treating trust as a spectrum.
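The unlimited-access problem and the editable allowance just described come down to numbers: token approvals in the ERC-20 style are 256-bit unsigned integers, and dapps often request the maximum value, which is effectively unlimited access. Letting the user edit the figure bounds the risk. A minimal sketch, with my own illustrative names, not MetaMask's code:

```javascript
// ERC-20 style allowances are 256-bit unsigned integers; requesting
// the maximum value amounts to asking for unlimited access.
const MAX_UINT256 = 2n ** 256n - 1n;

function isUnlimited(allowance) {
  return allowance === MAX_UINT256;
}

// Clamp a dapp's requested allowance to whatever cap the user edited in.
function effectiveAllowance(requested, userCap) {
  if (userCap !== undefined && requested > userCap) return userCap;
  return requested;
}
```

So a dapp can still ask for the maximum, but what actually gets approved is the smaller of its request and the user's edited cap.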
So, trust is a really big part of human experience and of how interpersonal relationships work. When we start a relationship with another person, we often don't trust them, but as the relationship grows, the trust increases: you can trust them more and release a bit of control to them. This is how human and machine interaction works as well. Web2 gives us a super trust-dependent interaction experience: you just have to trust the company with your information and with whatever you're giving it permission to do. In web3, we have tried to create a trustless experience, but we want to treat this as a spectrum. We want to give people the control to decide where they want to be with a particular application. If they start using an app more, they can choose to delegate control; they can choose to reduce the amount of effort they put into using that app.

Share progressively is another of the principles that we follow. When you sign up for an application, you might not know it well yet, so you shouldn't have to share everything upfront with that application; you should be able to pick and choose what you allow it to do. This is another feature that's going into production soon, and it's just one of the examples, just the beginning: we want to extend this to other kinds of actions as well. Another thing I want to mention here is that for this set of permissions that applications will be able to ask from people, MetaMask will be just an interface, just an interface to show them to users. Now, there are a couple of challenges we're facing in creating this framework of a permission system where dapps will be able to ask for these permissions: we want to enable and empower dapp builders to ask for the right kind of permissions, at the right moments, in the right way. So, that's informed consent.
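A permission framework like the one described, where the wallet is just the interface recording what each dapp may do and letting the user revoke it, can be sketched as a small per-origin registry (compare the wallet permissions model MetaMask later specified in EIP-2255; all names here are illustrative, not the actual implementation):

```javascript
// Minimal per-dapp permission registry: grant, list, revoke.
// Keyed by origin, since each dapp is identified by the site it runs on.
class PermissionRegistry {
  constructor() {
    this.byOrigin = new Map(); // origin -> Set of permission names
  }
  grant(origin, permission) {
    if (!this.byOrigin.has(origin)) this.byOrigin.set(origin, new Set());
    this.byOrigin.get(origin).add(permission);
  }
  list(origin) {
    return [...(this.byOrigin.get(origin) ?? [])];
  }
  revoke(origin, permission) {
    this.byOrigin.get(origin)?.delete(permission);
  }
  revokeAll(origin) {
    this.byOrigin.delete(origin); // disconnect the dapp entirely
  }
}
```

The `list` method is what makes the "which dapps am I connected to?" question from the talk answerable at all; `revoke` and `revokeAll` are the removable permissions.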
And you ask only for the things needed at that point in time by the application. Then, as a user starts to trust an application more, they should be able to opt into convenience options: they should be able to automate certain steps, and they shouldn't need to retain the complete level of control they had earlier. We're thinking of some options here, like authorizing MetaMask to act for a certain period, say five days, so that the user can have in-app authorizations for that period, and having a contextual experience where fewer and fewer pop-ups are needed. Now, when we provide these kinds of options, we're naturally asking people to open themselves up to a certain amount of risk, so it's important for us to also provide the required safety nets. One of the ways we're thinking about is spend limits: we can have spend limits per application, daily spend limits, and per-transaction spend limits. There might be more ideas, so if you have ideas, please come talk to us; we are all ears.

This is just the beginning. For now, these are the design practices we're applying in our product: comprehensible permissions that are easy to manage and revoke, permissions shared progressively, automated steps for better UX, and safety nets. These are just the first steps, and we hope to come closer to an internet where we allow people to be in the driver's seat while we provide them with all the smart controls they need to have a safe and pleasurable ride. We're doing a small part in this; we're just a small touch point, so we're really glad to be able to work with a lot of applications to make it happen. Looking back to what Nettie said earlier, ethics is the missing product design principle right now. So we want to appeal to everyone: can you consider ethics by default in the product design process?
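The spend-limit safety nets mentioned a moment ago, per-transaction and daily caps that bound the risk a user takes on when they delegate control, might look like this in outline. This is a sketch of the idea only, not MetaMask's implementation:

```javascript
// Safety net for delegated spending: a per-transaction cap plus a
// rolling daily cap. Amounts are BigInt token units.
class SpendGuard {
  constructor({ perTxLimit, dailyLimit }) {
    this.perTxLimit = perTxLimit;
    this.dailyLimit = dailyLimit;
    this.spentToday = 0n;
  }
  // Returns true and records the spend if both caps allow it.
  authorize(amount) {
    if (amount > this.perTxLimit) return false; // too big for one tx
    if (this.spentToday + amount > this.dailyLimit) return false; // daily cap
    this.spentToday += amount;
    return true;
  }
  resetDay() {
    this.spentToday = 0n; // call at the day boundary
  }
}
```

A transaction that passes both checks goes through without a pop-up; anything over either cap falls back to an explicit confirmation.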
Keep it as a fundamental part, a core component, of product design, and hopefully some of the principles we've shared today will be useful for you in your own product design. So, please work with us; this is just the beginning. We already work with a lot of applications, 0x and OpenSea to name a few, and it would be really useful for us to talk to you and understand how we can simplify your product experiences so that you can serve the needs of your users while keeping the security and control standards for people high. You can drop me a line anywhere; there's also a general email ID for us, and we're super happy to get in touch.

These are some of the other events happening at Devcon from MetaMask. The Next Evolution of Web3 Wallets is a workshop where we're going to have an announcement, so please come join us in some hands-on building; Tan and Eric, two of our team members, are the ones running that workshop. Lavamore was a presentation that happened yesterday. We're supporting Natalie, the collectible event; it's a game, try it out with MetaMask Mobile. And you can also find us at the ConsenSys table; there are ten of us from the team present at Devcon, so please come up to us, we want to learn more from you and build a better, more ethical future for the internet.

I have a question about the slide where you have the slider for trust. Is it an analog variable, or do you have, like, three settings?

I think that's an interesting question, and it's something we're considering: it should be analog, defined and set by the individual, because trust is a subjective concept, right? You might think about trusting an application differently than I do, but it's important for us to provide a framework so people can set those limits and start using the app.

So your idea for now is a continuous slider? Okay, that's fine.
Oh yeah, that's true; this is not the final interaction design yet.

Can you tell us what your approach was, and how you discovered the edge cases that, you'd say, blur the lines?

Yeah, for sure. A lot of it is actually just best practices, which took a lot of effort to adapt. We also do a lot of user research, so we learn a lot from people about what they need, and we realized very quickly from that research that providing all this control, in all these places, was something we needed to do; that's why we're also building the permission infrastructure to start with, so we can measure changes.

What about creating an optionality mode, so that you serve both the new user and the expert user? This morning in the Livepeer presentation, they showed that this worked very well, but it's a duality: new users versus expert users. That's something you have to think about.

We've been thinking about how we might define them; new users and expert users have their own pockets of needs, and I would love to learn more about this.

The idea is that you reduce the amount of information for new users and hide the rest behind a switch: "I'm an expert, don't hold my hand."

Yeah, that makes sense. Any other questions? Yeah.
Would it be possible for transactions, in the spirit of transparency and being clear about what the user is about to do with their money, to provide a way for dapps to, I don't know, at least explain why this is happening? The developer would have to do it explicitly, and it would be necessary for MetaMask to add support for that; it might be repetitive, but I think it goes along with what you're describing as ethical.

Absolutely, I think that's an excellent point, and it's something we've discussed a lot. You might have noticed, since you mention it, that some dapps today are actually providing that contextual information before every MetaMask pop-up. That's not an ideal experience, of course; not knowing why something is being asked is a gap for people, and dapps shouldn't have to fill it in a hacky way. So, we're trying to design these permissions so that dapps can make them as contextual as possible.

Was there a case where you found yourself wrestling with this ethics question, where one design might result in a better or more profitable business?

The thing is, we've been lucky enough on these projects not to be exposed to that kind of tension, since we don't monetize user interaction data today. But when choosing a data analytics platform, there were a lot of discussions within our team about whether we should collect any analytics at all. We ended up deciding to do it very explicitly: you have to opt into it, and you can always opt out, trying to keep that balance.
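The questioner's suggestion, letting a dapp attach a human-readable reason that the wallet shows inside its own confirmation instead of the dapp explaining things before the pop-up, could be as simple as an extra field on the request object. The field names here are hypothetical, not an actual MetaMask API:

```javascript
// Build a permission request that carries the dapp's own explanation,
// so the wallet can render the "why" next to the "what".
function buildPermissionRequest(origin, method, reason) {
  return {
    origin,                 // which dapp is asking
    method,                 // e.g. 'eth_accounts'
    reason: reason ?? null, // human-readable context shown to the user
  };
}
```

The wallet would display `reason` verbatim in the confirmation dialog, with the caveat (noted in the answer above) that a dapp-supplied string is a claim, not a guarantee of what the transaction does.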
There's actually a series of Medium articles we wrote about how we approached that data collection, if you want to go deeper; there are a lot of different perspectives in there.

What kind of opt-in rate do you get for the data analytics platform?

It's around 70 to 80 percent. So yeah, we've still got a long way to go.

Do you do any click-tracking or A/B testing of how people are actually using your UI?

We've been collecting data for the last couple of months, so we're starting to see some patterns, and we're starting to set up A/B testing, but we haven't done any yet.

Can you, in a plugin, do A/B testing, like delivering different versions?

Yeah.

Okay, thank you. Just want to be respectful of the next session. Thank you.