Okay, buzzers, I hear buzzers, excellent. Okay, hi everybody, I'm Georgia Bullen. I am the Executive Director at Simply Secure. I'm going to talk today about some of our work around designing tools with safety and security in mind. So Simply Secure, if you haven't heard of us: we are a 501(c)(3) nonprofit in the US, and our mission is to support education and capacity building at the intersection of vulnerable populations, design, and usability. We believe everyone deserves technology they can trust, and that is something we can design for: designing to protect people's safety and privacy. So how do we do that? Just a quick intro to us: we have three main types of work. One, the biggest, and what I'll talk about most today, is direct design and UX support for projects. That means user research, wireframing, testing, things like that. We also do some of our own independent research, and we publish as much of that as we can as open resources and tools available to the community; they're in the knowledge base on our website. And lastly, we run community events. In the last year we did a week-long residency where people came and worked on their projects with us and other mentors, and we've run workshops on human-rights-centered design in AI, and in technology generally. So the challenge, which probably all of you feel, hopefully all of you feel, is that privacy, security, and safety are critical, but most teams and open source projects lack design and UX capacity. And the challenges all of you are facing are complex and overwhelming, honestly, for the developers as much as the designers a lot of the time, and for the users too. So the goal is to try to solve those problems. So how can we design for safety? You're in the design devroom, so hopefully this is something you agree with: by focusing on user needs.
So we take a human-centered design approach, but we add a layer of thinking about threat modeling and risk, which I'll talk about. That comes from two aspects. One is: how does the technology or product affect someone's safety? It's being introduced to help, but is it also potentially introducing new challenges? If you're collecting data, you are introducing risk to your system and to the people holding that data. Are people expected to use the tool in ways that might introduce risks into their lives that we don't totally understand? And a lot of what matters there is context. Our design choices can cause security holes, confuse people, and create workarounds. If we over-preach, or try to push people to do exactly what we want them to do, they can get annoyed, tune it out, and walk away. And there's a lot of research that suggests people will do whatever they can to avoid having to think about security in depth. So you might be saying: how can I start to do anything about this? Some of these slides you may have seen before if you came last year for Molly's talk, but our secret sauce suggestion is to do UX research and design research. You can do that remotely, but it means talking to people, figuring out what their needs are, understanding the context in which they work, and feeding that back into the process. It means understanding people's mental models for how the tools work. A lot of times, especially the closer you are to the tool you're building (how many people in here work on an open source project? Yeah, okay, thanks to all of you), we forget how close we are to it and how well we understand it. So it's really important to talk to people and understand what matches, because that helps you see the difference between what you know about the way it should work and the way people understand it from the interface.
Some folks sometimes think, oh, the users are wrong and we need to fix it for them, or we need to teach them how it should work. I vehemently disagree with that concept. People learn technology in so many different ways, and we need to figure out the gaps between how they've gotten there, what they might know, and what we're trying to support them in doing with the technology, and find a way to meet in the middle. That has to do with onboarding, with design, with UX generally. One of the great ways to start is to ask: what do you think this does? How does it work? Talk me through your process. And then watch them use it. Lots of people have trouble with this, because if you sit next to someone watching them use your tool, you want to tell them how to do it and show them, and it's really hard to watch people stumble through stuff. But it's really valuable for getting honest feedback about what you thought made sense and what totally doesn't to the person. So we suggest actually talking to people, doing those interviews, and observing as much as you can. A big piece of this, too, is understanding their context. Tools need to work for all of your users in all contexts. It's standard practice to design for the best-case scenario, but the reality is people aren't always their best selves when they're trying to use your tools: they might be in a rush, they might be under stress for any variety of reasons. This quote is about the terms-and-conditions accept buttons we see every day on every website: I know I should read these terms and conditions, but I just really need to do this right now. I'm applying for my benefits, I'm applying for a job, I'm applying for a grant. Accept. We've all done it, and we know other people who have done it.
We can't assume that people are going to take the time, because it's just not the reality, right? They have other goals that have nothing to do with that experience. We've done some projects working with journalists, and that's a specific context with specific needs. If journalists might be some of your users, they need to know that their sources and their information are safe. If they're putting data into a system, how are they protected? How is that system protecting the source itself? A lot of them will avoid using any tools and systems at all, because they don't trust anything other than what they can keep in their possession or what they understand. In a very different context, this next quote is from an early career researcher in academia, talking about harassment, bullying, and the threat of retaliation from within the community on a tool. They really like the fact that this tool asks them to review the code of conduct every time, because it helps them remember that the community is intended to be safe and constructive. So by creating that friction point, asking people whether they actually adhered to the code of conduct, we help this person feel safer on the system. User research can help you develop concepts of who your users are, sometimes called personas. It can help you understand the journey they take through the tools, and where you can provide better controls to give people transparency and agency: to make changes, to review their data, to review what permissions they've set, things like that. So our big advice to most projects is: don't just focus on the majority. A lot of times we're saying, oh, how can we solve the problem for the most people?
But if you actually focus on the stress cases, the people in the most high-risk settings, what works for them is likely to work for most people. If you can solve that problem well, you can probably support your entire user base well. That slide cut off funny, but, yeah, a lot of this starts with asking: who actually are your users? What are their needs? What challenges do they have now? One thing I'd like to throw in, which came up as I've been talking to projects recently, is that you probably already have a lot of data. A lot of people don't know where to start: okay, I don't know how to contact my users. They might already be contacting you every day through support forums, discussion platforms, and chats. And depending on the type of tool you're building, developers are users too, and you can look at the challenges they have as well. Users need transparency and controls to evaluate and make the changes they need to make. We're big on starting with good defaults, but making sure they're things people then know how to do something with. This quote comes from an interview where a user is allowed to change their profile, and I'll show a screenshot of this in a minute: "My pseudonym keeps me safe. If I need to change my account name, I know that I can do that in the profile." Starting with that good default allows people to opt in rather than forcing them to opt out. So let's talk about some more specific examples. If folks here were at Camp, the Chaos Communication Camp, in the summer, you might have seen my colleagues Molly and Eileen talk about this project: over last summer, we worked with NoScript. People familiar with NoScript? Raise your hands. Nodding, okay, cool. If you've used NoScript and you think of yourself as a pretty technical person, you've probably still been really frustrated, because it's hard.
It sort of seems like it's geared towards a really specialized set of knowledge. We talked to a lot of people, and their answer was: when I can't tell if it's working, I just turn it off. NoScript is strict by default. It's meant to be interactive; it's not something you can set and forget. It's actually intended to be a place to create friction, to make you question whether what a website wants to load is actually what you want. As for the challenges: it has too many confusing choices, there are terms that are contradictory in the interface, and it's hard for users to know which settings will just work and actually protect them. So we worked with Giorgio, the maintainer, and did lots of prototyping work. We made high-fidelity mock-ups, and interviewing people allowed us to create three core personas: a super user, a privacy advocate, and someone who's just curious about how they can protect themselves better online. We also looked at other ad and script blockers, analyzed the data we were seeing from the way NoScript works, and kept iterating on prototypes, because it's just such a complex system. I don't have it in this presentation, but if you watch the talk from Camp over the summer, which is linked from that blog post (and these slides will be online), we have models of the flow of the different options and choices. It's very complicated: there are many ways you can block a site or a script, or allow it in one instance or another. Our main takeaway was that we needed to figure out how to make those controls easier for people to navigate at a basic level, and then easy to configure for power users. Those are the two primary audiences we aimed for. Because the point isn't that NoScript specifically keeps you safe, or makes browsing easier, or only works for power users.
The idea is that NoScript buys you time to make informed decisions about who you want to trust. It starts strict and backs off. Hopefully this is coming soon; I know Giorgio is working on it. We actually introduced a new feature, which is that you can have settings on a per-site basis. So you don't always have to block a script that might appear on multiple sites; you might need it on some, right? On the New York Times, you might need the New York Times's scripts, if that's what you want to be doing. That's a new feature introduced by this work. You get clearer fine-tuning, hopefully, and ways to just turn it off when nothing's working the way you want and move forward at your own risk. Hopefully that will actually be rolled out sometime this year. Another project I'll talk about quickly is in the open science space, called PREreview. PREreview is a platform for crowdsourcing reviews of preprints. I'm happy to talk about that more later if anyone wants to dig in on the science part, but basically preprints are early research: drafts of papers before they are fully accepted and published by a journal. The idea of the tool is to create and cultivate more open feedback in science, and to support the development of expertise in open peer review. It's a totally new process; there are no tools in this space at all yet. We've been playing with two iterations, which are linked up there. Many researchers are still learning how to work in the open at all. Openness in science isn't new, but its adoption is growing in popularity; I'm seeing some nodding from folks who seem familiar with that. And generally, no matter who you talk to, researchers' main fear is retaliation from others.
If they give a strong critique of something, they worry that people will find some way to attack them, either in the feedback process itself or through other channels, right? It's a small community and network: people look for jobs, have grad students, that sort of thing. So there are lots of ways to be a bad actor in the space. These are just some screenshots of the tool. There's a browser extension for the rapid review workflow, which is what these screenshots are from; it's the Outbreak Science link. It lets you read a paper and write reviews. It's very early beta, I'll put it that way. This is a tool where anonymity is something we've been playing with a lot. This is a screenshot from the profile: users can actually switch back and forth between their anonymous user and a user ID that's tied to an identity system called ORCID, if you're familiar with it. We're just trying to put some stuff out and get feedback, so it's at a really early stage. But it's an interesting idea that you can swap between working in the open and working anonymously. You have a context for your anonymity status, and your actions can apply in certain contexts, which makes it really complicated, but it's meant to enable people to interact in this community in ways where they can feel safe: geared towards everybody working in the open, but allowing them to start from a place where they feel more secure. In the interest of time, here's some stuff that we're working on right now, if you want to talk to us about it. We are working on improving the usability of pip, the Python package-management CLI tool. So if you're a Python user and want to chat with us, you can talk to me or Bernard.
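One security-relevant idea in that pip space is hash-checking mode (`pip install --require-hashes`), where every requirement in a requirements file is pinned to a known hash and anything that doesn't match is refused. As a minimal sketch of the underlying check, using an illustrative file name and contents rather than a real package:

```python
# Sketch of the idea behind pip's hash-checking mode: verify that an
# artifact's contents match a hash pinned ahead of time, and refuse
# anything unknown or tampered with. Names and bytes are illustrative.
import hashlib

# Pinned hashes, conceptually like `pkg==1.0 --hash=sha256:...` lines
# in a requirements file. The entry below is a made-up example.
PINNED = {
    "examplepkg-1.0.tar.gz": hashlib.sha256(b"trusted archive bytes").hexdigest(),
}

def verify_artifact(filename: str, contents: bytes) -> bool:
    """Return True only if the artifact matches its pinned hash."""
    expected = PINNED.get(filename)
    if expected is None:
        # Never pinned: refuse rather than trust it implicitly.
        return False
    return hashlib.sha256(contents).hexdigest() == expected
```

This is not pip's implementation, just the shape of the guarantee: a resolver can pull in whatever dependency tree it likes, but nothing gets installed unless its bytes match what the user pinned.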
I highlight that mostly to make the point that developers are users too, and we can be thinking about usability in those contexts; there are a lot of security challenges that come up. For example, one of the things in that project that we will be looking at is how the dependency resolver works, and making sure that people know the packages they're pulling together are all safe packages that might not have issues. We're also doing design work with the GlobaLeaks team to improve the whistleblowing and admin interfaces of that tool. We're working with some funders to make it easier to apply for money; I hope that starts to help those of you who apply for funding. So we're working with the MOSS program at Mozilla and the Open Technology Fund. And then we're doing some work around disinformation, and working with Tor around metrics. That's some of what's coming up; if you are interested in any of those, I'm happy to talk about it more after. As I mentioned, we have a knowledge base on our website, and I included a link here to a UX starter pack. If you just want to read through some resources, if you're getting started with user research for the first time, or with user research with high-risk users for the first time, that's a good link to start with. I'll leave that up for a second. And I think I wanted to leave the rest of the time for questions, because we have a little over five minutes. So I realize that was just a super quick overview of how we approach this and what we do. But this is me; I'll leave that link up. Questions? Because I covered a lot of things very quickly. Yeah, thank you. Thank you. If you have a question, shoot your hand up, and if you can, repeat the question for the recording. [Audience question about whether the residency will run again.] I hope so. We need to do some fundraising to be able to run it again, but we're hoping to.
What we've been trying to do is feed the experience from the residency into workshops. Along those lines, we run a community Slack. I realize it's on Slack, but if you're working in the space and want access to folks in the community, to ask questions or just find out about interesting opportunities: I should have added the link here, but I have a short link, and I can pull it up, or just add you from my phone right now if you want. We've also been trying to do a bit more organizing around human-rights-centered design, tech, and security design, so we have a mailing list for that as well, and I'm happy to add folks to those if you want to know what's coming up. We're trying to do things at events where people already are. Maybe at FOSDEM next year, that could be cool. There will be stuff at RightsCon, if you go to RightsCon, and we're thinking about the Allied Media Conference and the Internet Freedom Festival as some of the ones we're looking at this year. At IFF at the moment, we're hoping to have two days of design jam workshops, if any of you will be at IFF or at least in Valencia around that time. Hopefully that can be a mini residency type of thing. We've been trying to do it in formats that we haven't had to fundraise for. So, the next question is about scale: how many interviews does it take to get enough information for different persona types? The terrible answer is: it depends. The other terrible answer is: we do what we can based on the resources we have. In the NoScript case, I think we did a total of about five pre-interviews, just talking to people, understanding whether they already used it, what they do, and what challenges they have.
We did a mix of existing users and people who had never seen it before, and then we did testing with a handful more. So in total we got feedback from about 10 people, but that was still super helpful. There was pretty consistent alignment in that feedback, so we felt pretty confident about where we were going with it. With some of the work we're doing around funding issues, we're getting closer to 20 interviews, partly because there are just so many different groups of people they're trying to serve. So we try to scale it based on the diversity of focus. Does that make sense? The next question is about remote interviewing. We've had pretty good luck using things like Jitsi and Zoom for remote sessions, or Google Meet, whatever video conferencing tool the people we're talking to are comfortable with; we try to meet them on those platforms. Sometimes we use Whereby; again, whatever the video conferencing tool is. The challenge there is that if anything goes wrong technically, some things won't work. On the PREreview science project, we were talking to someone and, for some reason, there were tons of bugs in Safari, and they were a Safari user. We couldn't figure out what was going on, and it made the whole interview go a bit awry. But that also helped us find some bugs we didn't know about, so we try to roll with it a bit. The problem we've run into the most is when you're testing tools that have a key login stage: everybody stores their passwords differently. I could write a whole separate research paper about that, just based on interviews we've been doing as people have to log in. And that can be challenging, because a lot of people are nervous.
They're scared to show you what they do. So we just try to be supportive: it's okay, I'm not looking, I don't mind how you do it. We let them not feel bad about how they store their data. That's the main place we've run into interesting behavior: people will turn off the screen share to log into a system and then turn it back on once they feel comfortable, that sort of thing. The bigger challenge has been people in low-connectivity environments. If they don't have good connectivity and really don't want to screen share or be on video, that's hard, because the video conversation helps so much in understanding emotions that go unsaid, and in supporting the person as they talk about something that might be difficult or might relate to whatever causes them stress. Those aspects can be challenging, but it's still better than not getting access to those people, or only getting access to them in contexts like this, which can be overwhelming in and of themselves, right? So hopefully that helps answer that. Any other questions? One concise one; let's get a nice short question. Okay. Yeah, sure. No, no, it totally does. I'll say it totally does. We're just getting started: Bernard and I are working on this project together, and one of our big questions has been how we are going to test the new features. We have to set up test dev environments, which means we kind of have to onboard as developers to be able to run through all of that, which could be a project just on its own: improving onboarding for developers. I have a feeling we will end up giving them lots of feedback on all of these things. But I think the other big piece in those types of projects, and in any big open source project, is that there are so many pieces, and we name things in whatever way seems right at the time.
And that's really hard for new folks in the community, no matter what, whether they're users, designers, or developers, right? So we're trying to figure out: we could probably work on a million things related to this project in the next year, but we only have X amount of time, so how do we make sure we prioritize, log the right things to come back to later, and maybe help with how that process works? That's what we mean by capacity building: introducing processes to the teams so they can get that feedback more consistently. I think there will be lots of interesting challenges. We do have to wear this developer-and-designer hat, which I think is fun, but it means it's sometimes hard to approach projects like that. Hopefully there might be a talk about it next year. Okay, so thank you very much. Thank you. Thank you.