So privacy is not an add-on. You want to design for privacy from the ground up. First off, who's this weirdo? I'm a privacy and security engineer. I've worked at large international companies and small startups and kind of everything in between. I protect user data for a living. That is what I do. Come see me on Twitter, Alicia Clock. So now the important stuff. Let's start with: what is user privacy? User privacy involves the right or mandate of personal privacy concerning the storing or repurposing of data, the provision of data to third parties, and blah, blah, blah, blah, blah. Let's try that again. What is good user privacy? User privacy, sometimes referred to as data protection, but I'm going to use user privacy for the most part in this talk, is, when privacy is done right, it makes users happy and it helps companies thrive. So what does that mean for users? My slides will go... there they go. For users, it means choice and control. Users need to have meaningful privacy choices. That means something other than "accept all of our invasions of your privacy or don't use our product." That's not a meaningful choice. Tech products, software and services are such an integral part of people's daily lives now. Think LinkedIn, Facebook, Twitter, WhatsApp, all of those. That's how people stay in touch with friends and family. That's how they find jobs. It's not really an option to not use them. So it's not a meaningful choice to say, if you don't like our privacy policies, don't use our product. You need to make sure that users have meaningful privacy choices about what they share, when they share it, and with whom. And closely related to that, users should have control, and that means meaningful control over their privacy settings. So again, not just one toggle that doesn't really do anything, "protect my privacy: on." Okay, that's not meaningful control. I don't know what that means, I don't know what that does.
So users should have meaningful control over what they're sharing, when they're sharing it, and with whom. For companies, good user privacy means trust. Users will trust your company's products when you demonstrate that you respect their privacy. It means risk reduction, because when you are properly managing user data, there's a lower chance of something bad happening to that user's data under your watch, and then you being on the hook for it. Obviously nothing can guarantee that no bad things will happen, but this is DEF CON; a big part of this is taking sensible steps to protect yourself. And finally, data usage. If you properly manage user data and protect users' privacy correctly, it'll actually support your company's ability to collect and use the data that you need in order to provide your products and features and to develop cool new things. So let's dive into that a little, maybe? Come on, okay. Tacked-on privacy: when you do privacy as an add-on, just slapping a toggle on at the very end, like, oh shit, we're launching in an hour, let's put a toggle on here, that doesn't provide meaningful choice or control for users. It can't, at that level, because you've already built everything; you've already designed how things are going to work. Slapping a toggle on top isn't enough at that late stage. You can't do anything meaningful with that toggle, not without going back and undoing a whole lot of other work. And then for companies, again, when you just treat privacy as an add-on, that last-minute toggle, your data's not going to be well managed. You're going to have that increased risk, you're going to have data going where you don't expect it; it's going to show up in weird places and be uncontrolled, and you can't really use it when you don't know where it is, when you don't have good control over it. And it means that users won't trust the company's products. Users can tell when privacy is just an afterthought.
Think about all those various toggles in all the social media you use. I'm not going to name names, but I'm pretty sure you can all think of one or two or ten examples where you go in and you see a toggle that's just "I don't want to share my data," and you're like, okay, that doesn't help me. Users know, and they won't trust you if they can tell that you're just tacking on privacy because somebody at the last minute told you to. So how do we build in privacy early? There are five stages where you want to build your privacy in: data collection, data storage, data access, feature design and the controls for the features, and then your policies and procedures, meaning your software development lifecycle, your internal policies, and your employee education. For data collection, there's one rule. One rule. This is the one thing that I want you to take away from this talk. If you take away one thing, this is it: only collect the data that you need. I have this argument with engineers all the time. Only collect the data that you need. Do not test me, I will hunt you down. The reason that I emphasize this one so strongly is because the more data you collect, the more data you have, the more data you have to manage, the more data you have to protect. Don't do that to yourself. If you don't need it, don't collect it. Limited collection: again, back to users' understanding, they know when you're collecting data that you don't need. The exaggerated example would be the notorious flashlight app that wants your location and your contacts and all of that. Users know that you don't need that for a flashlight app, and maybe the flashlight developer was thinking innocently of something like, well, maybe in the future I can develop a really cool feature off of this data, but in the present, it just looks like you're trying to scam the user. Don't do that; save yourself the headache. So once you've collected the data, now you're storing it.
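Before moving on to storage: to make "only collect the data that you need" concrete, here's a minimal sketch in Python of enforcing a collection allowlist at ingestion time, so data you haven't justified never enters your systems at all. All the feature names and fields here are hypothetical, and a real pipeline would enforce this server-side with real logging, not a print:

```python
# Hypothetical sketch: enforce a per-feature collection allowlist at ingestion
# time, so fields you haven't explicitly justified are dropped before storage.

ALLOWED_FIELDS = {
    "billing": {"name", "address", "phone", "card_token"},
    "flashlight": {"device_model"},  # no location, no contacts
}

def collect(feature, raw_event):
    """Keep only the fields this feature is approved to collect."""
    allowed = ALLOWED_FIELDS.get(feature, set())
    dropped = set(raw_event) - allowed
    if dropped:
        # Surface (but don't store) what you refused, so over-collection is visible.
        print(f"{feature}: refusing to collect {sorted(dropped)}")
    return {k: v for k, v in raw_event.items() if k in allowed}

event = {"device_model": "Pixel", "location": "36.1,-115.2", "contacts": "[...]"}
print(collect("flashlight", event))  # only device_model survives
```

The point of the sketch is that minimization is a default-deny decision made once, at the edge, rather than something every downstream consumer has to remember.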
So when you store your sensitive data, the obvious one is that it should be stored separately from your non-sensitive data. That's a given. And then in access-controlled storage systems, so not in a storage system that everybody in the company has access to. And despite everybody in the company thinking they need access to user data, they don't; we'll talk about that in a second. So: access-controlled storage systems. And then the least obvious one is in separate partitions or systems based on usage. This is where you want to know what data you're collecting and what you're using it for. A high-level example: your customer support line might need to see users' names, addresses, phone numbers, and credit card data in order to provide billing support, and your engineers need to access messages or account content in order to test or debug or fix things. Engineers shouldn't have access to credit card data, and customer support people shouldn't have access to messages. So look at your high-level use cases and partition out your data in a way that an engineer poking around in the data to try and solve a bug isn't accidentally going to come across credit card numbers that he doesn't need, and vice versa. Now, I mentioned we're going to talk more about data access. So, access to sensitive data. Again, some of this is probably obvious: it should be role-based, so people who need the data to do their jobs are the only ones who have access to it. If they don't need it, tell them they don't need it. And "because I wanted it" or "because I think it might help me" is not a reason. Only those who need the data to do their jobs should have access to it. And then, back to the partitioning thing, they should only have access to the data that they need. So you don't want your data split into just "here's all of my sensitive data, here's all of my non-sensitive data."
Everybody with access to sensitive data can access all of the sensitive data, regardless of what it is or what they need? Split that out further. Make sure that people can't inadvertently or maliciously access data that they have no need for. Obviously, log and monitor; again, that's kind of a given. And then access should be automatically updated based on personnel changes. So if you have somebody who moves from engineering up to management, they probably don't need an engineer's level of access to the data anymore. Make sure that that gets taken away, and done automatically. You don't want it to be that every five years somebody thinks, oh shit, we should maybe review our data access policies, and then you find out that this manager's been just casually accessing his ex's account for the last five years. Once you've got your backend systems set up, now you need to look at your development process. I'll talk a little bit more about the software development lifecycle in a second, but here we're going to dive into strictly designing your features and controls. There's a bunch of questions you want to ask yourself, starting with: how am I going to get users to give me the data? And this shouldn't be a malicious thing; again, only collect the data that you need. So think about how you're going to turn on the feature for users. "Auto opt-in" is not a thing. Do not use that phrase, I will fight you. I will maybe get the rolled-up newspaper and bop you on the nose. That is not a thing. Opt-out is a valid option, but call it opt-out. Again, users know when you're trying to deceive them, and saying something like "auto opt-in" will instantly lose you some user trust. And then it should go without saying: never sneakily turn on a privacy-sensitive feature. Again, users will find out, and they will not like you for it. So now, how do we get users to turn on this feature, if we're using opt-in and we're not sneakily turning things on for them?
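Before getting into feature design, the storage and access rules above, partitioned data, role-based access, logging every attempt, and access that follows personnel changes automatically, can be sketched in a few lines. Everything here is an illustrative toy (role names, partition names, a list standing in for an audit log), not a real authorization system:

```python
# Hypothetical sketch: role-based, partitioned access checks.
# Roles map to the data partitions they need, and nothing else.

ROLE_PARTITIONS = {
    "support": {"contact_info", "billing"},
    "engineer": {"messages", "account_content"},
}

ACCESS_LOG = []  # stand-in for a real audit log

def can_access(user, partition):
    """Allow access only if some current role of the user needs the partition."""
    allowed = any(partition in ROLE_PARTITIONS.get(r, set()) for r in user["roles"])
    ACCESS_LOG.append((user["id"], partition, allowed))  # log every attempt
    return allowed

def on_role_change(user, new_roles):
    """Personnel change: access follows the new role set automatically."""
    user["roles"] = set(new_roles)

alice = {"id": "alice", "roles": {"engineer"}}
print(can_access(alice, "messages"))  # True: engineers debug account content
print(can_access(alice, "billing"))   # False: no need for card data
on_role_change(alice, {"manager"})
print(can_access(alice, "messages"))  # False: access revoked with the role
```

The design point is that revocation is a side effect of the role change itself, so nobody has to remember to clean up access five years later.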
You need to think about how you're going to explain your feature. And this is where you want to loop in your publicity team, your marketing team: this is the thing that I want to do, this is the benefit that it's going to give users. Because if you're not giving users a benefit, then you probably have a problem. Even if the benefit is just as simple as "we're making the product run more smoothly for you," you need to be able to tell them what they are getting out of giving you this data. You need to sell them on your feature. They need to understand why they're giving you the data. And that goes back to that choice and control: users need to understand what you're going to use the data for and what they get out of it. You also need to consider your legally required notifications. So, GDPR mandates, any mandates your company is under, or any other laws in the countries that you're operating in. Talk to your lawyers about those; make sure that you have those in. But the primary thing that you want to do is sell the feature to your users, and then, secondarily, provide information on how to turn it off. Nobody likes to think about users turning off this really awesome new feature that they've created, but users do have valid reasons for doing that. So once they have revoked consent, or don't give it in the first place, you need to think about this: what happens then? Do you stop collecting the data? If not, why not? What do you do with it? How long do you keep data that you've already collected? Is it forever? Is it for a week after they've turned it off? There's not a wrong answer here, but you need to have an answer. In some cases, collected data might need to be kept indefinitely because of some legal thing or other; again, talk to your lawyers, they'll be able to tell you more about that. But there's not a wrong answer, you just need an answer here.
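The "you just need an answer" point can be made concrete by actually writing the answer down as data. Here's a minimal sketch of a declared retention policy applied after a user revokes consent; the record categories and windows are invented for illustration, and `None` stands in for "legally required to keep, per your lawyers":

```python
# Hypothetical sketch: a declared retention policy, applied when consent
# is revoked. The answer to "how long do we keep it?" lives in one place.
from datetime import datetime, timedelta

RETENTION = {
    "usage_metrics": timedelta(days=7),  # short grace period after opt-out
    "billing_records": None,             # None = must keep (legal requirement)
}

def purge_after_revocation(records, revoked_at, now):
    """Drop records whose retention window has passed since consent was revoked."""
    kept = []
    for kind, payload in records:
        window = RETENTION.get(kind)
        if window is None or now - revoked_at < window:
            kept.append((kind, payload))
    return kept

revoked = datetime(2024, 1, 1)
records = [("usage_metrics", {"clicks": 3}), ("billing_records", {"invoice": 42})]
# A month after revocation, only the legally mandated records remain.
print(purge_after_revocation(records, revoked, datetime(2024, 2, 1)))
```

Having the policy in one declarative table also means that when a user (or a regulator) asks what happens to their data, there is exactly one place to look for the answer.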
And a little bit more about why users might turn it off, because a lot of people don't think about this. I've talked to engineers who, when I give them a story about why a user might turn things off, go, huh, I never thought about that. So there was an example. When I was working at Google, Google was linking people's accounts. They were like, oh, we're going to be really helpful, because, you know, everybody set up this old Gmail from when they were in high school and then set up a different one for their professional life. We're going to start linking all these accounts and making it really easy for you to manage all your accounts. Well, there was a user who was trans. They were out in their personal life, and they were not out in their work life. Google linked those accounts, and suddenly this user was outed to all of their coworkers and professional contacts against their will. That user should have been given the option to not link their accounts before it was done. And again, this was nothing malicious. Google was genuinely trying to help. They thought that everybody wanted their accounts linked. They forgot that not everybody is one single unit of person in meatspace. People have multiple identities, and they deliberately maintain those multiple identities. So think about these kinds of use cases, where people might want or need to turn your feature off for their own safety, or even just personal preference, because that is also a valid reason. Think about that, and make sure that you have a plan for when users turn a feature off. Finally, on the feature design side, this is the other thing that I want you to remember from this talk. When you're designing a feature, ask yourself: does your feature pass the New York Times headline test? How many of you have heard of that already? Okay, some hands. So, the New York Times headline test. How many of you are familiar with how much J.
Jonah Jameson hates Spider-Man? You know about that? J. Jonah Jameson hates Spider-Man to the point where everything Spider-Man does, every positive article, he will twist. He will turn it around and make it about how evil Spider-Man is and how he's a menace and ought to be locked up. So now imagine that your feature is Spider-Man and J. Jonah Jameson is writing an article about it in the New York Times. How well will your feature hold up? Will you be embarrassed? Will it be a black mark on your company's record to have this New York Times article out there? Or will you be able to stand up and proudly say: yes, we are collecting this data. This is what we're doing with it. This is what it gives the user. Here's how we're protecting it. Because if you can say that, if you have a really good story and you can explain why users might want to give you this data, and then how you're protecting it, how you're respecting their choice and control, then you're good to go. But if you're going to be embarrassed if that comes out in the New York Times, maybe you should rethink what you're doing. User trust is really, really, really hard to earn and really, really, really easy to lose. Bad handling can kill the best-intentioned feature. So, how many of you remember Beacon? Facebook's Beacon. Not enough hands here. Okay. The Beacon thing was basically: Facebook would put little beacons on Amazon pages and other web stores, and then when you bought a thing, it would automatically post to your Facebook timeline, hey, you know, so-and-so just bought this thing. Well, all of a sudden, people buying surprise birthday presents, engagement rings, you know, maybe adult toys that they didn't really want their mom to know about, were getting outed. So Facebook had a good intention, but they didn't handle it well. They didn't provide choice. They didn't provide control. Good handling, on the other hand, can push a feature above and beyond.
How many of you remember when Google Now launched a while back? One of the things that struck me about the Google Now launch was that on its face, Google Now looks really, really invasive. They basically say: we want to scrape your email, we want to scrape everything on your phone, we want to scrape everything in your Google account, but we're going to provide you all these really, really, really useful services for it. And we're going to respect your privacy. Here are all of the controls that you'll have over what you're giving us. You can turn off Now at any time. You know, we won't retain your data, so on and so forth. And I was seeing, even on the really tinfoil-hat subreddits and forums, people saying, you know what? I normally wouldn't go for anything like this, but this is actually pretty cool. And Google Now is now so integrated, it's not even Now anymore, it's just the Google app, and it lives on all the Android phones. That's how well it was handled, even though on its surface it's really scary. So let's talk about policies and processes and education. We went into the questions you should ask yourself during development, but that's not the only thing you should be doing during the development stage of your software development lifecycle. You want to engage your program managers, your product managers, and your engineers very, very early. You do not, do not want to come in late and say, oh, you're launching in three days? Well, you need to do this, this, this, this, this, and this, and this, and this, and this. They're going to hate you and they're not going to do it. Or, you know, it's just going to be bad for everybody. So engage early, as early as you can. You also want to make friends with your lawyers. Every lawyer that I've ever met has been perfectly willing to be the bad guy when you need them, which is really, really helpful. But they're also your lawyers. They're your company's lawyers.
They want you to succeed. So if you get them in early, they'll be able to say, okay, well, this law says you can't do it this way, but you can do this other thing over here, and that'll get you the same thing that you're trying to do, and we won't be doing anything to get in trouble. So make friends with your lawyers. Recruit advocates as well. Advocates are people in any part of the company: engineers, PMs, support people, managers, anybody who believes in privacy. They might not be on your privacy team, but they will advocate for you. They will speak up in meetings, and they'll be the ones to call out those use cases, like: what happens if a person in a domestic violence shelter is using this? How do we protect that? So recruit your advocates. Make sure that you support them. Let them know that they're appreciated, that you're grateful for the work that they're doing, and that you trust them to represent privacy throughout the company. And then, I touched on this a little bit in previous slides, but in general, you want to engage all your other teams really early too. We talked about legal. We talked a little bit about publicity, in terms of how you're going to sell your feature and sell the data collection. You also want to engage customer support. I've seen a lot of times where a product team will push a new product, and then the customer support line gets flooded with calls, and the customer support people are like, we don't know, we got it at the same time you did. We have no idea what's going on. That's not a good experience for your users. They're not going to trust you when that happens. From the user's side it's like: what is going on here, if you can't even tell your customer support people how to support me in this creepy new feature? So engage your customer support.
Make sure they understand what the feature does. Make sure they can answer the common questions: How do I turn it off? What data does it collect? What happens to my data? So engage them early as well. Once you've developed your product, it's probably going to go to test. Hopefully it's going to go to test and not just directly to prod. Keep privacy in mind. Test users don't give up privacy just because they're test users. The level of privacy may be different, that's what testing is for, but this goes back to making sure your test users understand what they're giving up, what data they're giving up, and what's going to happen to that data. So have your data retention plan. And again, there's not a wrong answer for a data retention and deletion plan, because the answer might be "we can't legally delete it." But you need to know what it is, so that when people ask, you can tell them, and you can tell them why. And test your privacy controls. You know that person who is all about privacy and will toggle all the privacy settings? Find them, give it to them, and test it, and make sure that it works and passes muster. And just like privacy doesn't stop at test, it doesn't stop at launch either. You can't just chuck a product out into the wild and be like, hey, there it is, we're done, moving on to the next project. It doesn't work that way. You need to check in regularly with your product teams. Make sure that they're still respecting the privacy controls that you put in place, that they aren't changing things on the backend as new things come up. And then, as new things come up on your side, work with the product teams to integrate them. Maybe you've developed a new encryption scheme. Go back to all those product teams and say, hey, we need to upgrade your product to include this new privacy tool on our backend. A lot of this comes from internal policies and procedures.
Getting all of this through the development lifecycle, having people work with you on this: you need to make sure you have policies and procedures in place to support all of it. Getting C-suite support, your chiefs of staff, is super important. They're probably not going to do a whole lot; unless you're really lucky, they're not going to do a whole lot more than issue a blanket statement like "we support privacy at this company." But even that is enough to then get the mid-level managers to start issuing actual policies with teeth. Once you have a policy, you need to make it just the done thing, because you don't want to be constantly going back and fighting with people saying, "this is the policy, but I need to do this other thing." No. You don't want that. So make it the done thing. And a really sneaky and easy way to do this is to teach your new hires. Especially if you're a growing company and your new hires tend to outnumber your old guard, teach the new hires and just tell them that's how things are done. They don't know any better, so they'll just start doing it, and eventually it will propagate enough that it does become the done thing. And then finally: okay, we've got all these policies and procedures, that's great, and now we're getting kind of bogged down. Don't let that happen. Make sure that your procedures are as simple and as integrated as you can make them. Again, you don't want to be the team of no. You don't want to be a gate on teams getting to do cool stuff. So you want to make privacy as integrated and seamless as possible. And this goes back to what I was saying about getting other teams in early. Engage with product teams early. Get publicity and legal and customer support in early, because the earlier you do all of this, the more you are steering instead of saying no.
Because if you can steer, if they have two choices and one of them is privacy-good and one of them is privacy-bad, or even privacy-neutral, if you can catch them before they've made that choice, then you can just steer them and it isn't an issue. They're just using the correct privacy thing. If you get to them after they've made the choice and they've gone to the privacy-bad side, well, now you have to undo all of that work, bring them back, and send them out again. So get to them as early as you can. And finally, education is super, super important. Teach your employees the value of privacy. This isn't just having an orientation thing where you put up a slide that says "our company values user privacy, it's important, rah, rah." Help them understand why privacy is important. So again, the things we've talked about here: privacy makes for better products. Privacy improves user trust. It makes your company stronger, by giving you data that you can use and by having customers that respect you. And then also, especially, teach your non-privacy engineers how to engineer for privacy. This relates to it being the done thing. You want your engineers' first thought, when they think "I need to deal with PII," to be: okay, that means it goes into this system and it's handled in this way. You don't want them to do it in whatever other way they've learned. You want them to do it your way. So teach them that way. Teach them how to do it right from the very beginning, because the more they do, the less you have to do. But that does all sound like a lot of work: all of this backend stuff and all this education and all these policies and all these development questions. Okay, yes, it's a lot of work, but it's worth it. Good user privacy makes users happy. Happy users will buy more products. They will see more ads. Happy users are brand ambassadors. I'm going to pick on Uber here for a second, because they're just a really good example right now.
They had a lot of negative press around, among other things, their privacy practices. And a lot of people jumped ship to Lyft, because people would say, hey, you know, I've tried Lyft and they don't do that; you should come over to Lyft. You want to be that company. You want to be the company that people say: come to this company, they respect your privacy. Get those brand ambassadors. Show your users that you are the one that they can trust. And then, once you've done that, your own company will thrive. You're taking on less risk, which is always a good thing. Your user data is better, more useful to you; it's better managed, so that you can do more with it. And you are earning your users' trust. And again, users who trust you are users who use your products, who give you money. So do privacy right, earn your users' trust. Questions? Microphone. Yes, please. Yes, please.

Number one, I just wanted to say thank you for giving this talk. As an engineer, sometimes you get really gung-ho, driven to build a new feature which leverages all this data, but that doesn't really compare when you have the considerations of other human beings in mind. So I think you're probably making a difference. Say you're already harvesting the minimum amount of data to carry out your value proposition and all the things that you need to launch, and now you want to gather additional data, say for A/B testing. How do you suggest you inform your users that you need to gather that data?

This goes back to very careful messaging, and I strongly suggest engaging your marketing team, because that's literally their job. In some cases, your company is just known for doing all kinds of A/B testing. Google gets away with a lot; there was that whole thing about them testing 60 different shades of blue, or whatever it was, a few years back.
So if your company just has a reputation for doing all kinds of A/B testing, then users have the automatic understanding that this is what you're doing. If you don't have that, or you need to build it up, then this is where individual explanations come in. If you have two different sets of toggles, you might have two different sets of explanations for them, depending on what they do. Just treat it like any other privacy control: really tell them what's going on and what the data is being used for. It might be as simple as "we're trying to improve our products, and therefore this feature," because improving your products is a valid use case. Users understand that; they want better products. Everybody wants the products they use to be cooler and do more for them. So it doesn't have to be a long technical explanation about, okay, we're A/B testing this and that and we're going to run the test for this long or whatever. It's just: we're improving our products and features, and as a part of that, here's this new feature.

Thank you. Mike? Like him, I'd like to thank you for giving this talk. It's got some really good points, and I'd like to address the main point that you said we should take away from it, which is: don't take more than you need. I'd like to use one of the examples that you used in your talk, which is the flashlight app. Right now, when you download a flashlight app, there's very little transparency into what it's actually asking for. In fact, on Android, you actually have to scroll down and click on a button to see what it's actually asking for. A lot of times they don't even give you the option when you start the app to opt in or opt out. They just turn it on by default.
So with that in mind, how do you see a solution to that problem actually coming into place? Specifically with things like Android apps or iOS apps, or things like single sign-on with Facebook credentials or something like that, where they're asking for your birthday or whatever, with no real indication that they need this birthday to serve you content or whatever.

So: on-demand data requesting. iOS does this already. Instead of presenting you with a list of things the app wants right off the bat when you download it, it waits until you try to do a thing. So if the app wants access to your camera, it'll wait until you try to do whatever it is in the app that needs the camera, and then it'll say, this app needs permission to access the camera. That puts it in context for the user, because they're like, oh, I was trying to do this thing. Clearly that needs the camera. I understand why it needs the camera. All right, I'm going to grant permission. Or they might change their mind, which is a valid option, but they understand. So this goes back to having that meaningful choice and meaningful control, because they understand why you're asking for permission. Previous versions of Android do exactly what you say, where they just give you all of the things up front, and you accept them or you don't, and you're just kind of stuck with it. I think Android M is the one that started doing the on-demand permissions model. Older apps that haven't been updated unfortunately still run off the older permissions model, but that is a problem that Google is specifically aware of and working to fix, because, studies show, that is the worst way to get users to give you permissions: dumping them all up front. Basically, it's a reward thing.
So the user is trying to take an action, get an app, and you're putting a thing in front of them that is blocking their ability to do the thing. So of course they're going to say, yes, I don't care, whatever, give me the thing; I want the reward that I'm trying to get. So that is the worst time. They're not going to pay any attention to it, and that is where you get those scam apps that just take everything, because users don't see it, and the apps take advantage of that. So: an on-demand permission model, where you put the request for permission, the request for data, in the context of what the user is trying to do. Then they can understand what it's for, it makes sense to them, they know why you're asking, and it's a logical next step for them to say, I want to do the thing, I understand why it's asking for this permission, it's so that I can do the thing. Okay.

The mobile model for privacy is really good. Do you foresee, or see any hope in the future, that modern web browsers will also adopt that model, instead of just blocking everything or asking for everything up front?

I think so, if web browsers could ever agree on actual standards and actually implement them. There are early versions of this. Every once in a while you'll get asked, such-and-such website wants to use your location, block or allow, and I can be like, okay, I'm, for whatever reason, using maps inside Chrome, I don't know, that makes sense, okay, I guess you can use my location. Or it's a news site and it wants to know where I'm at so it can give me local news. Okay, that makes sense, I'll allow. So they're trying, I think, but they're hampered by the lack of standards, and because you have to get every website on the internet to comply with that as well, and we're a really long way from that. Any other questions? All right, thank you.