Welcome to the second year of AppSec Village at DEF CON. Well, it says "pause for cheering" in my notes, but that's not going to work in Safe Mode, is it? For those who do not know me, I'm Erez Yalon, the village mayor. I'm glad you could join us today. After years of participating in DEF CON, first as an attendee and then as part of some other villages, last year my dream of creating an AppSec Village came true. Luckily, I had one person supporting my dream from the get-go. If you have not yet met my partner in crime, I encourage you to pop over to Discord and say hi to Liora, our AppSec Village Queen. She may not answer, as she is busy putting out fires, but it's worth a shot. Going virtual is full of challenges, but we did get a few things right. One of those was bringing on Tiffany as our Chief People Herder; you will probably see the impact of her work throughout the con. In terms of our leadership team, in the role of Duke of Content is Joe Christian. Together with the CFP committee, he has put together a great lineup for you over the next three days. That lineup begins today with our keynote, Maddie Stone. Maddie is a security researcher at Google Project Zero. She likes to figure out how things work, from chips to software, and then break them. Her current focus is navigating the jungle of zero-day exploits. Please join me in welcoming Maddie Stone to the virtual stage with "Who's Secure, Who's Not, and Who Makes That Choice."

Good morning, good afternoon, and thank you for joining me, since we're all in different time zones around the globe. I am so excited to be here, and I just want to say thank you to Erez and the whole AppSec Village team, one, for inviting me, and two, for putting on such a great event. And thank you for being here and joining me, or at least I hope some of you are here, since I am recording this alone in my home, and I'm going to try to remember to look at the webcam. So let's get into it. I'm a security researcher on Google Project Zero, where I mainly focus on zero-day vulnerabilities that are exploited in the wild. In other news, I also know every word to Hamilton, so I hope some of you picked up that the title of this talk comes from the song "Who Lives, Who Dies, Who Tells Your Story." This year I've wrapped up Black Hat, I'm speaking here at AppSec Village, and I think we're good to go.

But getting back to the title of this talk: Who's Secure, Who's Not, and Who Makes That Choice. Spoiler: who makes that choice? It's us. This talk has been heavy on my heart, and it really came from back in May, when, here in the United States, George Floyd was murdered. That came on the heels of Ahmaud Arbery and Breonna Taylor also being murdered. It's been a real reckoning with the racism that prevails here in the United States, or at least I hope it's a reckoning. For me, that prompted a lot of educating myself and figuring out what role I have played in racism, but also what role I will play in anti-racism, in working to change this and address systemic racism across society. It has also brought attention to other marginalizations and inequities across populations around the globe. So when looking at this, I was reading books, listening to Black voices and educators, donating money, going to protests, figuring out all these different ways to participate.
But I think it was really important to also look inward at how I might contribute to either helping or harming through my job, too. Especially with coronavirus, today everything is based on technology. To succeed, to participate in society, you have to be using technology. Schools are going online, work is going online, healthcare is going online. It's hard to buy devices nowadays that don't incorporate connections to the internet. So if that connection to the internet, and using technology, is not safe, secure, and private for all, then what kind of effect does that have? How are we contributing, given that we are the ones who make all of those decisions about securing apps and devices? I really started thinking about what our role is in the inequities in safe and secure access to technology, because we are those decision makers. And as much as I would love there to be, I had to come to terms with the fact that there is no neutral. There's only helping or harming. Even if you're just helping one group of people and you try to believe there's no negative impact on another, you're still broadening that gap if you're only helping one, and that contributes to more and more inequity. But the thing is, if we have the power to have caused and contributed to inequities, then we also have the power to address them, to create a more safe and secure society for all. And that can be exciting.

Through this talk, I'm going to show some examples of where I think we in the security community have not served everyone. But I'm not using those negative examples to shame us, or to say that in this example all these people are bad. No, it's about coming to terms with and taking ownership of the decisions we make as a community and an industry, because then we can see and address them, so we can do better in the future. And we can do it together.

So here's our first example, and it's topical. Come March, when lockdowns and shelter-in-place orders started in a lot of places around the world, a lot of everything we did shifted to Zoom. And Zoom was listening as everyone was crying out that we need end-to-end encryption: you at Zoom shouldn't be able to see the contents of the calls. So Zoom listened, got all excited, and announced to the press: we will begin providing end-to-end encryption, but it will only be for the paid tier, for paying customers. So only those who can pay and will pay are deserving of that end-to-end encryption protection. It even went so far as to say that the reason it's only for paying customers is that they want to be able to work with law enforcement to catch bad actors. That makes it even worse, because isn't the meaning of that: hey, we think that those who can't afford or won't pay for the paid version of Zoom are the only ones who could be bad actors and thus should be reported to law enforcement? We're then saying: you can be safe, you don't deserve to go to jail, and so on, if you're going to pay us; otherwise you don't deserve that protection. Thankfully this was reversed, thanks to outcry from the community as well as the ACLU, but the fact remains that we got to this point.
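To make the end-to-end encryption point concrete, here is a toy sketch in Python, using the `cryptography` package, of the core idea: the clients hold the key, so the service in the middle only ever relays ciphertext it cannot read. This is a deliberately simplified illustration, not how Zoom's actual E2EE design works; the shared Fernet key stands in for a real key-agreement protocol.

```python
# Toy illustration of end-to-end encryption: the server relays bytes
# it cannot read, because only the two clients hold the key.
# Assumes `pip install cryptography`; the shared key stands in for a
# real key-agreement protocol such as an authenticated Diffie-Hellman.
from cryptography.fernet import Fernet

# Key is established between the clients, never shared with the server.
client_key = Fernet.generate_key()
alice = Fernet(client_key)
bob = Fernet(client_key)

def server_relay(ciphertext: bytes) -> bytes:
    """The service in the middle: it can store, drop, or forward the
    message, but without client_key it cannot decrypt it."""
    return ciphertext

message = alice.encrypt(b"meeting contents the provider should not see")
received = bob.decrypt(server_relay(message))
print(received.decode())  # only the endpoints recover the plaintext
```

The point of the dispute above is exactly this property: once it exists, there is no technical reason it has to be limited to paying customers.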
Now let's go back in time a little further. In October 2018 and March 2019, two Boeing 737 MAX airplanes crashed. One was from Lion Air, a budget Indonesian airline, and the other was Ethiopian Airlines, based in Ethiopia. There was a lot of press about how the cause was the MCAS system, a system put into these new jets that pilots hadn't been trained on, which used technology to make a lot of decisions for the pilot, such as when the nose should go down or up. But then it came out in a New York Times article that there were two optional extras that airlines could buy for their airplanes, and these two extras could potentially have helped prevent the crashes, because they would alert the pilots when that MCAS system was acting up and the sensors maybe shouldn't be trusted. Those two features were the angle-of-attack indicator and the angle-of-attack disagree light, but Boeing charged extra for them.

Another example is single sign-on. Across our community's best practices, I think we all generally say that SSO is a critical part of securing enterprises. Single sign-on, or SSO, allows users to sign into many different tools using the same account. The reason this is considered a best practice is that the company buying these different tools, which needs its employees to log in, only has to track privileges and access in one account system; it prevents the password reuse that is likely if employees have to come up with all these different accounts and passwords; and it is much easier to handle when an employee leaves, because you don't have to remember to remove all of these different accesses. You only have to delete them in one place. The website sso.tax tracks the difference in cost that software-as-a-service companies charge if the customer buying their software wants single sign-on included. It's often two to three times more if you want to use single sign-on, but it can sometimes be as much as a 500% increase in price, as in the case of Airtable, and you can see the others here. Just think about that for a second. This is something we in the information security community consider a core thing enterprises should be doing to protect themselves, and thereby their users. We as an industry also complain when companies don't follow our best practices, but we're adding these hurdles to their ability to follow them. We're saying: actually, you should only do it if you can pay extra. Otherwise you don't need this core security feature.

Another example where paying got you more privacy, safety, or security: from 2013 to 2016, AT&T, an internet provider here in the United States, would charge you $30 to $60 more a month if you did not consent to their Internet Preferences program. The Internet Preferences program tracked all of the web browsing you did, how long you spent on each website, and so on. And since it operated at the ISP level, it ignored cookie preferences, Do Not Track, private browsing, etc. So if you consented to AT&T gathering all this data from you, you got to pay $30 to $60 less. Over a year, that's $720, or as this article puts it, $744 extra per year. I don't know a lot of people who would say, sure, I can pay $744 a year for some privacy, but the ones who can, and will, and understand the tradeoff get to be safe, secure, and private. Everyone else deserves to have their data read and tracked? Finally, after multiple years of this program and mounting bad press, AT&T decided to cancel it, but it still ran for a while, from 2013 to 2016.
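As a minimal sketch of why centralized deprovisioning makes SSO a best practice (a toy model of my own, not any particular vendor's API): with per-app accounts, offboarding has to touch every tool and can miss one; with SSO, disabling the one identity-provider account closes every door at once.

```python
# Toy model of the offboarding argument for SSO. All names are
# hypothetical; real deployments would use SAML/OIDC, not dicts.

# Without SSO: one account per tool, each deprovisioned separately.
per_app_accounts = {
    "wiki": {"maddie": "active"},
    "ci":   {"maddie": "active"},
    "crm":  {"maddie": "active"},
}

def offboard_without_sso(user: str) -> None:
    # Easy to miss a tool that was set up ad hoc and never inventoried.
    for accounts in per_app_accounts.values():
        if user in accounts:
            accounts[user] = "disabled"

# With SSO: every tool defers to one identity provider (IdP).
idp_accounts = {"maddie": "active"}

def can_log_in(user: str, tool: str) -> bool:
    # Each tool asks the IdP; there is no per-tool password to reuse
    # or forget to revoke.
    return idp_accounts.get(user) == "active"

idp_accounts["maddie"] = "disabled"   # one change, all access revoked
print(can_log_in("maddie", "wiki"))   # False everywhere, immediately
```

That single point of revocation is the core security property that the SSO tax prices out of reach for smaller customers.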
And so this is where I think we all really need to ask, both individually and as an industry: what do our actions say? Do they match our beliefs about infosec, security, and privacy? Because what I hear from others, and what I believe, is not that security is only for some. I continue to hear that security and privacy are a requirement, that they shouldn't be a monetized feature, because while we all value them, they're the baseline, the foundation. You shouldn't have a minimum viable product that doesn't have security and privacy built in. It's not an extra you can add in version 2, or something only for premium users. If we know there is a way to keep users safe, then they deserve it. If that's our mission and our beliefs and our values, though, do our actions line up with that? Because based on these many anecdotes, it looks like we're actually saying only the rich deserve to be secure. I honestly don't think that's what any of us in the community believe; we wouldn't get up on a stage at a conference and say only the rich deserve privacy and everyone else should have all of their internet behavior tracked. But do our actions, the products and apps we secure and release, show that?

Security, privacy, and safety are our minimum viable product. Everything has to have them built in to be a product, to be an app, at launch; otherwise we don't have anything fit to be released to prod. I know this analogy has been beaten over and over again, but we need to treat security, privacy, and safety the same way cars treat seat belts. Yes, it costs more to add a seat belt to a car, but that is the minimum accepted level. You can't buy a car right now that does not have seat belts. That's how we should treat security, too.

At this point, sometimes people say: oh, we should all just be apolitical. If that straw-man argument comes up, I would just ask you to consider that all we're talking about is addressing the inequities that can occur for different groups of people when we're trying to secure products. Our job is to be security engineers, or other professionals in the information security community, so talking about those impacts, I don't think that's political. If you do, then sure, infosec is inherently political. These conversations about having a specific group we're trying to protect have been around since the beginning of the industry and the community, because we've always had certain users we're hoping to secure or provide more privacy for. So let's look at some of that, and how it's framed that we've always had a user in mind.

First, we can go back to 2002, when Hacktivismo, a group spun off from the Cult of the Dead Cow, launched the Six/Four System. The Six/Four System is a network proxy built to evade censorship, and the protocol was named Six/Four in memory of the massacre at Tiananmen Square, which occurred on June 4th. One of the main reasons it was created, at least from what I've read, was specifically to help users living in countries with heavy censorship of the internet, like China and Iran. Fast forward to today, and here's another example of having a very clear user base in mind of who you're trying to protect; it's even in the advertisement at the top of the page. A few years ago, Google launched the Advanced Protection Program.
It's specifically to protect and secure those at risk of targeted attacks: journalists, political campaign leaders, activists, and so on.

And here's another example, from the early days of our industry until now: generally, as a community, we have come together to fight against DRM and fight for consumers. We're not trying to protect the companies that are building DRM into their devices, whether that's a refrigerator, Windows 95, video games, or John Deere tractors. Hackers are generally trying to find bypasses in order to protect and fight for the consumers and customers who have bought or used these products.

All of those examples of having a clear user base in mind, of knowing who we're trying to protect, come down finally to threat modeling. Threat modeling is a very basic but integral, and probably required, part of our jobs when we're looking to give advice or suggestions, or to build new security features into what we're deploying. In a threat model, you're trying to decide: what are all the threats to the thing I'm trying to protect? To be able to create that threat model, you have to know really well the exact group you're trying to protect. A threat model shouldn't be "I'm trying to protect everyone on the globe"; that would be a terrible threat model, because we are all different human beings with different concerns and different threats, and that's just the way it is. So I hope you're not taking from this talk that Maddie is saying our threat models have to cover every single human. The point is that we first identify, like we already do, exactly who the person is that we have in mind, who we're creating these security features for and are thus trying to protect. And the key is that once we make that choice of who is protected, we also need to acknowledge and take ownership that we are also making a choice of who is not protected, because when we're building security into new products and features for a certain group of people, another is inherently excluded. For example, Google Advanced Protection: that's not bad, because not everyone needs all of that protection, and some would actually find it very frustrating to use. But it is a decision, and an ownership: it's for the people who choose to opt in, who view themselves as at risk of targeted attacks or fall into one of these categories, like working on a political campaign. We do need to address the fact that our ability to make those choices is inherently a power and a privilege, because others don't necessarily get to choose whether they're included in the groups we're trying to protect. And when some people are protected and secure and others aren't, that gap is going to continue to grow. That is how systemic inequality works.

So let's look at some side effects of security decisions made with a clear group in mind that you're trying to protect, but without thinking through all the implications for others. First: Rite Aid used facial recognition, in secret, across hundreds of its stores. Rite Aid deployed this, they said in this article, to protect themselves: we need to protect ourselves from theft, and we need to protect our staff and our customers from violence. Those are their security concerns. But if that's who is, quote-unquote, being protected, it's Rite Aid; they're actually sacrificing all of their customers' privacy in order to have that security for themselves. And if you read this article, it even says in the headline that this wasn't deployed equally: it was predominantly used in low-income, non-white neighborhoods.
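To make the threat-modeling point concrete, here is a minimal sketch of what it looks like to force a threat model to name not only who it protects but who it excludes or puts at risk. The structure and field names are hypothetical, and the example values are invented, loosely modeled on the store surveillance case just described.

```python
# A minimal, hypothetical threat-model record. The point is the two
# required "who" fields: one for who is protected, one for who is not.
from dataclasses import dataclass, field

@dataclass
class ThreatModel:
    asset: str                       # what we are securing
    protected_group: list[str]       # who this security work serves
    threats: list[str]               # what we defend them against
    excluded_or_at_risk: list[str]   # who is NOT served, or is harmed
    mitigations_considered: list[str] = field(default_factory=list)

# Invented example values, loosely modeled on the store case above.
store_surveillance = ThreatModel(
    asset="retail store inventory and staff safety",
    protected_group=["the retailer", "store staff"],
    threats=["theft", "violence in stores"],
    excluded_or_at_risk=[
        "customers whose faces are scanned without consent",
        "neighborhoods where the system is disproportionately deployed",
    ],
    mitigations_considered=["staffing changes", "physical security"],
)

# Naming the harmed group is what makes it possible to brainstorm
# alternatives instead of silently accepting the trade-off.
print(store_surveillance.excluded_or_at_risk)
```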
And so it furthers a gap: oh, we're just worried about theft and violence in non-white, low-income neighborhoods, so we don't need to worry about it elsewhere, and thus we're sacrificing these people's privacy. But I think if you took a step back, and let's say we're the security engineers tasked with this problem, how do we help protect Rite Aid from theft, how do we help protect them from violence occurring in the stores, I think we could come up with a lot of other solutions that could protect them without sacrificing the customers' privacy in the process.

Another example is parental control or child protection apps. I have a lot of feelings about these apps, but I'm not going to get into that now, because I'm not a parent. The whole premise is that these are built for the customer, which is the parent or adult guardian, to install on their child's mobile device in order to try and keep the child safe. But if the parent is the one who is the customer, the app is actually built to address what the parent thinks the kid's threat model is, not the actual threat model of whoever is using the device. And that can have unintended consequences, or maybe intended, but we'll go with unintended consequences, for whoever is using the device with this application installed. For example, here it says the app gives you the ability to listen to the surroundings of whoever is using it. What? That's turning on the microphone and listening. While the folks building and securing this app may intend for it to be used only in consensual relationships, where the kid gets the phone and the parent says, you can have this phone, but I'm going to be monitoring these behaviors in order for you to use it, and hopefully that's what they're building for, that doesn't mean that's the only way this technology will be used. What's the difference, at the base technology level, between this and spyware or stalkerware? Instead, you're depending on the users to only use it in the way you intended when building it. As a woman in my 20s, I have had friends who have had stalkerware installed on their phones, and it looks like, and brands itself as, a child protection app. And did the people building this, in their security work and the threat model of their users, also consider the reality that sometimes the adult parent or guardian doesn't have the child's best interests at heart? These apps have also reportedly been used in child trafficking. So it compromises and sacrifices for a single use case. If we say, I don't love this idea, but it's only going to be used by parents whose kid consents to having it on their phone, because otherwise they wouldn't get the phone, and we build only for that use case, but there's nothing making the app usable only in that use case, then we're still causing harm. There are no guarantees that the code we build will be used the way we intend.

And the next headline kind of says it all: United States government-funded phones come pre-installed with unremovable malware. There is a program, or there was, in the United States where, if you were low-income and didn't have the means to get yourself a phone, the US government would provide you with an Android-based smartphone. Cheaper phones, not the flagships. And lo and behold, they come pre-installed with unremovable malware. This is one of those things where I hope the government wasn't intending to install malware on each of these devices.
I hope they were trying to provide phone and technology access, but because cheaper devices have begun subsidizing themselves by installing whatever people will pay them to install, we have now harmed the privacy and security of users who were already in a tough spot in society and just trying to live their lives.

Next, Strava. Even on their website, the Strava app says they build for athletes. So that's their user base in mind; that's who they're going to protect when securing the application and the data that comes from it. But when you only think about securing for athletes, that's only one part of who someone is, so we really need to take into account the intersectionality of all of us as humans. As a female athlete, I get very concerned about my location data being tracked by anyone else. Military personnel want to be athletes too, and want the features of the Strava application, but I bet it wasn't considered that, although the data on the public heat maps is aggregated, that aggregation meant you could end up seeing where military bases were located abroad, as all these military personnel, who are also athletes, went for their jogs at the end of the day.

And here's another example of just how far our specific security choices can go. Pixel phones, like many other devices, have a support period during which security updates are guaranteed: in this case, Pixel phones get three years from the release date, or at least 18 months from the time the phone is last sold on the Google Store. As a community, I think one of the most basic things we say to users is that you should only use devices that get security updates, or else you're asking for trouble. But we are also sometimes shortening the periods for which updates are available for phones, computers, and devices. The first obvious impact of the choice to do three years of security updates is on those who, again, have lower incomes or can't buy new phones quickly and regularly, or even just communities, sometimes older adults, who aren't used to the idea that they need a new phone every three years and actually prefer the older ones because they know how to use them and find them more intuitive. Those are some of the obvious impacts on users we might not be building for or choosing security for. But these decisions go much further: if we're getting new phones every three years or faster, that generates a lot of e-waste. With climate change being such a big thing, the more e-waste we generate, the more negative impact we have on the environment. And the environment and climate change don't affect all of us equally. There's already been a lot of research showing that communities already in tougher positions, of lower socioeconomic status, generally communities of color, are the ones hit hardest by environmental damage and climate change. It's those of us predominantly in the US and Western Europe who are buying and disposing of these phones, and so our security decisions end up impacting people in other places who might have had nothing to do with, and never interacted with, these phones or devices in the first place.
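To pin down the support-window rule described a moment ago, here's a small sketch of the arithmetic. This is my reading of the policy as stated in this talk; the function and the dates are illustrative, not Google's actual implementation.

```python
# Sketch of the update-guarantee rule as described above: security
# updates until three years after release, or 18 months after the
# device was last sold on the store, whichever is later.
from datetime import date
from dateutil.relativedelta import relativedelta  # pip install python-dateutil

def end_of_security_updates(released: date, last_sold: date) -> date:
    return max(released + relativedelta(years=3),
               last_sold + relativedelta(months=18))

# Hypothetical dates, for illustration only.
released = date(2019, 10, 24)
last_sold = date(2021, 8, 1)
print(end_of_security_updates(released, last_sold))  # 2023-02-01

# The talk's point: whoever still carries the phone after this date,
# often the users least able to upgrade regularly, is left unprotected.
```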
So those were a lot of examples, and fairly negative ones. But you might be saying to me, and I've said it to myself as I try to work through these things: I'm just one person. I just work on securing a single app with only a thousand users. I'm just the manager of a little team. I'm not an exec; I'm not the director of security. I'm just a documentation writer. All those different things. You might say, I'm just making little choices; they don't have the big impact of all the examples you've listed here. But the issue is that each of our choices becomes a pattern, becomes the norm, the status quo for how we expect products to be developed and released. You might think, oh, I'm just making this one little sacrifice, a compromise to get this product out the door, and our app doesn't impact all that many people, so it's fine. But that choice is a precedent, a permission for others to make the same one, to make those same compromises and sacrifices for a different group of people. And unfortunately, because this technology is so built into our ability to thrive, each of those little choices becomes a systemic inequity, and they end up having huge societal ramifications.

For example, and I've been guilty of this myself and am working on it too, I have to acknowledge I've said: yeah, we accept that lower-priced devices are less secure, because of the business model; they can't do everything. We accept that there is more risk to women from any app that chooses to track location. We accept that biometric security technologies aren't as effective on non-white faces. But each of these acceptances means we are creating a system that accepts that not everyone deserves the same level of safety, security, and privacy, and we are compounding the previously and already existing systemic injustices and inequalities through this technology. So instead of shrinking that gap, as technology could, we continue to widen it. And it's easier, a lot of the time, to believe I'm just making this one decision, one little sacrifice for one product, and there are so many other products and other people working in this space. But we lean on each other, and it grows and grows and shows others what's acceptable, and that's how we end up with these extreme gaps in how people are able to access technology privately and securely.

One example of how our choices in security and technology end up having huge ramifications: in both the United States and Europe, many are now using artificial intelligence algorithms to decide what punishment a person should receive after breaking the law. These are things like how much bail they should have, or whether they should have bail at all, what their sentence is, whether they are eligible for parole. This has been pushed off to algorithms, but we as humans write those algorithms. So these algorithms, written by humans, are deciding whether or not we believe a person can be rehabilitated to become a contributing member of society, and whether or not that person will commit another crime. And since we're the ones coding all of these algorithms, they're going to reflect the team that writes them, and the security team that reviews and assesses them. For example, I'm a white woman from the US working at a tech company. If I'm writing the algorithm, I know I'm innocent, so I'm probably going to assume that people like me are innocent too and deserve shorter sentences. And if our whole team looks like me, or represents only a few groups of people, then we're probably going to write our algorithm to give shorter sentences to the people who are like us, while others are harmed.
I use another slide emphasizing the same thing because this impact is just so huge. This is life or death, having to go to prison or not, and it causes generational trauma that runs through families for years and years. So we really need to make sure we are looking at our choices and understanding each of their consequences. Those consequences can be good or bad, but our choices create systemic outcomes.

Another example of a systemic issue: sex workers are generally discriminated against, and this article about Airbnb is just another example of that. Airbnb developed an algorithm, actually one they used and then acquired outright, to try to determine how trustworthy a potential guest might be. Okay, let's put our Airbnb security engineer hat on, as we've done for some other companies. Our goal is to make sure a host's home and property are safe. We probably won't keep hosts coming back if there's a high likelihood they'll be stolen from or their house destroyed, things like that. So we're going to use an algorithm to decide whether guests are trustworthy and thus should be allowed to rent. But along the way, while building this algorithm, we decide that being a sex worker means being untrustworthy. It doesn't matter if the sex worker is not using the rental for sex work and just wants to go on with their life. It does not matter if we're in a country where sex work is legal. We have decided, through this algorithm, that sex work equals untrustworthy, and thus they don't have access to units, and there can be random cancellations that put them at further harm, because they've already arrived in the city and no longer have lodging. But that's the decision we've made in our algorithm. And the thing is, that continues to grant permission for other companies to say: oh yeah, they did it, so we can do it too. Sex work, it's a category; we don't trust them; we don't allow them. And we see this over and over again, how sex workers are discriminated against.

And in this case we're coming back to the e-waste anecdote we discussed earlier regarding security updates and their support periods. Based on this graph, we can see that North America, Western Europe, Australia, Japan, and South Korea are generally the biggest producers of e-waste. But they don't keep the e-waste to themselves, or ourselves; we send it off to other countries, and we pay them to accept it. Those other countries, West Africa (I always forget my east and west) and Southeast Asia are two of the places that predominantly receive it, and they're also at huge risk from climate change. So we're saying: hey, now y'all have to figure out how to deal with all of the toxic metals and parts inside these devices, and you're also going to be the ones hurt by climate change.

Lastly, to hopefully get you on board in one more way: we're at a precipice, and we might be a few years past it, where everything is becoming reliant on technology. We can't say people will generally be safe and secure because they can choose not to use it, or choose to buy only the expensive devices, or else be fine without any device at all. That's not where we are or where we're going. In the United States there's a lot of talk about moving to mobile driver's licenses, and there have been efforts to move to mobile passports, so it will take extra effort and cost extra money to go and get a physical card, if that's even allowed in the future.
Even with coronavirus, just in the last couple of months, a lot of restaurants and retailers have decided to stop accepting cash because of perceived virus transmission. This hurts communities that either don't have access to banking or have a lot of reasons not to trust that these systems are safe for them, communities that have been wholly reliant on cash. Again, we're leaving them behind and saying: oh, you don't need to participate in the economy, because we've decided to go cashless. Another example is airlines. In this case it's Ryanair: if you don't have an online boarding pass and you need the airline to print one for you at the airport, the way things were done six or seven years ago, and the way it used to be the norm, it will now cost you an additional 20 pounds. So if you don't have technology, you're further penalized and charged for it.

So what do we do? I told you I didn't want to just present problems with no solutions, and I'm guessing you, like me, are probably in the spot of: Maddie, this problem area is so huge, how do we even begin to confront it? The first thing I want to do is remind us that we are in this powerful place of privilege, and that the choice is up to us to address it. If we don't, no one else can fix this, because we are the ones in charge of information security. But this is also an exciting gift, even if it might feel like a lot of responsibility, because it means we don't have to sit here hopeless. We actually have the control, personally, to start fighting for this new future, one where, regardless of geolocation, race, gender, sexual orientation, ability, religion, and all the other aspects that make us beautiful, diverse humans, everyone has safe and secure access to the internet. Oh, what a beautiful world that would be. So let's build it. We set the standard that security and privacy are a fundamental right, a foundational requirement: nothing goes out that doesn't have security and privacy built in for all. And this ultimately helps us too, if you need the selfish argument, because if we ensure that security and privacy are a fundamental right for everyone, then if we're ever on the hook and on that precipice, we know they're guaranteed for us too. So I hope that 5, 10, 15, 20, 50, however many years from now, we'll have the option to look back and say: huh, I really helped there, I pushed for this, and now we're living in a world where everyone has security and privacy. Or we can look back and say: I had a chance to help change it, and I didn't.

So how do we do this? One: as I've said before, security is a requirement, not a feature. It's the baseline for even having a minimum viable product. This is for the startups, a lot of whom currently say: we get something out the door, and then security is added in v2. Nope, we don't allow that anymore. It's not just for those who pay or can wait; it's for everyone. Two: we are already explicit a lot of the time, and if we're not, we should be, about the threat models of who we're trying to protect and why we're suggesting the security features and privacy constraints we are. So let's not only be explicit about who is helped; let's be explicit about who is harmed, who could be adversely impacted by the threat model we've created. The reasoning is not that everyone should be protected and helped in the same way for every product. It's that by naming problems, by saying, oh, this group of people would be harmed by this choice, we can at least start to brainstorm, come up with solutions, and bring our creativity.
It's definitely impossible to come up with solutions if you've never even named the problem. Three: you might be saying to me, there are so many different groups and identities; how am I supposed to keep all of them in my head and understand the impacts on everyone? And the cool thing is, I don't have to, as long as we create teams that represent all of these different views and identities, and those teams are inclusive, such that everyone can speak up and represent the experiences they bring to the table. You don't have to know how something will affect someone who is LGBT in Vietnam; if you have someone on the team who is LGBT, they can help represent that. And it's even better if you hire for intersectionality: if you hire an LGBT person from Vietnam, then you've definitely got that viewpoint represented. For a lot of people in our industry, "hire for diversity" means people who look like me, but the only different thing I bring to the table is generally a view about gender that isn't always represented here. If you hire a Black woman or an Indigenous woman, she brings to the table both the effects of race and the effects of being a woman. So it's just a great investment to hire for intersectionality. Four: write code and build products for the worst possible case they could be used for. And yes, I totally spelled "write" wrong here, but we're this far in, so it's going to stay. We may build technologies and secure things for a specific cause that we believe is ethically good, but we don't always know how things are going to be used in the future. So from the get-go, we need to build our products and our security requirements for the worst case imaginable, because, again, technology can be used to help or to hurt. Let's build in our values and ethics from the beginning. Lastly: if you're a little overwhelmed, welcome to the club. Yes, this is hard, but as Glennon Doyle says, we can do hard things. We won't be perfect; I'm far from perfect. An example of this is that this talk has been very US-centric, because that's where my background is, so it would be really valuable for cons and the community as a whole to also have similar talks coming from different perspectives that I can't, and don't know how to, accurately represent. The key thing is that we don't need to reinvent the wheel. We just need to hire and pay the people who are already doing this work and asking us to listen to the effects this technology can have on their communities. If we do those things together, that can make it a lot less hard. I don't know how many people are watching this talk, but I know there are probably thousands and thousands attending DEF CON as a whole. So if every single one of us decided, okay, from here on out I'm going to start speaking up and taking action to ask, can we do better at helping more people and not allowing others to be harmed, then that becomes the norm. Because one of the things that makes this hard is being alone, feeling like you're the only one worried about this and challenging the status quo. If we all do this together, then the norm is to speak up; the norm is to demand the fundamental right to security and privacy. We're all in this together, and that's how we can hopefully do it and make it less scary and less hard.
So finally, to close this all up: as I've said before, there's no neutral. I really wish there were, because sometimes we're all just tired and exhausted, but there's not. So we need to own that, and own that we have the power and the privilege, and that all of these users are relying on us. Let's use this beautiful and fun work we get to do to fight for security, privacy, and safety as a right for all. That's the world I want to live in. With that, thank you. And yeah, I'm staring at my webcam now. Okay, bye.