Before we get started I'd like to take a moment to warn you that the second half of my talk contains violence and graphic imagery of natural disasters. So before we begin I'd like to start with a PR video I saw recently. It's a little long, so bear with me. Need some sound on this? "They want to be on the cutting edge of the technology that's coming out. So on Snapchat there's like the Snapchat shows, there's Snapchat streaks. In the morning I just send the streaks picture, and when I get home from school I send a streaks picture, and now it's like a habit to do when I get on my phone. When I post things on social media there's definitely a period where I'm checking who saw it, how many views you get, what people are liking on my stories and what I should keep posting. There is a lot of pressure to present a version of yourself that's close to perfect. I almost never post a picture that hasn't been touched up in some way on Instagram. I always check it, and if it's not getting as many likes as my other pictures, I delete it. I have binge-watched, I think, multiple shows; I've watched about 10 to 12 episodes in a day. The 15 seconds between each episode definitely makes you feel like you have this urgent choice you have to make. Instead of having to wait for an episode to come out every week, Netflix as a whole makes it a lot easier to just consume so much media all at once, in a row. I'm still trying to decide, but it starts playing an episode and I'm like, oh, it won't hurt, and you find yourself, like, two hours later and you're still watching it and you have homework to get done. I watch a lot of YouTube, probably more than Netflix, and the suggested videos help me, like, subscribe to new people. I will watch videos for like an hour and a half without even having a plan to do that. There's just so much content that's just addicting."
This video of testimony from tech-addicted teenagers was shown at the launch of a new program, A New Agenda for Tech, in San Francisco earlier this year. The organization behind it is the Center for Humane Technology, co-founded by Tristan Harris, a former Google ethicist, and Aza Raskin, formerly of Mozilla, amongst others. In their words, addiction, social isolation, outrage, misinformation and political polarization are all part of one big interconnected system that they call "human downgrading," which they consider to be a technological existential threat to humanity through the exploitation of our weaknesses. They are members of a growing chorus of tech executives, design practitioners, academics, journalists, and even people here in this audience, all calling for an industry-wide rehabilitation. One of the ways in which they call for this is through the inclusion of design ethics in our work. Today I will argue that design ethics is not the answer. Instead, design ethics is a form of reductionism that allows designers to escape scrutiny of their work as it fails individuals and communities grappling with broader social, technological and political realities. Before we get into that, I'd like to talk just a little bit about how I got here. Thanks to an Apple developer education scholarship during my studies, my first years as a designer were spent during the early app gold rush. I was lucky enough to work in teams and as a freelancer, and we were consistently working on big products where the directives were to design trustworthy, frictionless interfaces. But my minor background in software engineering helped me understand how insecure these systems were, which was of course compounded by the lack of standardization of technologies such as HTTPS for secure connectivity. At the same time, my anxiety was heightened by constant international headlines of data breaches, surveillance and technology-enabled abuse.
If we were to design trustworthy interfaces, I thought, then maybe encryption and privacy should be at their core too. In 2014 I prototyped Signal with Open Whisper Systems through a GitHub collaboration with their then lead iOS developer, Frederic Jacobs. Eventually, after months of working together, I traveled to Hawaii in the project's final weeks to help ship the final version to the App Store. We worked diligently to prototype new standards of design, performance, security and clear interfaces at a time, around iOS 7, when messaging apps were either not encrypted or had deeply flawed interfaces. It's kind of interesting: every so often that work just pops up, like in this shot from House of Cards, and seeing it is always a bit of a "holy hell" moment. Shortly afterwards, and after almost two years as creative director at the early cryptocurrency startup CoinJar, I joined SpiderOak, a secure cloud storage company name-dropped by Edward Snowden himself. There, as chief design officer, I led deep collaboration between design and cryptography, developing new methods for trust and key verification informed by our own research and research conducted by Carnegie Mellon University. It is obvious in hindsight, though, that this wasn't the complete answer: although the need for security and privacy is unquestionable, the headlines of breaches and abuse continued in spite of the availability of these tools and an entire emerging ecosystem of security-related products. In 2017 I joined Tactical Tech in response to this, hoping to understand why these projects haven't completely succeeded. Last year I penned two essays based on my research. The first, on protocols and governance, examines how the design of decentralized infrastructure influences the hierarchies of the communities it spawns. Take BitTorrent: the decisions made by Bram Cohen in designing the system eventually led to the creation of What.CD, an enormous underground music service.
BitTorrent itself offered no communication features, so these communities had to organize themselves, and at one point What.CD, before it shut down, had the world's largest music collection. But while BitTorrent allowed a global clandestine music culture to flourish, the protocol's design also required its participants to be identifiable so files could be discovered and distributed. This destroyed thousands of lives, as multinational media corporations were able to leverage existing and new legislation to identify and target users not just for downloading content but for distributing it, leading to astonishing financial consequences. This is an example of the focus of my second essay, on weaponized design, a term that describes systems and interfaces that harm users while performing entirely within their design. Unlike dark UX patterns, where unethical practice employs cognitive tricks to influence people to act against their own interests, weaponized design is insidious and opaque. The BitTorrent example is one, but here are others. On the left, Facebook's Year in Review juxtaposes a dead child and a burning apartment against a festive end-of-year celebration; that's an algorithmic selection. On the right, a messaging client's suggested replies offer upbeat responses to deeply personal loss. Almost all popular definitions of design ethics consider it practice-based: a responsibility of an individual team to assess their motivations and what they choose to build. This is born from the individualism of our understanding of technology today. Intent here is considered key, and the scope of responsibility extends perhaps to a larger design team, or requires that a designer be obligated to call out unethical business decisions. Consent in technology is drawn from inclusive and progressive politics, and sets a foundation to advocate for an empowered individual.
In Building Consentful Tech, the wonderful designers Una Lee and Dann Toliver define consent as freely given, reversible or withdrawable at any time, informed and transparent. It is a framework for ethical data use policies: a collection of personal information readily given, without coercion or trickery, and in a way that can be destroyed if necessary. Such ideas of consent are even embedded in things like the GDPR, the European Union's data protection regulation. In 2017 the average smartphone user in Europe or North America had 80 apps installed on their device. Behind me you see a video of an emergent technology in which banking creditors can analyze social media networks to determine whether members of an emerging middle class are eligible for credit. Each of these 80 apps has its own important use cases, its own living terms of service, and interacts with broader networks such as these. Expecting a user to maintain an informed agreement with even one wholly unknowable system, such as a social media network, is impossible. Informed consent is not possible in a world saturated with platforms and stakeholders. Oops, sorry, going backwards, not forwards. Consent in the public-private square is especially non-negotiable and one-directional, and often leads to abuse of personal data and weaponized design. Some instances shroud the power structures and are playful in the way they execute and obtain consent, but others are more serious and immediate. Disabling location services is withdrawing consent, but sharing your location with someone you trust while on a date is a key safety technique for people who are vulnerable. Location services also help people with accessibility or mobility difficulties: to opt out is to reduce your quality of life or threaten your physical safety. Design ethics also often frames its criticism from a solutions-based perspective.
A problematic behavior is observed, identified as caused by technology, and then corrected via a solution. The criticism of early facial recognition technology stands as perhaps one of the most troubling examples of popular solutionism in the last five years. Early biometrics, loaded with homogeneous data sets, were unable to detect or respond to non-white features, and popular discourse rightfully seized upon the racism embedded in these technologies. But this was a mistake: a criticism that failed to recognize and respond to the authoritarian opportunities offered by biometric recognition. As a result, these technologies are normalized, for example Amazon Go, in which you can walk in and buy things without a cashier, through biometrics. I've got another video here, but I won't go through it completely. "An exploration of blockchain technology called Building Blocks is implemented in the Zaatari and Azraq refugee camps. We authorize the transactions using iris scanning, or iris biometric technology. The iris scan actually triggers the private key of each beneficiary. So we use that for cross-checking, but also for other services like the WFP supermarket. In this case they could go to one of the supermarkets in the camps and redeem their entitlement. You go and buy something when you need it, at the time that is suitable." What you see here, and this fascinates me, is that these two systems are really similar: two sociopolitical worlds converging, enabled, but importantly not caused, by technology. The key difference is consent, and criticism of the technology itself. It's hard not to imagine Amazon Go expanding into the east coast of the United States, into an area that is a future casualty of a climate change event such as a superstorm. What would be the end result of an Amazon Go store reconfigured to respond to crisis in a similar way to the World Food Programme example here?
I think the best example of this, of biometric facial recognition technology, is the protests in Hong Kong over the last six months, something that's captivated the world for a long time. What's interesting about this is the dystopian, or cyberpunk, aesthetics around this kind of work: the confronting images of anonymity in numbers against a total surveillance adversary. This is the maturity of biometrics in action, an authoritarian tool wielded by an existing, established power. Ethical facial recognition criticism is a total failure, and the only defense left is to physically tear it from the earth. In this video, a woman is hit and killed by a self-driving Uber. This horrific incident is described as a software malfunction, but this, and later discussions of self-driving vehicles being unable to recognize people, raises uncomfortable questions about machine learning and safety beyond the obvious. Cars are intensely political, landscape-altering technologies. They reshape every part of society, from their isolating interiors to landscape-altering roads to the global supply chain that maintains them. But autonomous cars are different. For the first time, cars must adapt to their environment through software decision-making. Perhaps the most famous example is the public research into the autonomous trolley problem, that is, machine-led decision-making about who to sacrifice in an emergency based on their demographics. The question catapulted the ethics of self-driving cars into the public imagination. But to date there is little available research that examines or explains how a car might behave in systemic catastrophe. We're on the precipice of a climate disaster, yet little public research exists that describes how a self-driving car could possibly navigate a situation of data chaos, or a world that is an upside-down chessboard.
Similar to how fly-by-wire technologies in aeroplanes disengage during emergencies, today it appears that human-driven cars offer the best chance of survival in a crisis. As this question looms, it is masked by discussions of software bugs in the machine's so-called moral compass. We are yet to even begin the discussion of the feasibility of this technology in an era of violent ecological collapse. The death of a cyclist is but a symptom of a larger core problem. In January, Spotify's design team launched a practice-focused framework for ethical design at an event in London. A blog post described it like this: whether it's because of data breaches, the alleged addictiveness of screens, or social media platforms getting caught up in political issues, trust in tech is at an all-time low. Companies and brands are forced to reconsider strategies and place users at the center of everything they do. In other words, companies are forced to design ethically, to bake ethical design into everything they do. Spotify's ethics framework reflects the zeitgeist of our time, but this is not the criticism leveled at Spotify itself. The service's algorithmic curation is a relentless economic pursuit to drive the cost of music labor to zero, and this plays a central role in destabilizing music culture. This accusation has been made in industry revenue lawsuits, but artists are speaking up too. In a piece for the Guardian, Matt Dryhurst, a collaborator of Holly Herndon and himself an artist and outspoken critic of music platforms, writes about this. A service such as Spotify explicitly de-prioritizes music provenance, decomposes the album, and threatens to displace criticism as a source of music discovery. You could be forgiven for wondering if the elimination of the very institutions that lent credibility to the concept of independence is a core design priority. Spotify has a world-class design and engineering team, and without them the company would fail.
If good design is possible without resorting to the tactics of a used-car salesman, and it is, then by Spotify's own standards they have practiced ethical design. But positioning design ethics as a practice-based framework liberates the team from the problems that their work enables, and it's hard not to be cynical and interpret this as a deflection of deep systemic problems. Mid-20th-century designers, I think, got this. Here's an example: the Think Small campaign, commissioned by Volkswagen at the end of the 1950s to reinvigorate the company in the ashes of the Third Reich. This car, sold to American audiences, reinvented not just car culture in the United States, creating a new category, but also advertising altogether. Today, design based on philosophies built in the first dot-com era advocates for frictionless, mindless design, such as Steve Krug's book Don't Make Me Think. This is a form of cultural collapse, and when combined with universities that prioritize quick and easy design degrees and training aimed at getting people started in code and design on platforms, what we end up with is a remarkable set of tools for generating new capital and products at the expense of interdisciplinary analysis of things at scale. Design ethics is a reaction to shocking dystopias amplified by technology. In presenting phone addiction, polarization, electioneering and manipulation as caused by technology, we elevate its role at the expense of other systemic factors. Phone addiction in kids can be caused by design and algorithmic editorial content, but it could also be a symptom of the precarity of family work and a lack of guidance from parents juggling time constraints and jobs. Livestreamers could be monetizing validation and gaming their fans online for likes and views, but this is also a symptom of an emerging, unstable middle-class celebrity.
Swatting gamers, that's the practice of trolling livestream gamers by sending police to their house based on false reports, is a terrifying and outrageous trolling spectacle, but perhaps it is also the most visceral imagery of social isolation and police militarization. Beyond the issues of YouTube autoplay distracting kids from their homework and boomer parents sharing conspiracy theories for likes on Facebook, the lived experiences and struggles of our users are at odds both with how infrastructure is conceived and built and with how much importance we as an industry place on the outcomes themselves. As we end this decade, we must begin to look at this with a new understanding: that our work is part of a greater whole, one that amplifies, and does not always cause, tremendous upheaval in our lives. Only then will we be able to respond with true meaning. Our emergent technologies, blockchain, artificial intelligence, mixed reality, all hold immense potential, but today we risk repeating the mistakes of previous generations, this time in a more precarious global environment with more powerful technologies. Design ethics? No thanks. Thank you very much.