Namaste and good morning. A big welcome to day three of DrupalCon. I'm so excited to welcome you all. It's been an exciting DrupalCon and I'm so happy to be here. I'd like to kick off today by introducing myself. I'm Shyamala Rajaram, one of the community-elected board members. I would like to share a small story with you: my journey with Drupal. I started with Drupal building a portal for a newspaper organization, architecting and building a solution in Drupal 5. I moved on to evangelizing Drupal, working with local community camps and with the global community in the mobile initiative. Today I run a company of my own, where I have the opportunity to take Drupal to many enterprises. It's been an amazing journey, and I feel really proud that today I represent this wonderful community as a community-elected board member of the Drupal Association. I'd also like to welcome Ryan, who's the community-elected representative this year. A big welcome to Ryan.

Thanks, Shyamala, and again, welcome to day three. I myself am a product of, and a believer in, the Drupal community. Ever since 2007, when I first went to my own DrupalCon and had that experience of finding other like-minded contributors from around the world, and got to learn and experience so many new things with people from all over, I've been hooked. So I'm very appreciative of the association, which has been keeping this ball rolling forward: not just letting it drive itself, but really steering it in a direction that invites new contributors in, helps new people catch the Drupal bug (hopefully not the Drupal flu), and continue their own Drupal careers and Drupal stories as well. And as Shyamala said, we're both part of the Drupal Association as your community-elected representatives.
About 1,200 of you decided to vote this year, so we invite more of you to participate in the next election as we represent your voice on the board of directors for the association, which is in charge of Drupal.org and DrupalCon specifically, to make sure that these things help the Drupal community thrive and stay healthy.

I would like at this point to give a special thanks to all the association staff for the wonderful work they've done. I think it's simply amazing that this team works from the day we finish one con to the day we start and end the next. They're just an awesome team, and I think they deserve a big round of applause. Fortunately, like everybody else, they're able to get plenty of sleep when they're at the conference, so they become well rested for the next event, right?

We just have a few housekeeping slides to go through before we invite our keynote presenting sponsor to the stage, so bear with me as we go through the list. Don't forget there are nice fancy hashtags to use to share your DrupalCon stories, images, pictures, and posts, and if you have any questions for our keynote speaker this morning, please use the DC's AINF hashtag, and make them very difficult and tricky; that would be much appreciated. For the Wi-Fi: if you've been here for two days and have not managed to connect yet, please find the details on the screen. Try to connect just one device, and if you're playing World of Warcraft here in the keynote presentation, perhaps use your 4G or LTE. We do have a code of conduct for this event that helps everybody understand how to best work together, collaborate, and keep the event safe, healthy, and welcoming for everybody. Please refer to it if you have any questions, and contact the folks at the bottom of the slide if you have any concerns or issues during the event today. As far as lost and found goes: somebody has lost a passport, and we have found it.
If you believe it is yours, find it at the help desk. There will be a few questions, and perhaps a minimal fee, associated with recovering it. I can't remember, Megan, how much more did we need to pay for coffee today? Speaking of coffee: if you would like some, after the keynote please go into the exhibit hall, get to know some of our great sponsors if you haven't had a chance yet, and enjoy some coffee on the house until about 11 a.m.

We do have some schedule things to be aware of aside from the sessions that take place in the halls upstairs. There is a community speaker stage where we'll have at least one presentation today, and there are more continuing community discussions to address concerns, governance issues, and safety issues within the community. Feel free to come and make your voice heard; Whitney Hess has been moderating those to great effect, and I think a lot of people have been helped by that thus far. To find those, head up to the second level, not the third level, and if you need help finding them, just stop by the help desk and they'll point you in the right direction.

As far as schedule changes and reminders are concerned: we've decided, apparently, that the Klingon does not matter. So instead you have 30 extra minutes to hear from Dr. Nikki Stevens about how to best support your friends in the Drupal community. We invite you to attend that instead, and then join us back here for the closing session, where we'll do the big reveal of where we'll all be going to get lots of rest and relaxation next year. After you attend a session, including the keynotes, please take some time to evaluate the speaker. Let them know how they can improve, where they may have lost you, and what they did well; that's just a great tool for speakers like us to understand how to better connect with the audience in future speaking opportunities. Everyone is welcome to attend our contribution sprints.
They will be continuing through tomorrow. You can find the details here and, obviously, in the program, or come to the help desk again, and just come and get involved. That's how my Drupal story started, and I'm sure that's how it started for many of you here as well. I've talked to so many first-time attendees that it would be a shame not to come and learn from all of the Drupal experts who are here while you have the opportunity. There are some social events, including happy hour and the fancy Drupal trivia night sponsored by Palantir. We invite you to come and show off your Drupal trivia chops and create a very clever team name; I think maybe there's a prize for the cleverest. And join Shyamala's team, though she only wants you on her team if you're going to help her win. Thanks to the sponsors: without their generous participation, this event would not be possible at all. Thanks to each of the sponsors, the supporters, the signature partners, and the platinum supporting partners; without their contributions, this con wouldn't be possible. Now we would like to welcome to the stage our keynote presenting sponsor: Josh Koenig from Pantheon, its co-founder, head of product, and fellow head of hair.

Thank you, Ryan, for that wonderful introduction, and thank you, Shyamala, for sharing your story. I'm here to introduce our keynote presenter, who I'm very excited to see. My shirt says "I make the Internet," which is a slogan I invented a few years ago, and I thought I would tell the backstory behind it, and how I feel about it, as my way of introducing the keynote. So, I got involved in Drupal fairly early because I was trying to take over the government through legal means. We were running an election campaign that was ultimately unsuccessful but burned very brightly during its rise, and that, I think, for myself and my colleagues in that campaign, was a formative experience.
It really changes the way you look at the world forever when you see that it is possible, in a very rapid and yet very healthy way, to bring people together around a cause. I don't think anybody who went through that campaign experience went back to their same old life; they were forever changed by that crucible. And I had a similarly mind-exploding experience at the first Drupal community event I went to, because I could see once again that people could come together and work in collaboration to create things very powerfully and very rapidly, in a way that was a lot of fun to be a part of. I remember standing up at the end of that event in Vancouver and literally jumping up and down and saying: this is so cool, I'm so happy to be here. And I found my way then to earning a living through Drupal. We started a consultancy, Chapter Three, your Drupal experts in San Francisco.

As for the slogan: I don't claim sole authorship. There's a lot of co-authorship of it, and other people have used it too, and I give them credit as well. But we first used it at Chapter Three because we opened a satellite office, and the city we opened it in had a regulation that you had to have a plaque outside your office, because it was in a historic area of town. We weren't sure what we should put on the plaque, and we thought: well, what if we just put "We make the Internet" on it and stick that out there in brass? And it looked great. It was very cool, and everyone liked coming in to work and seeing it every day when they walked in the door. Now, professionally, I've taken the lessons learned with my colleagues from Chapter Three and started another company called Pantheon, where we're now a platform to help all of you succeed with your Drupal projects: to make yourselves successful, to make your clients successful.
And we are still making the internet, but now we're kind of like the guy behind the guy in making the internet. And I still believe it's a very virtuous occupation. I've spent the decade-plus since that original fire was lit in my mind and my heart evangelizing for the potential of the internet to be a net positive for humanity, to connect the world, going on this very deeply held gut belief that this will make us all better and help us manage our problems better. And I have to admit that, frankly, over the past couple of years I've started to feel that belief be tested in many ways. I've seen the medium and the technology that I feel partly responsible for, that I thought would help everything work out and everyone be connected, used in other ways: to manipulate and to destroy. That's something I wrestle with, and I think many of us here wrestle with it as well. And because of that, I'm extremely proud to be on stage to introduce our keynote speaker. I discovered Zeynep's writing about a year and a half ago, and it helped me think about many of these things in a much more mature and complete way, and it rekindled a lot of the hope that I felt was somewhat in doubt. So without further ado: I'm very excited to hear the speech, very excited to read her book, Twitter and Tear Gas. How could you get a better title than that? Welcome, Zeynep. Please blow all of our minds.

Thank you so much for that kind introduction. I'm thrilled to be here. Part of writing a book is that a lot of the things I wanted to say are in the book, so I want to talk a little bit about some of the things I've been worried about and thinking about over the past year. My book, Twitter and Tear Gas (it got its title from a Twitter poll; that's how bad I am at titles) is about social movements and how they ebb and flow, and how we seem to have so many tools empowering us, and yet authoritarianism seems to be on the rise all around us.
So that's the book; I was puzzling about that. And now I'm thinking more about how the technologies we're building are compatible with, and aiding, the slide into authoritarianism, and also about why the social movements we have aren't able to counter all this, since we are indeed empowered.

So, a little bit about me. My name is Zeynep. It really is Tufekci; yes, the K and the C, and the C comes after the K. I started out as a programmer. I was a kid really into math and science. I loved physics, I loved math, and I thought, I am going to be a physicist when I grow up, because that's what a lot of kids think when they're young. And I grew up in Turkey in the shadow of the 1980 military coup, so I grew up in a very censored environment where there was one TV station, and we didn't really get to see much on that. As I was making my way through thinking about my own future as a little kid, imagining myself as a physicist, I found out, as happens to a lot of kids who are into math and science, about the atom bomb and nuclear weapons and nuclear war. And I thought: whoa, this isn't good. I could become a physicist, I could maybe be good at it, but then would I be doing something really morally questionable? Would I have these huge questions to answer? And because of family circumstances, I also needed a job as soon as I could, and I was a practical kid that way too. So I thought: instead of physics, let me pick a profession that I enjoy, that's got something to do with math and science, and that doesn't have ethical issues. So I picked computers. I think I just told everybody that my predictive ability is not that good. Right now my friends in physics are at CERN debating who's going to get the Nobel Prize for the Higgs boson research, while my computer scientist friends are building killer robots and manipulative algorithms, all of that. So here we are.
But I did pick a topic with a lot of ethical implications. I started working as a programmer pretty early on, and one of the first companies I worked for was IBM, which had this amazing global intranet. Back then there wasn't even internet in Turkey, right? So all of a sudden I could just go on a forum and email people around the world. I was working on this mainframe program that had to be localized for another machine, this multi-platform, complex thing, and I could find the person who had written the original program, and they'd be like, oh, here you go. Someone in Japan would answer me. I thought: this is amazing. This is going to change the world. This is great. So I felt really empowered and hopeful with the idea, because, again, Turkey was a very censored environment, and then the internet came to Turkey, and then I came to the United States, and I've been studying this, mostly very hopeful. I'm an optimist by personality, too, but I'm getting more and more worried about this transition we're at, one you see a lot in the history of technology, where the early technology starts with the rebels, the pirates: it's great, it's amazing, it's going to change the world. We think all these great things, and it has that potential; very often it has that potential. You know, radio was a two-way thing where people around the world talked to each other early on, and then World War I came and it became a tool of war. So I started thinking: whereabouts are we? Are we at that inflection point? Increasingly, I think we are. I don't think it's too late, but I think we have a lot of things we should worry about. So that's what I'm going to talk about in my slides.

One day this will work. There is never a presentation where the tech just works; you can go to CERN, to the string theory people, the smartest people, and a clicker won't work or something. Let's see. Shall I just... there we go. Okay, thank you so much. This is the fastest fix.
It usually takes much longer; thank you. So I want to talk a little bit about how we normalize and socialize ourselves in emergencies, because I think it's important: I think the tech world is not panicking enough. My first experience with how we normalize emergencies comes from a pretty awful one. This is the 1999 earthquake in Turkey, in my childhood hometown. By coincidence I was in Istanbul during the earthquake, and I felt it; it was a very strong one. I rushed to the area with some rescue teams. Around the world, when you have an earthquake, rescue teams from around the world rush to it, which makes sense, right? No one country can have all the rescue teams you need; earthquakes are rare events, and they're not simultaneous events. So every country has a few of those teams, and when something happens, people all rush in. So far, so good.

Unfortunately, Turkey was really underprepared for this quake. This is sort of the iconic picture of it, and you can see it was very capricious: one building would be standing, the next one would be down. I rushed back with a team from the U.S. that had just arrived. It was really chaotic, because the country wasn't prepared even though it sits on a fault line. So the country was in shock, and the infrastructure wasn't there. For example, teams needed transportation, because when earthquake teams arrive they bring their earthquake-specific equipment but expect transportation, fuel, and light to be provided locally. Why would you carry all that from around the world? Makes sense. But even transportation wasn't arranged. So what happened is, a friend of mine just flagged down a city bus, a regular bus. She said, I've got an earthquake team that doesn't have transportation, and the bus driver was like, oh, okay, and just told all the passengers: get out, please, thank you. He took the team in and drove three hours to the area.
And about a week later, I saw him sleeping on a little bench, all stubble, his bus still transporting people. He acted; he made a reasoned choice in an emergency. I hope he never got punished for it. One of the things we needed was lights, so I spent a lot of time that week acquiring lights and fuel. And by acquiring, I mean breaking into houses that were still standing. I teamed up with a policeman because he seemed to know how to break into houses; I don't know the story of how that happened, but whatever, we broke into houses and we stole, especially those torchiere halogen lights, which were great. And we were in a rush, because with an earthquake you have this golden period where every hour counts. If you can pull people out from under the rubble before they perish, that's amazing, that's great, but every hour they're facing danger; they may die. So it was really interesting to step back and think: whoa, I'm breaking into houses, with a policeman, and nobody's batting an eye. Why should they, right? If anything, they're just cheering us on and saying, oh, just be quick, because there are still aftershocks.

Three days into it, especially among people who hadn't lost immediate family, people had become... not completely, they were still in trauma, let me say this correctly: people had become normalized into routines. Three days, four days. It was just really eye-opening for me to see. They'd go to the rubble, pull up a chair, sit down, and have a cup of tea. People were doing all these things: laundry, dishes, kids playing, chatting.

That experience got me to look into how people react in emergencies, in crises, in moments where there's danger. Your image of it might be this: people panicking, and there's this crisis, and let's just try to calm people down. If you actually talk to people who work in crisis response, they tell you again and again: we do not panic correctly. We don't panic in time.
We panic very late, and then all at once. And this is true for acute emergencies where there's a crisis moment, like an earthquake, and also for things that are happening slowly over a few years, five years, ten years. People don't panic in time and don't organize in time. Why am I talking about this to a bunch of technology people? Because I think we are not panicking properly at this historic inflection point, which has many scary downsides. But we are not there yet; there's a lot to be done. And this is why I'm giving these talks that are more like: how do we deal with this emergency situation as it is? We kind of let things get to this point; how do we deal with it? So this is what I don't want us to do.

So, the authoritarian slide. Let's talk about this. It happens slowly, then fast. People normalize and socially reassure each other as it happens: he won't win the nomination; he won't win the election; he won't do what he says; Turkey will be fine; Le Pen won't win; the EU will stand. We tell each other these things won't happen, and then they happen. You can read history and see how that goes. And the tech world is increasingly turning from rebellious pirates into compliant CEOs. It is going to happen; history says it's going to happen. I have a lot of friends in the tech world, as a former programmer, as somebody in this world. I trust a lot of them; I trust their values. And they keep coming to me and saying, trust me. And I'm like: you shouldn't trust yourself, because you're not in charge. There are big, powerful forces at work here. Those companies may be run by well-meaning people right now, but they will comply. They will be coerced. They will be purchased. They will be made to. World War I came, and the Navy said: radio is ours, everybody off. And that was that, right? The idea that the workers in this space tending to be more progressive, more liberatory is some sort of guarantee?
History tells us it's illusory. And yet I hear this again and again from people who tell me: I work here, I trust the people, we have internal dialogue, et cetera. I don't want to rest on that. All the signs are here. We've talked about them: recent elections, polarization, elite failure. And tech is involved in every bit of this. I talk about misinformation, polarization, the loss of news. But it's also the economy, and we'll talk a little bit about this. The technology economy is lopsided and has created a lot of tensions. A lot of you in this room are on the beneficiary side of this, more or less, but outside the tech world it is a very scary moment. "What will my kids do for a living?" is a very scary question for people. And the tech world wants to talk about anything but the current thing: how technology may be compatible with, and aiding, this kind of authoritarian slide. Now, everything is multi-causal, so you go to tech people and they say: it's polarization, it's this. That is true; anything you want to talk about is multi-causal. But this is a big part of it, right? Imagine if, in the early 20th century, we had talked about cars as an ecology. What will they do? We might have imagined different kinds of cities, thought about the social consequences of suburbanization and maybe not gone that way, thought about climate change, which was talked about even in the 19th century, and talked about other things, like public transportation. What if we hadn't had cars go down the path they did? We'd probably be in a much better world in lots of ways.

So, the warnings I have. The tech elite: right now a lot of people tell me, I'm here, we're really good, they can't replace us. That's going to change. You're going to have machine learning off the shelf; it's going to get easier and easier and more commodified. Toolmakers' ideals do not rule their tools, especially with something like computing.
And most of history tells us we'll either succumb, or be coerced, or become compliant. It just happens, historically. And the moonshots tell you a lot, right? The tech world is very focused on otherworldly things: colonizing Mars, living forever, uploading our minds. These are fun things to chat about, maybe at 2 a.m. in a dorm room, but they're not really the moonshots. We have complicated problems here.

So let's talk about what I mean when I say technology is compatible with authoritarianism. In the rest of the talk, I'm going to make the case for how we've gotten to this particular point, and for what the features of technology are that are compatible with, or already aiding, our slide into emergent authoritarianism around the world.

One: surveillance is baked into everything we're doing. It's just baked into everything. Ads: when was the last time you bought something from an ad? Not very often. Ads on the internet are not worth a lot unless you deeply profile someone, and/or silently manipulate or nudge them. That means anything that's ad-supported is necessarily driven toward this, and even when companies are not driven toward this, they are very, very tempted to sell the data they have, because that's where everything is going. We've built an economy in the technology world that is surveillance by nature, and it's going to get worse with sensors and IoT. We're in such a surveillance world right now that you can turn off your phone and do everything right, and there are still so many cameras around, and very good face recognition, and the people you're with will post about you. It is not possible at the moment to escape the surveillance economy of the internet and still be a participant in the civic sphere. I work with refugees in North Carolina, and almost everything happens on Facebook, or WhatsApp, or a few things like that.
If I want to be part of that work, I can't avoid the surveillance platforms; I'm not going to get people to communicate with me outside of those. Now, the business model of these ad-driven companies is increasingly selling our attention. This is really significant, because for most of human history the problem wasn't too much information; the problem was too little information. It's a little like food, right? For most of human history, we didn't have enough to eat. So if your great-great-great-great-grandparents liked eating, and knew how to eat and gain weight really well, that worked out great as a survival skill. But right now we live in a world where there's too much food and not enough moving around; we're sedentary. So we face the question of how to deal with an environment that's completely different from the one we evolved in for so long. We have the same problem now with attention. We're still the people who do the purchasing, we're still the people who do the voting, the day is still 24 hours, and getting our attention has become this crucial bottleneck, this gatekeeping thing, for so many things. So we don't just have surveillance; we have structures that are getting better and better at capturing our attention in all sorts of efficacious ways. Social media, games, apps, content, politics, anything you want to talk about: getting our attention not by doing something random, but by profiling us, manipulating us, and understanding us in ways that are asymmetric, that we don't understand but they do, has become important. I'm just laying out some of the dynamics here; they're all going to come together. So we have increasingly smart surveillance persuasion architectures: architectures that aim at persuading us to do something. At the moment, it's clicking on an ad. That seems like a waste; we're just clicking on an ad.
It's kind of a waste of our energy. But increasingly, it is going to be persuading us to support something, to think something, to imagine something. I'll give you two examples. A few years ago, in 2012, I wrote an op-ed about the big data practices of the Obama campaign. A lot of people said this was one of the smartest campaigns: it used digital data and micro-targeting, identified and scored everybody, knew who was persuadable and who was mobilizable, did a lot of A/B testing, all the top-of-the-line stuff. And I thought: you know what, it doesn't matter whether you support this candidate or not, these things are dangerous for democracy. Not because persuading people is bad, but because it's done in an environment of information asymmetry, where they've got all this data about you, and then they can talk to you privately, without it being public. If I target a Facebook ad just at you, it's just at you; there's no public counter to it. If you saw it on TV, so did the other side, and maybe they could try to reach you and counter it, but here you can't do that.

And I started giving examples. For example, we know that when people are fearful, they tend to vote for authoritarians. When they're scared, they tend to vote for strongmen. And look at how much content about terrorism occupies both our social media and our mass media. Now, I'm from Turkey, right? This is a real problem. For the Middle East, this is a horrible problem; there have been so many acts of terrorism, so many mass casualties. In the Western world, it's not even a rounding error next to a weekend's traffic fatalities. I'm not saying I'm not horrified; every act of terrorism is horrific. But the disproportionate amount of attention it gets is a way that people get fearful.
And we know that as people get fearful, a lot of them vote for authoritarians. But that's not true for everyone; some people get pissed off at being manipulated and being scared. And in the past, you kind of had to do it to everyone. So in 2012, I said: what if you could find just the people who could, psychologically, personality-wise, be motivated by fear to vote for an authoritarian, and target them silently, so that you can't even counter it? Back then, a lot of my friends who worked on the Obama team and in other political campaigns said: no, this won't happen, we're just persuading people, this is fine. And I said: look, I'm not talking about your candidate, I'm talking about the long-term health of our democracy if public communication becomes private, tailored, potential manipulation.

So fast-forward to 2016. There's already talk that the Trump campaign's data team says they did this: they targeted people silently on Facebook, so you didn't see it; only the people they targeted saw it. They targeted young Black men, especially in places like Philadelphia, and some Haitians in Florida, and a few other key districts, to scare them about Hillary Clinton specifically. They weren't trying to persuade; they were just trying to demobilize. Now, they may be exaggerating how good they were at this, but that's where things are. I can't vouch for how much they did it, although I have some independent confirmation: I've heard from young Black men in Philly that they did get targeted like this. And we have an election that was decided by less than 100,000 votes. How many people were demobilized in ways we didn't see publicly? What were they told? We don't know. Only Facebook knows, right? It's just four years later, and you're already seeing this idea. This Trump data team also tried to do personality analysis. Just using Facebook likes, you can analyze people's Big Five personality traits: openness, extraversion, all of that.
So they already tried to use that. Again, somebody might say, well, they weren't that good at it, but I'm telling you, it's getting better and better. This is where things are going. So: smart surveillance persuasion architectures, control of the whole environment, experimental A/B testing, social-science-aware and social-science-driven. These sound like great tools, and they are not only great tools; they're also tools for manipulating the public. And since they're so centralized and so non-public, we don't know how much more widespread this is going to get.

I'll give you an example from yesterday that I was ranting about a lot on Twitter. Did you see that Amazon has this Echo Look? It's a little camera that you're supposed to put in your bedroom, and it's going to use machine learning to tell you which of your outfits looks better. Like you don't have enough judgment in your life, right? You need some machine learning algorithm. So there are going to be all these issues, because it's going to work from its training data; I will just await the first scandal about biases about race, biases about weight, and all of that. That's almost predictable. But there's something else happening. If you upload a picture of yourself to Amazon every day, current machine learning algorithms, and daily pictures are just the kind of data they work best on, can likely identify the onset of something like depression months before any clinical sign. If there's a picture of you every day, your smile, the subtle things, a machine learning algorithm can pick up on this. They can already do this. Even with just your social media data, which isn't very rich (especially on Twitter, it's just short posts), a friend of mine can predict the onset of depression with high probability, months before any clinical symptoms. She's thinking about postpartum depression intervention, right? She's thinking great things.
I'm thinking about the ad agency copy I read, about advertisers who were openly pondering how best to sell makeup to women. And I'm quoting: they said, we know it works best, they've tested it, when women feel fat, lonely, or depressed. So, when they're ready for their beauty intervention. I mean, I love reading trade magazines, where people are kind of honest. It's really the best place. So Amazon's going to know when you're feeling somewhat depressed, or likely will. They're going to know whether you're losing or gaining weight. They're going to know how your posture is. They're going to know a lot of things. If it's video, they can probably analyze your vital signs: you can measure how much blood flows to your face, you can measure heartbeat. It's kind of amazing with high enough resolution. Where will this data go? What will Amazon do with it? And who else will eventually, maybe, get access to it? You know, Amazon's people will say, we're committed to privacy, and we'll do this and we'll do that. And I'm like, once you develop a tool, you don't get to control everything that will be done with it. So how long before these persuasion architectures we're building are used for more than selling us beauty interventions or whatever else they want to sell us? How long before they're also working on our minds, on politics? They're already here. So this algorithmic attention manipulation, which is engagement and page-view driven, has a lot of consequences. If you go on YouTube (see, I watch a lot of stuff on YouTube for work, and I keep opening new Gmail accounts or going Incognito because it pollutes my own YouTube recommendations), this is something I've noticed, and I've talked to lots of people who noticed the same thing. If you watch something about vegetarianism, YouTube says, would you like to watch something about veganism? Right? Not good enough. If you watch Trump, it's like, would you like to watch some white supremacist videos?
If you watch somewhat radical but not violent Islamist content, somebody who's kind of a dissident in some way, you get suggested ISIS videos. It's constantly pushing you to the edge of wherever you are, and it's doing this algorithmically. So I kept pondering, why is it doing this? Because this isn't YouTube people sitting down and saying, let's push people. You know, if you watch something about the Democrats, you get left-wing conspiracy suggestions. Why is it doing this? I think this is what's happening. We know from social science research that if you're in a polarized moment and you feel like you took the red pill, that your eyes have been opened, that you got some deep truth, you go down that rabbit hole. So if I can get you obsessed with, or somewhat more interested in, a more extreme version of whatever you're interested in, if I can pull you to the edge, you're probably going to spend a lot of time clicking on video after video. So our algorithmic attention manipulation architectures do two things. They highlight things that polarize, because that drives engagement, or they highlight things that are really saccharine, syrupy, spirit-of-humans-soaring kind of stuff that's unrealistic. Also cat videos, but that's fine. No objections. Puppies are fine too. So what you have here is this realization by the algorithms that if we get a little obsessive about something, we go down that rabbit hole, and that's good for engagement and page views. Or if we feel really warm and sweet about something, we go, oh, that's good. So we're having this weird thing where my Facebook feed is either people quarreling about crazy stuff or really sweet stuff, right? I'm like, is there nothing in the middle? Can we have some of the mundane? Well, there is, of course, but the mundane stuff isn't as engaging.
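The rabbit-hole dynamic described here can be sketched as a toy greedy recommender. This is my own illustrative model, not YouTube's actual system: I simply assume that predicted engagement peaks for items slightly more extreme than what the user last watched, and the sketch shows that a greedy ranker then ratchets the user toward the edge.

```python
# Toy model of an engagement-maximizing recommender. Assumption (mine,
# for illustration only): expected watch time is highest for items a bit
# more extreme than the user's current position; far jumps and steps
# backward engage less. All numbers are invented.

def expected_engagement(user_pos, item_pos):
    # Peaks when the item sits slightly beyond the user's position.
    return -abs(item_pos - (user_pos + 0.1))

def recommend(user_pos, catalog):
    # Greedy: pick whatever maximizes predicted engagement right now.
    return max(catalog, key=lambda item: expected_engagement(user_pos, item))

# Content on a 0.0 (mild) to 1.0 (extreme) scale, in 0.05 steps.
catalog = [round(x * 0.05, 2) for x in range(21)]

user = 0.2           # starts with fairly mild content
history = [user]
for _ in range(10):
    user = recommend(user, catalog)  # user watches the recommendation
    history.append(user)

print(history)  # each step is at least as extreme as the last
```

No one at the platform has to intend radicalization for this to happen; the drift falls out of greedily optimizing a single engagement metric.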
And if this is what's driving my engagement and this is what I see, you have this swing, and it is not healthy for our public life or our personal lives to be on this constant swing. It pulls to the edges, right? Now, all of this encourages filter bubbles and polarization. Not because we're not already prone to it, all right? The counterargument I hear is, well, everything encourages filter bubbles, because that's kind of how humans are. Well, yes, it is our tendency. You know what that's a little bit like? We have a sweet tooth for a very good reason. If your ancestors hadn't liked sugar and salt, you probably wouldn't be here, right? It was a very good thing to like sugar and salt when you had to hunt and gather and had no fridges. So we have a tendency to like sugar and salt. That doesn't mean we should serve breakfast, lunch, and dinner composed only of sugary and salty food, right? We have a tendency, and these persuasion architectures and algorithmic attention manipulation are feeding our sweet tooth, and that's not good for us. At the same time, we're also dismantling structures of accountability through our disruptive tech. Right now, I think something like 89% of all ad money goes to Facebook and Google. So we've got all the ad money going to these algorithmic, engagement-driven, page-view-driven, ad-driven, profiling surveillance systems. Now, I use them both, right? I've written about how good they are for many things. So I'm not unaware of all the good things that come out of having this connectedness. But having connectedness be driven by algorithms that prioritize profiling, surveillance, and serving ads is not the only way to get connected. We could have many other ways to use our technology to connect to one another in deep ways, but yet here we are, all the money's going there, and everything from local newspapers to national newspapers is being hollowed out.
And this is really crucial, especially at the local level, because if you don't have local newspapers (I'm watching this happen all over the country), local corruption starts going unchecked, and then it starts filtering upwards from there. Local corruption goes unchecked, you get corrupt local politicians, then you get state-level corruption, and then you get the national thing. And I see all this in the tech world: let's spend $10 million to create a research institute. Look, I'm a professor; research grants sound great to me. But here, let me say: if you have that money, take it, divide it among however many local newspapers there are, and just give it to them. Right now, we don't really need a lot of research. We need funding for the work itself, and we know what it is. Again, it sounds great to me, as a professional researcher, to do more research, but I have little to add. The problem is that the ad money that, by historic accident, used to finance information surrounded by journalistic ethics now funds misinformation on social platforms. I don't mean to say newspapers were perfect. Okay, I've spent a lot of time criticizing how horrible they are in so many ways. But they were a crucial part of a liberal democracy. We needed to make them better. Instead, we knocked out whatever was good about them. We also have a lot of technology that's explicitly knocking out structures of accountability. Now, I think it's a very good idea to be able to call a taxi from your smartphone. I have zero problem with the idea of an Uber of sorts. And if you want to complain about taxis and what a monopoly they were, and say good riddance to them, you know, I'm with you. But the problem is that the current disruptive model, Uber and the rest of them, the way they're structured, is not just creating the convenience.
It is trying to escape any kind of accountability and oversight we have over these things. There are worse things than having taxis that are crappy, and one of them is having a system that escapes the institutional accountability structures we built. For example, you have some duties to your employees. And if you can just make them all contractors and pretend they're not your employees, then you can pretend you don't have those duties. That's a lack of accountability structures. And the way Facebook says, oh, we're not a media company, it's just the users: you constantly push accountability away from you. That is not healthy. This is even true for things like Bitcoin, which is innovative and disruptive and interesting. But you know what? There's a reason we use the money we use: it's tied to a nation state. Yes, imperfect; yes, not great. You know, I've been a movement person my whole life. I understand the objections to the nation state. But you know what's worse than a nation state? A bunch of people with zero accountability saying, let's just do this. This lack of accountability is a core problem, even if the unaccountable people at the beginning are kind of good people, and they're friends, and we're like, oh, it's okay, it's in their hands, they'll do good. That is not how history works. So: the labor realities of the new economy are not compatible with a middle-class-supported democracy. This is a crucial, huge political problem. It's not a problem that can be solved within technology, but it is a crucial problem. Facebook right now employs, what, 10 or 20 thousand people, maybe a couple thousand of them engineers. It's very top-heavy. General Motors at its height, when it was the dominant company, employed maybe half a million people in pretty good jobs, with a supply chain with a lot of good jobs.
Right now you have a couple thousand engineers at a company like Facebook making really good money, really smart people, a lot of good people, and the rest is basically minimum wage. Think of Amazon, right? It's a bunch of great engineers who can create something like Echo Look to give you fashion judgment, plus warehouse jobs. This is not an economic structure that can support a mass-based democracy. It just won't. People will vote for whoever's going to burn things down, or promises to burn things down, on their behalf. So that's the skilled-labor side. The other thing is that this distributes the labor around the world so you have a race to the bottom rather than a race to the top for many, many jobs, right? The outsourcing. I'm for jobs being distributed in some sense, right? We don't want them all here; we want them in the rest of the world too, but it has to be lifting everybody up more. We also have increasing centralization as part of this. One cause is network effects, and the other is data hunger. I'm on Facebook because so many of my friends and family are on Facebook. It's also a great tool in many ways; the product keeps getting better. The alternatives just aren't feasible for me, because I can't get my friends to email me. I can't. It just doesn't work. I've tried. They can use Facebook. They can use Messenger. There are a lot of parts of the world where Facebook and Messenger are the de facto communication mechanism. And for something like Google to work so well, it needs all that data. That means once you've got all the data, you're in this really dominant place where the people without the data can't compete with you. Their algorithms won't work well. Their profiling won't work well. So we have increasing centralization. And there's also security. I work on movement stuff a lot, as I said, and I work with a lot of people in precarious situations and repressive regimes. I tell them, if your threat model isn't the US government or Google, use Gmail.
I'm like, don't use anything that doesn't have a 5,200-person security team, because it's not safe. It's a long story, which I don't have to explain to this crowd (the way the internet started, TCP/IP was built as a trusted network, and all of that), but it is also driving centralization, because you don't feel secure. How many major platforms have not yet been hacked? I can count them on one hand, maybe, the ones that have not been hacked. So you gravitate towards them. So this is also why you get Facebook, Google, Amazon, eBay, a couple more. They're just not easy to knock down, which means when they do what they do, you don't have a means to use the market to punish them, because there isn't meaningful consumer choice. There's asymmetry in information. They know so much about us. How many of us have any clue how much data that is and how it's being used? We have no access to it. When Facebook CEO Mark Zuckerberg bought a house, he bought the houses around it. Why? He wanted privacy. I don't blame him. I don't begrudge him his privacy, right? But we don't have it. We have this complete asymmetry in what we get to see versus what these centralized platforms, and increasingly governments, get to see about us. This asymmetry is deeply disempowering. Next: machine intelligence deployed against us. So, algorithms. The word algorithm has this second meaning now; it just means complex computer programs. Here I can just say, especially, machine learning programs, right? What's happening is that we've got this really interesting, powerful tool that can chew through all that data, do some linear algebra and some regression, and spit out pretty powerful classifications. Who should we hire? Give it some training data, divided into people who are high performers and low performers; churn, churn, churn, train; and then you give it a new batch, and it says, hire those, don't hire those. The problem is you don't understand what it's doing.
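The hiring pipeline described above (train on labeled performance data, then classify new applicants) can be sketched with a deliberately simple nearest-centroid classifier. The features and data are invented for illustration; the point is that the model happily says hire or don't hire without anyone asking what the features actually encode, which is exactly the opacity problem.

```python
# Toy sketch of an opaque hiring classifier. All data is synthetic, and
# the two unnamed features stand in for whatever signals get fed in;
# nobody in the loop has to know what they really measure.

def centroid(rows):
    return [sum(col) / len(rows) for col in zip(*rows)]

def train(examples):
    """examples: list of (features, label), label 'hire' or 'reject'."""
    by_label = {"hire": [], "reject": []}
    for feats, label in examples:
        by_label[label].append(feats)
    return {label: centroid(rows) for label, rows in by_label.items()}

def classify(model, feats):
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda label: dist(model[label], feats))

# Historical data: past "high performers" vs "low performers".
training = [
    ([0.9, 0.10], "hire"), ([0.8, 0.20], "hire"), ([0.85, 0.15], "hire"),
    ([0.4, 0.90], "reject"), ([0.3, 0.80], "reject"), ([0.5, 0.70], "reject"),
]
model = train(training)

# New applicants: hire those, don't hire those.
print(classify(model, [0.9, 0.2]))   # 'hire'
print(classify(model, [0.4, 0.85]))  # 'reject'
```

If the second feature happened to be a proxy for, say, health or a protected attribute, this pipeline would launder that bias straight through while looking like neutral math.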
It's powerful, but it's kind of an alien intelligence. We tend to think of it as a smart human. It's not, okay? It's a completely different type of intelligence. It's kind of alien, and it's also powerful. So for all we know, it's churning through that social media data and figuring out who's likely to be clinically depressed in the next six months, probabilistically. You have no idea if it's doing that or not, or what else it's picking up on. We're putting an alien intelligence we don't fully understand, one that has pretty good predictive probabilities, in charge of decision-making in a lot of gatekeeping situations, without understanding what on earth it's doing. And already in the tech world, you talk to people at high-tech companies and they're like, no, we don't understand our machine learning algorithms; we're trying to dive into it a little bit. Now it's spreading to the ordinary corporate world, where they don't understand it at all and they don't even care. It's cheap, it works, let's use it to classify. But what is it picking up on? What is the decision-making? Do we really want a world in which a hiring algorithm weeds out everybody prone to clinical depression, with some good probabilistic ability, 90% of them? It's a problem if it's right. It's a problem when it's wrong. And these things can infer non-disclosed patterns. Even if you never told Facebook your sexual orientation, it can guess it with 90-plus percent probability. It can guess your race if you never told it. Forget the pictures; it can guess just from your likes. Personality types. There are so many things that can be computationally inferred about you, with predictive power, that you've never disclosed. And people aren't used to thinking like this: I didn't disclose it, but it can be inferred about me. And then there's bias-laundering.
A lot of the training data has human biases built into it, and now it goes through the machine learning algorithm and you're like, oh, the machine did it. So this is the opacity, and there are error patterns we don't understand. How are these things going to fail? This isn't a human intelligence; it's going to fail in non-human ways. These are all our big challenges with the biases. Now, why is this compatible with authoritarianism? Think Orwell. Ah, no: think Huxley, not Orwell. That was my subconscious error. Orwell thought of this totalitarian state where they dragged you off and tortured you. That is not really modern authoritarianism. Modern authoritarianism is increasingly going to be about nudging you, in this balance between fear and complacency and manipulation, right? If they can profile each and every one of you and understand your desires: what are you vulnerable about? What do you like? How do we keep you quiet? How do we keep you from mobilizing politically? The more they understand you, one by one, the more they can manipulate you. Because again, it's asymmetric. It's an immersive environment. They control it. The effect of surveillance in a lot of cases, and I know this from working on movements for so long, is that once you become really aware of surveillance, it's not necessarily this sudden thing where everyone shuts up; it's self-censorship. People stop posting stuff. They stop talking politics. They stop even thinking it to themselves, because you're not going to feel comfortable sharing it. Surveillance coupled with repression is a very effective tool for the spiral of silence, and for control, manipulation, and persuasion in the hands of authoritarians. Now, I want to give just one example from history before I conclude: films, movies. You know, I like watching a good movie. I like documentaries. I like the craft. It's really interesting.
But early 20th-century filmmaking craft was developed by people who ended up in the service of fascism and violent, virulent racism. There are two very striking examples. One of them is The Birth of a Nation. This is early 20th century. It's this horrible, racist, just murderous movie. It is the reason the KKK got restarted. It spread like wildfire. It went viral. It also used the tools of the craft very well. The way we think of A/B testing, dynamic architectures, experimental stuff, social science, engagement, all the things we think of as tools of our craft: for the craft of moviemaking, it was a leap forward. And it ended up lighting the fire that recreated the KKK. Of course, it landed on a bedrock of racism, but you have to light those fires for them to go someplace. The second one is Triumph of the Will, which was shot by Leni Riefenstahl. There's a great documentary I recommend to everybody in tech: The Wonderful, Horrible Life of Leni Riefenstahl. She lived to be 99. She was this filmmaker, artist, actress, gorgeous woman. And she really developed the craft of filmmaking. She was really into it. So when the Nazis asked her to film that, she was like, great, I can practice my craft. And you got Triumph of the Will, the propaganda film that was so consequential and efficacious in helping the rise of the Third Reich. And for years after, she's like, I was just practicing my craft. The craft is not a tool that you can control. It's not something that's neutral. So why am I so depressive this morning? I'm not, because I don't think we're there yet. I just see this inflection point, and I feel like we don't have to do things this way. It's going that way, but we don't have to do things this way.
And one of the things that I really feel hopeful about is that there is this big divergence at the moment between where the world is going and what the tech community in general thinks. Technology workers are still a very privileged, unique group, in that most of them can walk out of a job and walk into another one. I talk to people at very large tech companies. They say all sorts of things. They have no fear. I'm like, aren't you afraid? They're like, no, I'll just walk into another job. They make a lot of money. So there's demand, still huge demand. And what I'm saying is, this will change. Ten, fifteen, twenty years, I don't know when, but like with other technologies, it will require less and less specialization. The de-skilling of the work will move up the chain, and the skilled will become more and more of a minority. But right now there's a large number of technology people who are the ones creating these tools, who for the most part have the ability to walk into a job that pays a reasonable wage. And these companies can't do this without them. So this is what I'm thinking. I'm not going to end with an answer, but: this great tool, the one that gave me such hope when I discovered it in Turkey, how do we take the initiative with it, and not just create another algorithm to get people to click on more ads, another surveillance architecture, another thing that will potentially be used in the future by authoritarians? I just want to think: how do we become like that bus driver? The one who said, oh wait, an earthquake team needs transportation? All right, everybody out. Right? I want that initiative. And I think we can do it, because once again, we have leverage as people in this sector. We have leverage that a lot of other people don't have. So this is where I'm going to end. And I'm easy to find. Thank you for listening.
And I just hope to have this conversation with more and more people. Thank you. I'm an optimist, I promise you. If I didn't have a lot of hope, I just wouldn't bother with all of this. I'm projecting into the future, and partly I'm from Turkey, so you know the person who's seen a horror movie and is yelling, the basement, don't go into the basement, you with the red shirt, do not go into the basement? That's how I feel. I'm also from North Carolina, so I kind of feel like, let's not go into that basement. So I think there's hope and time. Wow. I took so many notes my pen is dry now, but I will focus on a couple of questions here. You mentioned a lot of things in a light that shows how harmful they can be to many people. And I think many of us in the audience, and certainly myself, have heard a lot of these technologies described as revolutions and marketing technology, as great new ways to unlock customer value, as these paths forward to evolve your business, right? That, quote unquote, digital transformation. How do you reconcile, as a technology worker, being told one narrative and then living this other reality? Well, see, the thing is, when you're building something, you're just thinking, oh, I'm building value, right? I'm just building value. I'm helping somebody: customer value, you know, finding consumers. But every time you collect data about someone, there are these ethical questions. What's the use policy? How are you keeping it secure? Do you need to retain it? How much retention are you going to do? So there are all these ethical questions that I feel like we're skipping over. We're kind of not even thinking about them. We're just saying, let's do it this way. And what I want to encourage people to think about is, every time you measure something about someone and write it down or record it, and they don't even know that you did it, you create this situation that's, once again, compatible with a lot of other things.
So, yes, you may also be creating value for the consumer. That may well be true, right? Those things can both be true, but you're also building the infrastructure for all these other things. There are a lot of folks here who are individual developers, individual engineers. They might be asking themselves, what role do I play in this, right? I don't want to say I'm just taking orders, but I'm just filing an issue, I'm just fulfilling this task. You know, I don't get a say in how my tool gets used; I just build it. We do get a say in how our tools get used, though, right? I mean, this is the thing, and I think we've got to do a couple of things. I think we've got to assert more interest in how our tools get used. If you're working in a large company, there are a lot of ways to assert that, but even if you're working as an individual person, you can think about how this tool is going to be used, and, maybe more importantly, can we build alternative tools, right? Can we build systems that create the convenience and value that we want but don't have all the overhead that comes with them? I mean, I've been asking for this. It's not going to happen, apparently. Look at even Facebook, right? One of the largest companies in the world, worth what, $400 billion or something like that? If you read its filings, it's making like $10 or $20, I'm sorry, not an hour, a year, per person, right? So all this data surveillance and persuasion architecture and manipulation and misinformation and fake news and all the stuff that we deal with, and all its harmful effects: give me a subscription version of this, right? Make me the customer to have your connected world. You know, even if it meant you were worth, I don't know, $10 billion instead of $400 billion, I think people can survive on that. There are all these alternative paths that we haven't taken, haven't explored.
And I think everyone, from the individual user to someone who's building a tool to people working in big tech companies, can say: is our imagination really going to be so limited that we're going to build all this stuff that is so compatible with authoritarianism just to get people to click on a shoe ad? I mean, we're selling ourselves really cheap, right? We're building authoritarianism for such a cheap, cheap thing. So that's what I'm thinking: maybe we can build alternatives. One tension that I struggled with in this talk is this notion of authoritarianism, and maybe a comical reduction would be to think of it as this big bad, right? In Joss Whedon terms, there's this big, potentially malevolent force that wants to manipulate us. But it's hard to reconcile that with some of the public imagery of, let's say, the buffoonery that surrounds some of the authoritarian forces on the global stage. These people can barely finish a speech sometimes, but we're also trying to think of them as ones who would actively manipulate us. How do we reconcile these possibilities? Well, I don't want to say we're in the Weimar Republic and we've got a new Hitler. We don't, okay? So that's kind of clear; I don't want to trip on Godwin's law here. But if you read Hitler's biographies, he was a buffoon. He was an idiot in so many ways, but he was very good at one thing and one thing only, and that was firing up people in speeches. If you read his biography, he's a failed painter, a failed everything. And then he gets in a beer hall and starts spitting out conspiracy craziness and holds people's attention. Hubris is a really good book on this. And all of a sudden, there's one thing he's good at, and he's really good at it. So I'm not going to say our authoritarians are these super smart, clever, omnipotent, omniscient beings.
But they could, at the same time, be very good at tapping into a fear, a sense of instability, a lot of complicated things like the legacy of racism, a lot of things that already exist, kind of converging. They could tap into that very effectively. And while you would be saying, and I have said this so many times in the past few years, how could people believe this? They can, right? It's complicated, and we can both sympathize and not sympathize, and we can do whatever we want to, but it is quite possible for authoritarianism to be led by not the smartest people, and very often they will find a Leni Riefenstahl of their day to hire, right? So Hitler may not have been very smart about a lot of things, but he was good at the beer hall speech, and that regime did hire one of the most talented filmmakers of the day. So I'm like, we are the talented people of our day. How do we not go work for them? Because some people will work for them directly, and some people will work for them indirectly, in that the tool you built will then be used to further authoritarianism. And we saw this: Facebook's algorithm is so happily prone to misinformation. It monetizes misinformation. You can go viral with something crazy. The UX is flat, so you can't tell the Denver Guardian, the fake one, from a reputable newspaper. If it starts an argument in your mentions, it's engagement, so Facebook's going to push it to more people, right? So you built a tool that's supposed to connect people. Well, it did connect people, but around this, and it was very compatible with it. So, you know, it doesn't really matter what you thought the tool was going to be used for. If it has these features, this is where it's going to go. And people warned them about this, for example, before the election. But here we are. And I'm not saying it was one thing; everything is multi-causal, and it was a close election.
It wasn't one thing, but I follow social media pretty closely on this, and misinformation had a spike, a huge spike, in this election. It was partly spammers just monetizing Facebook's algorithm and Google's ad networks, it was partly this polarized environment feeding into it, and it was partly the dark, targeted ads. It was all of this combined, and technology was at the center, enabling new things and furthering existing difficult things. So, and this is my last question; give me a second to formulate it. As technology workers who have quite a bit of privilege already, and whose tools are potentially being used to strike at those who are less privileged than us: for those who are being targeted, for those who are easily manipulated by information, for those who are less savvy about technology than we are, what is our responsibility to them as technology workers, given that we have access to this ecosystem that they may not? Right. So, this is where I get into my why-diversity-is-important spiel, and I don't mean token diversity, where you just bring more Stanford CS women into the room, because, I mean, I identify as a geek. I love my tribe, but we kind of have our own tendencies and specialties. What we really need is more kinds of people in the room where design is being done, more designers, and just people we talk to from all levels of society, who can help us think about what's happening. Because it's not always easy to imagine how something's going to play out, but very often the people from those communities are going to be the canaries in the mine. Either we have more of them making the technology or, until then, we listen to more of them. I'll give you an example from Facebook. I give Facebook examples because it's more familiar to people.
It's not that Facebook's the only problem, not at all. But a couple of years ago, do you remember when Facebook had a year-end thing where it said, it's been a great year, and algorithmically picked a picture and put a party theme around it? It came to people's attention, especially Facebook's attention, because someone from our community, a well-known CSS expert, a technologist, very well loved, had a very unfortunate, horrific tragedy that year. He lost his six-year-old daughter to cancer, and a lot of us knew about it because he was blogging; he was already a longtime blogger, a good writer, a technologist, and everybody was heartbroken. So what does Facebook's algorithm do at the end of the year? It picks a picture of his daughter, puts a party theme around it, and says, it's been a great year. Right during the holidays, you know, after six months of struggling. And he wrote this heartbroken post about algorithmic cruelty, and he was very kind, in fact, in what he wrote. So how could this happen? Well, the head of that Facebook product at the time was a 26-year-old Stanford graduate whose stock options, I'm sure, were great. He's surrounded by other Stanford graduates whose stock options are great. They work at Facebook. They live in San Francisco. It's been a great year, right? The failure of imagination here, that it might not have been a great year for everyone among a billion and a half people, is so elementary. You do not need a Stanford CS degree. I am telling you, literally walk out, poll three people, and ask them: do you think it's been a great year for all of a billion and a half people? You will get the correct answer. And this is what I mean. We can be so narrow. Some of the smartest people in the world can make such obvious, horrible mistakes, right? This should call for humility. We need to be talking not just among ourselves. We need to broaden who gets to be a technologist. And there are a lot of issues about doing that as fast as we can.
And even if we can't do it right now, and it will take some time, I am literally saying: poll people off the street and ask, is this a good idea? And you will get insights that we, as a geeky community, a well-paid community, more male than female, with fewer of some minorities, won't get, because we won't have those life experiences to envision all the potential outcomes. And I think that's a really healthy way to go, to try to think things through. As I said, a lot of the time I think through things partly because I'm from Turkey, and that informs my understanding. I think those intersections are really important, and that's why broadening, really bringing more life experiences into how we think about technology, would help us do it better and would also help us avoid such horrible mistakes that we keep seeing again and again. And that's kind of my little pitch about that. Thank you. Thank you so much.