So I have the privilege and distinction of speaking to you at the end of the day. The first thing I'd like to do, if you're okay with it: if we could stand, if you're able, and just take three really deep breaths to get your blood flowing to your brain. Stretch if you would like to. We're in the home stretch. Cool. And sit when you're ready. If you wanna keep stretching, that's fine, I'll start talking. So today I'm gonna talk about the role of consistency in ethical technical production. I don't know if you've noticed, but there's been a lot of discussion about the role of ethics in technology: within industry, which is sort of freaking out about the media coverage of how its tools are causing harm, but also within civil society organizations, which have missions and values but are struggling to translate those missions and values into the technical choices that they make. It's not a straightforward exercise to go from "we believe X about who we're serving and how we wanna serve them" to "this is the decision we're gonna make about persistent cookies on a constituent-facing website." So I'm gonna talk a little bit about consistency as a particular aspect of thinking about ethics in technical production. And just for some context: over the past five years, I've been working with civil society organizations that are really doing their best. They're essentially saying, when we make technical choices, when we use data, when we engage with communities and collect data about them, we wanna do that in ways that are responsible, that don't undermine our advocacy goals, that connect with those communities rather than producing things about them in gross ways that don't respect them. But the challenges are real.
So I've watched organizations struggle to figure out: when we build this tool, when we bring in an outside firm to help us do something, how do we constrain technology to do what we want it to do and not move outside of what we're comfortable with? And in the past year, I've started doing more work with companies, which has been a really surprising and interesting discovery process. If you take away one thing, it's that they're not monolithic. In some companies you go in and you talk to people and you think, I'm really upset that these people are building things that I'm supposed to use. In other companies you see people who genuinely go to work every day trying to make products that mean something to people, that are useful for people, that respect their time and energy and rights. But it's been an interesting process, and the struggle holds even within companies that have nearly unlimited resources. So you look at civil society: best intentions, but best practices are difficult to define. Companies: maybe not the best intentions, tons and tons of resources, and they still can't figure it out. So what are the issues at play? I'm gonna talk particularly about the role of consistency. And I wanna start with inconsistency, so we can talk a little bit about what that looks like before we talk about consistency. I'm gonna use two big examples, just from the past year, of companies that were inconsistent across time: confronted with the exact same decision, they made two different decisions back to back. So last year, Twitter decides not to ban Infowars. We could debate at length about whether that was a good decision or a bad decision. I personally think it was a terrible decision.
But what really riled us, I think, beyond just the quality of the decision, was when the very next year they said these are dangerous individuals, and they also grouped Louis Farrakhan in with Infowars, almost to cluster a group so they could pretend they hadn't said the exact opposite the year before. So looking at inconsistency, it's not that the choice they came to was wrong or right. It's that they didn't have values underpinning that choice, no sense of orientation for how to make that decision, which made it easy for them to change their mind less than a year later. Another example of this: I'm not sure how many of you are familiar with DeepMind, but it's an Alphabet company based in London that does a lot of machine learning work. They were working a lot with the National Health Service in the UK, and they said at the outset, to quell any concerns about being part of Alphabet, that they would never connect the health data they were collecting within the NHS with any Google user services or products. They came out and said it really forcefully. And literally two years later, well, I'll let you guess what happened. So now DeepMind is handing over DeepMind Health to Google. You could argue that maybe conditions have changed; they argue that they want to be able to provide these services globally, that DeepMind's infrastructure isn't what Google's infrastructure is, and that Google is gonna be a better channel for them to change the world. At face value maybe there's some merit to that argument, but the fact that they backtracked so publicly and the decision was so inconsistent showed us, for one, that companies can change direction and we have no say over whether that happens. So there's a feeling of disempowerment when we see this inconsistency.
But there's also a question of where these decisions are even coming from. They emerge from the ether as though it was always going to be thus. And I think it demonstrates three big things. It's either a lack of values; a lack of clarity about those values, meaning how those values translate into actual decisions; or a lack of commitment to those values, so that when confronted with an incentive that takes you one direction while your values would take you another, you follow the incentive. And one thing is important to mention before I start talking about what consistency looks like: if one of your values is optionality, you will never value anything else. You will never go in the direction of your values if you literally maximize for optionality. There was a quote in a piece this week about Google, and I don't know how wrong they could be, to have two people on two different teams with that distinct a perspective on how data in a project is gonna be used, but I think it says everything. So those are very public-facing inconsistencies across time: the same decision taken two different ways over a period of time. But there are also tons of other types of decisions that we make all the time, and there are four big buckets. There are obviously others, but these are just to get a handle on what I'm talking about and how I talk about consistency. The first is service optimization: what are we building, for whom, and by building for those people, who are we not building for? The really explicit choices we make about the services we build, and what types of choices reflect what types of values. The second bucket is redress. If somebody is affected by, let's say, the Twitter or Facebook real-name policy, what is the redress process for that person? How do they report? How are those reports responded to?
What decisions does a company or an organization make around that? The third big bucket is data and inference: thinking not just about the units of data and information, but also about the increasingly analytically derived profiles about us, how those are managed, how they're destroyed, how they're sourced. And then the fourth is monetization. What do we allow to be monetized? Going back to the earlier examples, maybe it's a slippery decision of what is monetized when, and how we communicate about it. But that's a whole other bucket: how values transfer into decisions about monetization. Those are the four big areas. Now, a really specific example of consistency in those types of decisions. Say you're in a financial institution, and you've been asked to build an algorithm of some sort that will make recommendations to a frontline bank teller. Customer A walks in and says, hi, I'm here to deposit a check. The recommendation algorithm would say, okay, while they're here, you should recommend this one financial service to them, and here are maybe a couple of other options if the teller happens to know something about that person that the algorithm doesn't. How would you optimize that? Let's say you have two different product teams. One of them says, well, of course we would optimize on likelihood of uptake: we wanna recommend the financial service that is most likely to be taken up by the person, right? The other team says no, no, no, that's not within our values. Our values say we want the recommendation that will have the most positive impact on that customer's future, let's say their financial wealth.
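Just to make that contrast concrete, here's a minimal sketch of the two teams' objective functions. The product names, uptake probabilities, and benefit figures are all invented for illustration; the point is only that the same data yields different recommendations depending on what you optimize.

```python
# Hypothetical product catalog: the numbers are made up for illustration.
products = {
    "premium_credit_card": {"p_uptake": 0.30, "benefit": -120.0},  # easy sell, costly for the customer
    "high_yield_savings":  {"p_uptake": 0.10, "benefit": 250.0},   # hard sell, good for the customer
    "overdraft_plan":      {"p_uptake": 0.25, "benefit": -40.0},
}

def recommend_by_uptake(products):
    """Team one: maximize the chance the customer says yes."""
    return max(products, key=lambda name: products[name]["p_uptake"])

def recommend_by_benefit(products):
    """Team two: maximize the expected impact on the customer's finances."""
    return max(products, key=lambda name: products[name]["benefit"])

print(recommend_by_uptake(products))   # premium_credit_card
print(recommend_by_benefit(products))  # high_yield_savings
```

Same data, same teller, two very different recommendations: the objective function is where the values live.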
So the way those kinds of decisions get translated into, in this example, a data science problem or project has huge impacts on the way an organization expresses its values through its technical decisions. So what does consistency look like? First, it increases alignment between values and actions. There are all these companies talking about "here are our 10 principles for X or Y" or "here's our world view on Z." But what does that actually mean in practice? How does it actually influence the choices they're making? Consistency is the middle layer that says: this is how we translate those things into decisions. It also guards against mistakes. If you have a level of consistency, you're not leaving everything up to chance and probability and individuals saying, whatever, let's do this this time, because it creates an institutional memory and an institutional set of expectations. In turn, it creates clarity for teams and users. No team wants to be in a situation where, on every project, they have to consider the entire scope of all the ethical dimensions of all the work of a company. What would it look like for teams to go into those kinds of projects and conversations with at least some foundation of: here's a similar project we did before, these are the decisions we made, this is why, and these are the kinds of things we think you should be thinking about? And finally, consistency can be measured and improved. One of the big challenges of saying "we're gonna improve well-being" or "the ethics of technology" is that, for some people, it has the benefit of not being measurable. You can't prove a counterfactual: you can't show that one decision was better than another because you prevented something from happening, which makes it very difficult to show and measure progress and success.
What's cool about consistency is that there are actually methods of measuring it. You can actually see whether product teams apply those values in the same way, which means you can get better over time as a team and as an organization. And I think "mistake" is maybe an aggressive word to use here, but talking about the types of mistakes an organization makes is really important, because a lot of times we conflate incompetence with evil, and it's important not to confuse those two things. Atul Gawande wrote The Checklist Manifesto, which is about how in medicine we take complexity and reduce it into steps: there's so much history of medicine, and you have to know so much about so many different fields to get certain things right, that breaking things down into checklists was revolutionary for medicine. I don't think checklists are revolutionary for technology, for what it's worth, but I think his perspective on mistakes is really interesting. He breaks errors down into two types. One is the error of ignorance: mistakes we make because we don't know enough. The other is the error of ineptitude: mistakes we make because we didn't properly use what we know. Making that distinction can make a big difference in how we diagnose and then treat the problems we see within technology. And those two errors are really important to keep in mind when we think about the components of consistency. So now I'm gonna walk us through four big-bucket aspects of consistency that you can think about when you ask whether your organization is consistent in the way it translates values into decisions. The four here are diversity, reflection, comparison and iteration. On diversity: I realize there was a point made at the earlier keynote today that diversity isn't enough, that we need equity and inclusion.
This is actually purely about diversity: purely about representation of different perspectives, not the underlying moral prerogative of having representation actually mean equity and power within an organization, just to clarify. So I'll start with that. I don't know how many of you have heard this little tidbit, but I find it fascinating every time I think about it: the human brain has something like 11 million bits of information available to it from its environment every second, but we only process 40 to 60 bits. So the amount of arrogance it takes to think that we know what decision to make in any moment is shocking. We're sort of walking, overly confident creatures. And when you think about diversity in that light: if you hire the same type of person over and over and over again, you're essentially committing, as a team, to only being able to process those same 40 to 60 bits of information. You're literally refusing to add perspectives that could see different bits of information. So having a non-diverse team is being willfully incompetent. When thinking about consistency and diversity, I think about three big areas: diversity of lived experience, because that's obviously critical; diversity of expertise; and diversity of thought. Diversity of thought has increasingly been used as shorthand for being tolerant of people like James Damore, so I'll be a little clearer about what I mean by it. I think we sometimes underestimate how differently we all think. On project teams, in my head, having managed many people: there's the blue-sky thinker, versus the devil's advocate, versus the pragmatist, who's like, let's just start, why are we not starting, let's do this, versus the resource-obsessed person, who's like, how are we gonna pay for this? Where's the time? We don't have the time for this.
And it takes all those kinds of people to make the world go round. So think about diversity not just in terms of lived experience and expertise, which are both critical, but also in terms of how we approach problems, because in the same way we're prone to hiring people who look like us and have backgrounds like ours, we're also really good at hiring people who think like us, which really limits our ability to see more of the picture than we would by ourselves. So that's diversity. The second component of consistency is reflection. A lot of times people hear the word reflection and think, oh, that's my personal feelings about what I wanna do and why, some exploration of self. It's mushy, it's individual. But that's what technology design is: we're taking next to no information and stabbing in the dark based on all of our biases and all of our preferences. The idea that as a team we're objective because there's more than one of us is crazy. So another thing to think about within the context of consistency is having real space for reflection, and not just "this is fun, we're having a conversation," but: what questions do we ask at what phases of the project? Who has to be in those conversations? What does the reflection look like? How do we document it so we can go back, see if we were wrong, and figure out why we were wrong at the end? Really baking it in. Teams have gotten really good at introducing reflection at the end. I see a lot of organizations do retros now, which is fantastic, because it actually gives people time to talk about a project outside the pressure of delivery, but it's also too late to fix anything.
So maybe think about aspects of reflection we can include in other project phases, and be really consistent about it, because it's a really important component of figuring out what we don't know. The third component is comparison. I was really excited to hear a group earlier presenting on qualitative research, because I think that's a huge thing we could use in technology production that we just ignore. We're confronted with lots of subjective information, make a ton of subjective decisions, and then pretend it was always gonna be like that and it was really math, which is insane. When thinking about comparison, I find intercoder reliability a really interesting lens for product teams. Intercoder reliability comes up when a researcher is studying documentation that is very subjective, say transcribed interviews: somebody waxing poetic about their thoughts and feelings on a topic. How do you turn that into science and evidence and facts, and then write academic papers about it? You have multiple research assistants code that content, and then, using those codings, you can compare and say, oh, weird, this person gave it one rating and this other person gave it another; there's something wrong with the way I'm asking them to do this. Intercoder reliability is a way of testing whether you've given clear enough instructions to translate the subjective into something really concrete. Now imagine it within technical teams: two identical, really subjective scenarios are given to two different project teams, and you say, hey, how would you score this on the sensitivity of the data you're using? On the importance of figuring out how X community would feel about this product before you launch it? All those types of variables that are pretty subjective and difficult to pin down.
If you ask both groups to rate those based on your organizational values and you get radically different interpretations of what that project would require to do well, to do ethically, and to align with your values, then you have a serious problem: two teams have interpreted your values in completely different ways. That means you're not doing a good job of communicating them. You're not making it clear enough how teams should think about them when it comes to really specific decisions. So intercoder reliability, I think, is an underused technique that's super common in the social sciences but has been largely ignored by the mathier fields. And then the last is iteration. This one's pretty obvious: you can't get this right on the first try, and if you think you did, then you're really bad at it. So as you're testing these practices, update your documentation and your guidance, and give flair and story to the types of decisions you want people making. If you feel like there's a scenario where you messed up badly, take the time to write it down. Say: this is why we think we made these mistakes, this is what the mistakes looked like, this is the harm the mistakes caused, this is what we're gonna do differently next time. Take seriously that this is a learning process, something you only get better at if you keep paying attention to it over time. And what does this mean for you? I'm really hopeful that organizations that are increasingly interested in values are realizing quite quickly that it's not enough to just say "don't be evil," because that's super vague and not helpful when it comes to actually making choices. Until we start making the link between that sky-level thinking and the earth-level weeds, I think it's gonna be really difficult to see change in the big companies around us.
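The intercoder-reliability check from a moment ago can be made concrete with Cohen's kappa, a standard agreement statistic from the social sciences. This is a minimal sketch; the two teams' ratings are invented for illustration.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Proportion of items the two raters scored identically.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement we'd expect if both rated at random with their own label frequencies.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(counts_a[label] * counts_b[label] for label in counts_a) / n**2
    return (observed - expected) / (1 - expected)

# Two teams score the same five hypothetical projects on data sensitivity.
team_1 = ["high", "high", "med", "low", "high"]
team_2 = ["high", "med",  "med", "low", "low"]
print(round(cohens_kappa(team_1, team_2), 2))  # 0.44
```

A kappa near 1 means the two teams read your values the same way; a value near 0 means their agreement is no better than chance, which is a sign your guidance isn't translating into decisions.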
But I think it's also a really amazing time for smaller organizations, civil society organizations, and those whose constituencies expect more, and should expect more, from them, to demonstrate what we want technical production to look like. So when we look at diversity: not only what type of diversity do you want on product teams, but what type of diversity do you have to have in consultation and conversation before you actually launch something? Because the hubris of making something digital and then just sending it out to a billion people is insane. And unless we show why that level of hubris is unacceptable, we're not gonna get anywhere; we're gonna be on the outside throwing rocks at people who have power. And I think we can actually demonstrate the power of these kinds of approaches, because it means you build better technology, people rely on it more, and it's more in line with the society we wanna be building. So I have time for questions, a lot of time for questions. But generally speaking, the devil's in the details. I wouldn't wanna be in Twitter's position right now, but probably everybody in this room could do better than them, and I think part of that starts with being consistent.