Before I start, I do want to say a deep thank you to Cary Anderson, Ruben Langevin, Dan Jones, and Ellen Popko. Without them, is this going in and out? I'll try this. Without them, this event and many other events at Berkman wouldn't be able to happen. Okay, so welcome to Honoring All Expertise: Social Responsibility and Ethics in Tech. There's a lot of work going on in the space of ethics, tech, and social responsibility. Much of it is here at Harvard, ranging from what Aiden is doing at the Divinity School, to initiatives at the Harvard Kennedy School of Government, research at the Cyberlaw Clinic, new case studies being developed for tech and ethics at the Harvard Business School, and computer science and philosophy partnering to embed ethics in the computer science curriculum, led by Barbara Grosz and Alison Simmons. Outside of Harvard, there's Nathan Matias's work at Princeton bridging social science and computer science. There's Casey Fiesler, mobilizing people around the world to look at ethics syllabi in data science and computer science. There are countless codes-of-ethics initiatives, the Center for Humane Technology, and right at this moment, Parisa Tabriz and others are running the OURSA (Our Security Advocates) conference in San Francisco, where one of the things they're talking about is security policy and ethics. So there's a lot of work going on in this space, and that's definitely not a comprehensive list. But today we're here to talk about the work and research of the Ethical Tech Working Group at the Berkman Klein Center at Harvard. Some of us collaborate with some of the people I mentioned earlier as well. We're a group of computer scientists, race and gender scholars, ethnographers, historians, lawyers, artists, social scientists, and political scientists who get together week to week to discuss and debate topics of social responsibility and ethics in tech in a very open and supportive environment. Over time, we've seen the deep and profound value of having different expertise in the room, discussing and debating issues from different angles, where all expertise is equally honored and equally valued. We want to share with you a little bit of the essence of our group, a group that I have grown to love very dearly over the course of this year at the Berkman Klein Center, and then to open the discussion to all of you at the end. So this is our agenda for today. We'll do a series of lightning talks and leave about 15 to 30 minutes for discussion at the end, and Cary has mentioned that we can go until about 1:15 if the discussion keeps us going. So with that, we'll kick it off with Luke, who will tell us more about ethics and codes of ethics.

What's really hard to follow is a group of scholars, friends, and collaborators like this one, and this has been, as Kathy was saying, a fantastic experience for me; I've really learned a lot from it. So my name is Luke Stark. I'm a historian by training, currently in the Department of Sociology at Dartmouth College and also a fellow here, and this is a talk about hammers. Here's a hammer. Does anyone know what kind of hammer this is? Exactly, it's a ball-peen hammer, good. It's also a talk about ethics and digital technology, but I often like to perform a simple thought experiment when I'm thinking about new media and new digital technologies.
You just replace the technology you're talking about in your mind with the word hammer and see how what you're saying sounds, right? Ethics and hammers; ethics in hammering, okay. The point isn't to put down or slight ethics or digital technologies or even hammers, but to help remind myself what's novel about digital technologies and what might not be so novel. So our contemporary sense of the word ethics comes from the Greek word ethos. An ethos is a moral habit, a character, a disposition, a custom. As you're probably all aware, recent public controversies around the collection, analysis, and publication of data sets about sensitive topics, like, say, Facebook collecting 87 million profiles and giving them to Cambridge Analytica, have helped push conversations around data ethics to the fore. In popular articles and emerging scholarly work, scholars, practitioners, and policymakers have begun to flesh out the long-standing conceptual and practical tensions expressed in the notions of ethics and digital technologies. And I should say that there is a large group of people, many of whom are in this room, who have been doing this work for quite some time, for 20 to 30 years, really since the beginning of computing. I have a book in my office that's a bibliography of ethics and computing from 1978. So it's not that this hasn't been something people have been thinking about, but it is something that now has new prominence. Many of these conversations have focused on the development of codes of ethics for data scientists, and if you build on my earlier definition of ethos, a code of ethics is the ethos of a particular group codified or transcribed. As Jacob Metcalf and Kate Crawford observed in a recent paper, ethics codes serve a number of functions beyond deterring or determining unethical behavior, including creating a cohesive community identity, responding to external criticism, and establishing a moral authority for self-regulation. Today there are codes of professional ethics for many different professions, including computer scientists, software engineers, statisticians, even plumbers and fortune tellers. The fortune teller code of ethics is quite short; it basically says don't make anything up. Some codes of ethics are more binding than others. The Common Rule, which regulates federally funded research on human subjects, is one of the most famous, and much commented on these days as a potential model for computer scientists. The Common Rule in essence seeks to define a community of knowledge production that includes both professional researchers and ordinary people. But as Metcalf and Crawford point out, the line between professional researcher and ordinary person is, in the context of digital technologies, getting really blurry, right? One reason contemporary codes of ethics for digital technologies don't seem to have many teeth is that there's no specific shared community uniting the users of digital technologies. It's a bit like having a code of ethics for wielding a hammer. Now of course, we do actually have specific codes of ethics, backed up by legal sanction, for the people who use hammers professionally, right? These are structural and civil engineers, who can be held criminally responsible for flaws in the design of buildings. And we have a more general social ethos that regulates the general use of hammers and hammering, right?
It's the same complicated, messy system of social trust and law (here we are in a law school) that shapes how our society does all sorts of things. So there are certain technical affordances of digital technologies that make them quite different from hammers, but maybe not so different, right? If you take anything away from my talk, it's that we need to be careful thinking about what really is novel about these technologies and what isn't when we develop social policy and legal responses to them. So I'll leave you with two questions. First, how do communities of digital practice relate to both specific ethics codes and existing written laws to make sure they use these newfangled hammers appropriately and properly? And second, how do we as a more general community tackle these new devices and all the disruptions they bring in a way that is consonant with our shared ethos, or at least what I hope is our shared ethos, that of a free and democratic society? Thank you.

Hey everyone, I'm Ben Green. I'm a fellow here at Berkman and a PhD student in the computer science department, and I want to talk today about my experiences trying to do socially and policy-minded work in the department. Oh, yeah, there we go, that's me. So when I came to Harvard about three or four years ago, I was full of naive optimism about what the field of computer science actually was. As an optimistic undergraduate, I thought it just meant a lot of people doing cool stuff to solve social problems using their computational tools. I quickly found out that was not the case. It's a technical field where people are far more interested in mathematical and engineering analysis. But what was particularly disturbing for me as I entered the field was to see the outright dismissal of non-technical voices and non-technical perspectives. I had one experience where I heard a fellow graduate student of mine scoff at the idea of a social scientist being an actual scientist. And I had several conversations with faculty members in the department where they told me that the socially and policy-minded work I wanted to do was not computer science and wasn't worth doing. I think this is an incredibly dangerous and irresponsible perspective for a field like computer science to have, because increasingly the tools of computer science are being used to make some of the most important decisions in society today. My own research focuses on the use of data science and machine learning algorithms within local governments, specifically in city governments and the criminal justice system. These are incredibly consequential decisions, and yet we often see computer scientists entering the fray wielding these tools, believing them to be a neutral source for good, without fully understanding the context they're entering into. This has led to algorithms that make racially biased decisions, algorithms that actually warp and change the context they're being applied to, and algorithms that make decisions about people without the researchers behind them having fully grappled with the human impacts of their tools. But I'm also optimistic, because I see the field moving quickly in the right direction.
Even as many faculty members in computer science are still grappling with the social and political aspects of their work, students today in particular recognize, intuitively and from experience, that computer science is far more than a technical field and has increasingly important social and political implications. I've had many conversations with undergrads and grad students who are eagerly looking for opportunities to use their technical tools for good. Many of those students are in this room right now, and I've created a list of social-good tech jobs that I regularly share with students. Even in the computer science department, just in the past month we've started a CS and ethics reading group; many of the folks are here today. At that first meeting a few weeks ago we had about 20 students, and even two years ago I couldn't have imagined wrangling a group of more than five to read a paper that wasn't for a class and sit in a room and talk about this stuff. So I'm excited to see a quick change in the perspective of the field, especially from the students, who really understand the need for broader perspectives within computer science. I'm optimistic about what the next several years will bring, and I think this Ethical Tech group in particular is a perfect example of how computer scientists can work with people across many diverse academic and non-academic perspectives to actually ensure a more positive future for technology. Thanks.

I'm a computer scientist who has spent more than the last decade in industry, and I came to the Berkman Center because I wanted to think more deeply about the social responsibility of engineers and product managers, basically technologists, as we build and ship products, having felt a bit complicit in the state of the tech industry today. So I started my career at a big tech company. It's a place that is fun, or perceived as fun, open, and transparent, where we're all going to work together, as Ben said, to make the world a better place, in this very idealistic sentiment. It's a place where you give full presentations, across the company and to executives, done completely in memes, because it's fun, and it's great, and it's this happy place where you build good products that are going to change and save the world. Then I went and spent a little bit of time on the contracting side, where I helped write contracts for big enterprise companies and the federal government, and got to see how writing contracts without having users in mind, and then building to those contracts, ends up producing products that cause a lot of harm, and government services that don't work for the people. On the flip side, after that I went and worked inside the federal government itself, at the United States Digital Service, and got to see that at a deep level as well. While I was there, several of my former colleagues worked on the healthcare.gov failure, where you really got to see the power of a few devoted engineers, product managers, and designers to turn the tide where $800 million and 60 different contracts couldn't. So what that really got me thinking about is the power of individual contributors and engineers, people who have our fingers on the keyboard, to really make a difference.
That's not to say that we shouldn't hold leaders accountable, or that we shouldn't change our procurement processes; all of that matters as well. But what I don't hear talked about as much, and this is what Ben alluded to as well, is how we can shift our engineering culture, how we train and teach our engineers, and really make an impact at this level. Many of us in this field go through computer science programs, we become product managers at these companies, and we make decisions that significantly affect the direction of these products without any training in social science or art or design, without the deep understanding of and empathy for users that Luke and Mary and Doaa and Joanne and Jen and Salome and Jenny will tell you about. We don't have that kind of expertise. And there's even this culture of eng versus non-eng; Ben also talked about that, and we didn't coordinate at all. There is eng versus non-eng, or SWE, which is software engineer, versus non-SWE; that's how people talk in companies. JZ just told me the other day that at one of the companies he was at, you were either engineering or support. It's a culture that is pervasive across companies and teams, and it makes it so that we build in these silos where technology rules: we move fast, we make products that are efficient, but we don't think nearly as deeply about the social impacts, adversarial attacks, privacy, security. There are definitely teams thinking about that; I mentioned the OURSA conference going on today. But for the most part, most teams don't think about these problems as deeply, and product teams across companies could learn so much from having groups like this one that get together often, to think about how we can build products while honoring the expertise of people who have deeply, deeply thought about how users and humans interact with products, and build more responsible software. So thank you.

I so clearly want to be a Kathy fan that I just... So hi, I want to put in your minds a way of rethinking what data are. Rather than thinking of data as something that pre-exists, or as raw (plenty of my colleagues have critiqued that idea), I want us to think about what exactly social data are, the kinds of things we're looking at in both industry and academia, and what exactly we're doing when we sift through online data to study social worlds. Are we just sifting through data? Are we observing public interactions as though we're walking through parks? Are we perhaps watching or listening in on other people's social lives without their knowledge? Or is it all of the above? Herein lies the challenge of how much industry and the social sciences need each other, and the fact that we don't discuss that when we're collecting data we're actually interacting with people's social lives in some way or another. To advance either industry and the products we develop, or our understanding of the human condition, we deeply need each other. The social sciences, for the most part, offer great tools for building theories about how the world works and how we understand or make sense of it. That's anthropology, political science, sociology. Humanities disciplines like philosophy and science and technology studies are also meant to help us understand how the world works and to theorize it. Computer science, we know, is great at building tools that map and measure.
We can perhaps see a network and see how it connects over space and time. But we deeply need each other's approaches to looking at how the world works to be able to develop a social world that's informed by technology but not dominated by it. We need new tools for coming together to study this world. The social sciences have never been able to span space and time the way you can by looking at, say, a network map across the world in real time to see how many people might be tweeting this event, which is a social experience. But we also need to figure out the limits of those measurements, and how we might go about understanding what somebody means when they retweet something. Is it agreement? Is it disagreement? Is it performance? Is it something deeper? Is it all of the above? We literally don't have the methodologies within the social sciences to do that. But computer science doesn't have the training to see that as a social expression rather than a data point. At the end of the day, we have a great amount of responsibility, and the great amount of responsibility we carry requires interacting with people. It means getting the general public to be as invested in learning about the human condition as social scientists might be, and willing to let technology be part of that exploration. That's not a small ask. We need more than ethical principles or codes. We actually need research practices, in both industry and the social sciences. And those practices have to be able to collaborate across disciplines that literally, until this decade, have not really talked with one another. Literally, they're set across different parts of a campus. So we're at the very beginning of learning how to study this deeply social world that is absolutely shot through with technology, but not technology alone. And if we want to understand this social world, we need the general public. I'll leave you with this: in the end days of USENET, the two groups that USENET group moderators blocked were journalists and researchers. And if you look at your own practices today, how many of you cover up your cameras or adopt somewhat adversarial practices to keep technologies from spying on you, we're heading toward a day when the general public doesn't trust technology to be a part of its life. That will shut down our capacity to understand our social worlds and the worlds we create online. Thank you.

Hi, everyone. My name is Joanne and I am an artist and a designer. When our working group first brainstormed the topics for today's luncheon, I was originally going to talk about empowering users. You know how sometimes you stare at a word for a long time and it starts to look really weird? The longer I thought about the words "empowering users," the more I realized that the context that necessitates the solution is actually part of the problem. Why does a species that invented writing, books, and computers, and flew to the moon, feel the need to be empowered now? I think the problem isn't that we inherently lack power. It's that when we represent ourselves as users, we limit the power we think we can access. So what's wrong with our context? When thinking through contextual problems, if you look immediately around the problem, you're just going to find more problems. I find it helpful to compare our context to another one from a different place in a different time. So we're going to take a trip to Greek antiquity and compare users with another way of representing ourselves: muses.
Muses are goddesses in Greek mythology. My point here isn't to say that we're transcending humanity to become goddesses or cyborgs. What we take to be mythology today actually served very specific and practical functions back in the day, 2,500 years ago. And these functions reveal the context of the time, which we can compare against ours to see what we're missing. Muses embodied all domains of knowledge: music, science, geography, art, poetry, drama, astronomy. The etymology of the word muse is song, which makes sense because this was before books, and the medium of knowledge transfer back then was public recitation. These recitations took place outdoors. Plato's Academy was a grove of olive trees. The Academy was one of three gymnasia, and each gymnasium represented one school of philosophy. That kind of gym is totally different from the kind we go to today. Back then, physical movement and education were one and the same. Everyone learned in the same place, and they learned together. And if you followed Aristotle's peripatetic school, you went for walks together. The Renaissance was very much inspired by Greek antiquity, and this is when Raphael painted the very famous fresco, The School of Athens. But there's a very important contextual change. Here in the Sistine Chapel, knowledge that used to be oral history is now permanently fixed onto the surface of the building. Making knowledge centralized and transcendent preserves the institutional power of the church. People can walk around the chapel, but they can't touch anything. Here, thinking and physical experience are kept separate. Half a century later, with the spread of the printing press, thinking and touching became connected. Now you can hold in both hands a book that contains all of the knowledge, and knowledge becomes democratized. But the downside is that you're now reading alone. And today we're more alone than ever, and more disembodied than ever. To use a phone, you don't even need the whole hand anymore, just two thumbs and an index finger. And the user really is just two thumbs and an index finger. The worldview represented by muses has been in decline while the worldview represented by users is increasing exponentially. You see that intersection right there, around 1900. That intersection represents for me what Adam Curtis calls the century of the self. The 20th century is a century defined by advertising, by mass consumerism, by the rise of PR as an industry, by the commodification of everyday life. And the 21st century is user-driven, user-generated, and user-centered. Instead of accepting the century of the user, I hope we can build a context for being together. What do I mean by this? I want to recycle some wisdom from the muses to illustrate the idea. Being together means the mind and the body moving in concert. The muses recognized that human power is whole; it's not body parts, and it's certainly not finger gestures conditioned by devices. Being together means considering humans across many scales. Human-computer interaction mostly studies the individual interaction between one person and their tool, and I think we need to study all of these interactions at the scales of groups and communities. Being together also means that the many domains of knowledge are in conversation.
And as humanity makes progress and accumulates knowledge, instead of storing each domain of knowledge in a drawer, I think we should put it all on the table and have discussions about it. That's what we're doing today: honoring all expertise. So thank you, Kathy, for organizing this. Thanks for being here.

I don't need to use the microphone. Hi, everyone. Good afternoon. My name is Doaa Abu Elyounes. I am a doctoral student here at the Law School and a fellow at the Berkman Klein Center. I'm here to talk to you today about the need to bridge the legal profession and the tech world. I'm interested in this topic partly for a selfish reason: I'm a doctoral student trying to write a dissertation about criminal justice and artificial intelligence, and while I'm very lucky to have a great committee who supports me in what I'm doing, it was very hard for me to find someone who is an expert in both worlds and could help me integrate these two distinct worlds. We are seeing more and more courses in the Law School that focus on different aspects of technology. Some of them are very specific and even intend to teach lawyers how to program. These initiatives are great, and I hope we'll see more of them in law schools across the country. But what I want to argue today is that a more fundamental modification of the legal curriculum is needed to face the new challenges that technology brings us. The way I see it, the black letter law courses that we are all forced to take as first-year law students especially need to have some technological aspect to them, and I will demonstrate. Take criminal law: how can that course be taught without mentioning the risk assessment tools that are taking over some of the work of judges, or the predictive policing practices that are blurring the boundaries between the innocent, the suspect, and the convicted? Or constitutional law: how can that course be taught without mentioning the impact of surveillance technology, or of online speech, on the First Amendment? Maybe you're interested in tort law: we will soon be driven around by autonomous cars, so how can we teach that course without paying attention to the accidents these cars might cause and who is liable for any mistakes the technology will make? Now, these problems go way beyond academia and affect whatever we choose to do in the future as lawyers. As in-house lawyers, we will have the opportunity to shape the design of the product if we're involved early on, and to avoid expensive litigation (although who wants to avoid litigation?). As lawyers involved in litigation, if we work for big companies, we'll be able to hire the best experts, but we still need to prepare those experts to testify in court. If we work for the government, maybe we'll be able to help shape the contracts and the procurement of tools that are mainly developed in the private sector. And of course, as judges, we need to have some basic knowledge in order to weigh the expertise of whoever is standing before us. These cases are not hypothetical. They're starting to arise, and they will grow more and more common. Probably some of you are familiar with the Loomis case out of the Supreme Court of Wisconsin. Loomis challenged the use of a risk assessment tool in his case.
The court approved the use of the tool, but what was striking is that only the concurring judge mentioned that the court has no expertise, no knowledge, about how the technology works. And apparently, in oral argument, the judges asked the state's and the defense's lawyers questions about the technology, and no one was able to respond. Northpointe, the company that developed COMPAS, the risk assessment tool in dispute, even offered to submit an amicus brief to the court to explain more about the technology, but the court rejected that. These cases are going to arise more and more in the future, and we all need to be better prepared. Thank you.

I'm Jen Halen, and I'm a fellow at the Berkman Klein Center as well as a doctoral candidate in political science at the University of Minnesota. My research focuses on the idea that policymakers, legislators, and technology procurement officers can decide to use technology in a strategic way that ultimately gets them closer to what they want the world to be like, or what they want a policy to result in. Adopting a technology isn't solely about what that technology can do; it's about fitting it into a more complex puzzle of human interactions. These things are often adversarial in a political context, and you need to understand what motivates different individuals within that process to ultimately understand what the outcome will be. Social science trains students to understand human patterns and system behaviors by applying the tools of scientific inquiry to these phenomena, even when they're very messy, in a way that ultimately yields patterns and knowledge to build upon. Political science focuses on understanding these mechanisms and patterns of control and influence, and when and why and how people use them. These are all necessary for understanding problems like the risk assessment tools used in criminal justice, because in order to properly understand what the result will be, you have to understand the deeply entrenched history of racism and socioeconomic prejudice and the other very human systems that show up in our data and that are going to be influenced by introducing a new tool into the same system. But this requires both that knowledge and an understanding of what the technology is fundamentally going to do. I encountered a lot of institutional hurdles when first approaching these areas. I was told, very similarly to what Ben heard, this is not political science. And I think that's where part of the problem comes up: there isn't a clear discipline for these things to fall into, and we need to encourage students to gather that breadth of knowledge, because these are new areas that don't fit neatly into the structures we've already created in academia and elsewhere. I ended up arguing that I should be able to take computer science courses as a language requirement, because I would learn Python and later Java. I don't know if anyone's ever tried speaking computer code at parties; I wouldn't recommend it. But it did allow me to get the knowledge I needed for this dissertation, and it's something that more and more centers and academic institutions are starting to encourage. I think we need to really focus on taking down these hurdles for students as we prepare undergrads to go into a variety of careers where the social and technical components are going to be more and more necessary.
So we need to provide students with the opportunity to use these tools and to understand statistics and coding in a practical manner, but we also need to more deeply ingrain the study of technology and its influence on human systems into the other topics that are important for them to take away and apply to their future careers. And I think we need to reconsider some of our current educational structures in order to do so, and to give those students the best opportunity possible.

Going backwards. Okay, there we are. My name's Salome Viljoen. I'm a lawyer, and unsurprisingly, I'm going to be talking about regulation. I think a lot about how we as a community can encourage the creation of the best versions of technology production, and I think one way to achieve a more holistic, even collective, version of how we do technology is through regulation. Regulation does not have to be scary. I think of these two quotes as guiding principles for regulation. One, "don't let the perfect be the enemy of the good," from Voltaire. And the other, "we come in peace," which comes up over and over when alien life forms interact with human life forms. Together, the two quotes embody the spirit of what I think we should carry with us when we talk about regulating. One is two sides engaging with one another in good faith, with a willingness to learn and to listen to one another's expertise and languages. The other is the willingness to realize that the 1.0 version of something is rarely the final version, right? That doesn't mean it isn't worth making and using as a starting point. So, where to begin. This is a very blurry picture, but it's a picture of someone shooting themselves in the foot. I can't address all of the complexities wrapped up in our regulatory state, or a taxonomy of regulation, or a critique of where regulation fails, in four minutes. So I'm going to start with the stereotypical assumption that all regulation is bad, and not even that, that regulating is like shooting ourselves in the foot. We take out this tool and we think it's going to fix a problem that exists out there, and then by using it, we not only fail to solve that problem, we hurt ourselves in the process. I would posit that maybe we have already shot ourselves in the foot, and now we need to figure out what we're going to do about it. How do we not only care for the foot we have shot, but also ensure that more feet are not shot in the future? One solution I would suggest is starting to think meaningfully about regulation, something we did in our past, when we were much more willing to use it as a tool, and something I would argue we've used really ambitiously in attempts that weren't failures but enormous successes. We have taken rapidly growing industries that were vitally important to the US economy and that also had big negative external effects on our society, and we've regulated them successfully. The example I'm going to use today is the EPA. Everybody loves to hate the EPA: conservatives for the fact that it exists at all, progressives for the fact that it doesn't go far enough. But in fact, the EPA has been enormously successful in regulating pollutive industries to make them healthier and safer for the environment, and it has done so in a number of ways. So where were we when the EPA started?
Well, rivers were on fire, and cities, that's Los Angeles down there, were just swathed in smog. This was clearly an unsafe production ecosystem; literally, our ecosystem was unsafe. So how did the EPA tackle this? They used a lot of different regulatory tools that I think we can learn from today. One, they prescribed and proscribed behavior, such as banning the use of pesticides like DDT, and they mandated safety practices like secondary spill containment for oil storage facilities. They set standards: the EPA sets safe standards for drinking water, directly manages 160,000 different water systems in the US, and works with states, local governments, and water suppliers to enforce those standards. They also calibrate incentives with sticks and carrots. A stick is what was historically known as the Superfund, where they would collect taxes from petrochemical industries and use that money to do hazardous waste cleanup; this is known as internalizing an external cost. And carrots are things like encouraging solar and renewables with programs like the Green Power Partnership. They also did what we call preference shaping, and Energy Star is a very successful example of preference shaping. It's a voluntary program that provides simple, credible efficiency standards; it just provides a standard and information for consumers. The EPA estimates it has saved about $14 billion in energy costs since 2006 alone. In terms of lessons from the EPA, the big-picture takeaway is that regulation doesn't have to mean the difference between no innovation and innovation. It can be the difference between safe innovation and unsafe innovation. Internalizing the costs of unsafe heavy-manufacturing production levels the playing field between unsafe innovation and safe innovation like green energy. It doesn't destroy all growth; it just influences the direction of growth in a new and innovative field. Anyway, I have lots more I could say about regulation, but I'll wrap it up there.

So, since Dean and Boaz couldn't join us today, I'm going to channel them. They laugh because, anyway, I won't go into that. I want to channel them and basically read what they sent me to say. Boaz is the founder and CEO of Bocoup, a company that has driven open source integration at Fortune 500 companies. His hope was to make the web more open, but he is currently rethinking this approach. Dean is the executive director and CEO of the Participatory Culture Foundation, a non-profit whose early focus was building open source software to drive media access in the pre-YouTube days. Today they support individuals, communities, and ecosystems that make media accessible. Throughout the early 2000s, Dean and Boaz were both deeply involved in the free and open source movement, and within the working group they have been critics of the idea that open source is inherently a force for good. These are the points they sent me: reflecting on the original aim of the free and open source software movement, to deeply empower individuals and put them in control of their computing destiny. Despite open source software being core to our modern computing infrastructure, the social benefits of this movement have not been distributed evenly; the largest beneficiaries are corporations and shareholders. On the other hand, the lay user today has less and less control over their digital selves.
The move from personal computing to the cloud has exacerbated this lack of control, with more and more processing and data storage happening in corporate data centers. And finally, calling into question the notion that building open source software is in and of itself a social good. I know both Boaz and Dean have made many of us think very differently about open source, and it's something to think deeply about as we continue to move through a world where "open is always good" is something we hear quite a bit. So definitely reach out to either Boaz or Dean if you want to learn more from them, and with that I'll turn it over to Jenny to wrap us up.

Hi y'all, I am Jenny Korn and I am a critical race scholar. What that means is that as an academic, I often look at the output of what programmers give me and I critique what they have done. For example, this is an example from the artwork that I shared at BKC, which is based on online image searches. If you put in the word professor, you actually get these results, which are not only white men, they're cartoons of white men. That means cartoons of white men show up before anybody who looks like me does. Right, that's kind of funny, and also kind of sad, okay? So there's a problem there: both women and folk of color are omitted from algorithmic results. Another example: with the spread of facial recognition technology, we have to ask ourselves how facial recognition might be used in surveillance, particularly against folk of color, people who protest in racial justice movements. How will that change their relationship with the police? How will that change our relationships with future employers in terms of potential job prospects? What we learn is that the disproportionate impacts of technology will always fall on underrepresented communities when we don't actively work to include them, okay? So I'm going to look at y'all and make eye contact, and I'm going to repeat that, because it's real serious, okay? We have to work to make sure that we include greater diversity from the community when it comes to ethics and technology. And that is the question I want to ask y'all today: how might we actually do this in a more systematic way? How do we improve social justice and not repeat injustices? I realized that as an academic, instead of focusing only on the outputs of what programmers do and critiquing them, I can actually affect programmers themselves through their training. So let's talk about who is and who isn't at the table. How inclusive are the folk in the rooms of programming and coding and computer science and engineering and open source communities? What we do know is that when we don't actively build in folk of color and actual discussions of race, then we're going to reproduce whiteness. That's going to happen. So how do we make sure different folk are at the table? One way is to promote the training of critical race theory across the academy and industry. Critical race theory holds many tenets. One of the tenets is valuing experiential learning: valuing that all of us up here are experts because we have lived through what technology has done to us, we've experienced it, and we can share our perspectives on it. Another tenet of critical race theory is that racism is ordinary. It's embedded in American society. It's everyday. It's ubiquitous, okay? Now, if you don't know why I pushed our books up there, we really need to talk after this, okay? All right.
Another tenet of critical race theory is that we tie theory to action. We actually want to challenge racial power. We want to influence policy. So again, what's one way to do that? Let's require training in critical race theory, so that the people who are creating our technological worlds can actually think better about the ethical considerations related to race. How do we do that? Well, one: instead of hoping that programmers might want to take critical race theory, let's require it systematically. Let's say that folk who are going to get a degree in computer science, which looks like this right now, also have to take critical race theory. The name of that course? Well, it could be Critical Race Theory in Technology. That's applying pressure from the academic side, by requiring that folks take this course. There's another side from which we can apply pressure, and that's industry. So let's have industry also update their tech job ads. As you see here, if you look at the minimum qualifications, everything up there is technical. There ain't one course that talks about people, okay? Not one course that talks about people. Y'all don't seem surprised, okay? That's surprising, and again, sad. So why don't we actually require that the people who get hired have had training in critical race theory? In that manner, industry gets to help set curricular standards. It also helps folk like me, trained in critical race theory, to have jobs in the future too, because then we will teach those classes. Now as I close, I do believe, I do, that's right, it's an "all" moment, y'all. It's an "all" moment, okay? I do believe that all of us have the power and the agency to influence society and to make it better. I truly believe that. One way is to contact the universities from which you've already graduated; you can ask them to include critical race theory as a required component of the curriculum. Another way: if you're in industry, if you're in a position to hire folk, if you're a manager, you can look for folk who have actually been trained in critical race theory. But the other way is informal, because I really believe in the power of dialogue, communication, casual conversation. All of us in this room now know that there's a field that exists called critical race theory. If you're a student right now, you can look for that class and take it yourself. If you're a student about to graduate, you can tell the students behind you: y'all should take critical race theory. All of us here can tell our networks that this field exists, that it's important, and that we all need to be trained in it so that we can improve society together. That's all.

And I think now we'll open for questions. I recognize there's probably a hard stop for some folks at one, but we can be here till about 1:15. We'd love to hear from you and continue our dialogue with others in the room as well. And those are all of our names; they are in the order we're seated, so if you forgot any of our names or our topic areas, just count down the list.

A question for Salome, or anyone who wants to address this. With your EPA example, there are a lot of companies in the space, so something like the Energy Star rating is a signal for consumers that actually benefits companies.
But in the tech field, there are just a few giants who can really ignore standards, because people don't have alternatives. So I'm just curious what you think about how we apply regulation in that context.

Yeah, this is my gun. Okay, it's hard to tell up here. I think the benefit of examining different regulatory metaphors or analogies is that there are a lot of different approaches. Where there are huge inside players, regulation that breaks up some of those players, or that internalizes costs across the board, can set standards that apply to everyone, so that even if consumers effectively don't have any other choice, those standards are still mandatorily placed on everyone. For example, in Europe, the General Data Protection Regulation kicks in in May, and that's going to apply to everybody. Even if you're a tiny startup versus Google, you have to comply with GDPR. The other thing you can look at is antitrust, or breaking up major players. For a long time, tobacco was controlled by Big Tobacco; Big Tobacco had an oligopoly, basically, on the tobacco industry. After we, the technical term is, sued the crap out of them, they had to pay huge fines, and it actually was good for the competitive market of tobacco production, because it enabled a lot of mom-and-pop shops to jump into that market. Now, we may not want to replicate the tobacco market, but it is an example of how regulation that focused on the big players causing a lot of the harms we were concerned about actually fostered a competitive market that favored new incoming entrants.

Hi, first of all, is this working? Thank you all for all of your work; I think this is really important work that is being done here. This question is really for any of you, but Kathy and Ben, you mentioned this in particular, the sort of eng/non-eng or SWE/non-SWE divide. Other than personally valuing this cross-disciplinary approach, how do you think that people, especially people who fall into that non-eng category, can break down this divide and make sure that there's more respect for all the different disciplines?

I'll start, and maybe Ben can follow. I actually think that the burden should fall more on the eng side. I respect that many of my colleagues here especially are working really hard to find a seat at the table, to be on equal footing with many others in these companies. This is my call-out to my colleagues at tech companies: it's our duty to bring folks who are not engineering to the table, and to value that expertise on an equal footing. Not just, oh, I'll have you at the table because it's a requirement, or I'm doing you a favor by bringing you in, but recognizing that that expertise is equally valuable, rather than the non-eng folks having this constant uphill battle to get heard. I really think it falls upon us on the eng side to bring everyone else in.

Yeah, I would just quickly add on to that. I think that both in industry and in academia, the more technical community seems to fall into a more privileged place in terms of the status it's given and things like that. One place to start is with leaders within the technical fields, whether a professor or a product manager or something like that, to bring in those outside voices and to demonstrate through their practice and their leadership that these are important voices that need to be heard.
I don't have a great answer, not being a non-technologist myself, for what someone from that side should really be doing as an individual to get into this space. Maybe someone else does.

Oh, yes, ma'am. Or we can also just make sure other fields are paid just as much as engineering, because, I mean, candidly, sometimes pay creates a hierarchy. If you're paid more, you're more valued.

Just one thought, which is that I do think it's important for all of us to underscore something Ben just said, to be really explicit that there is a power asymmetry here that has a lot to do with being able to command wealth and value. As somebody trained in a non-eng world who works in predominantly engineering spaces, the most important thing I can recognize is that I have to persuade. I can't throw rocks. I can't tell people in computer science, you have to do this. That's not working. So the critical shift has to be persuading those in power and using the tools we have to force change, the combination. I don't think it's either-or.

I have a question. I'm wondering how the people who are trained in ethics feature in this collaboration, both in academia and in industry, in your experience. By which I mean trained in moral and political philosophy, and not just the IRBs and review boards and the ethical system as it exists today.

Yeah, I think that ethics is a sticky term, and we in the Ethical Tech Working Group have talked a lot about whether or not ethics is the right term to describe what we're thinking about and what we think other folks should be thinking about. One synonym for ethics might be, I don't know, cultural and social context, right? It's not just about particular moral systems and codes; it's about broadening out the conversations within the tech sector and within academia to understand the social impact of their work, and to engage with that as something that's just going to happen. The tech industry is in the world; it's having effects on society. Companies like Facebook are often really happy to talk about their effects on society if they perceive that's going to make them money or get them prestige, but they also have these other potentially negative or complicated effects. So let's not constrain this conversation to political philosophers and philosophers, although we should have them in the conversation, but include sociologists, critical race theorists, historians; I'll put in a plug for my own discipline. And I actually think it's important to flip the script a little bit as well, and to go to the humanities and social science disciplines and say, here are some things about tech that you might not know; how can you incorporate thinking about technologies and their social context into your predominantly non-technical conversations? That's maybe in some ways an even bigger ask than asking the tech companies to think about social impacts, precisely because the humanities already see themselves as, and often are at, a disadvantage in terms of power and status in the university and in the world. But it does need to go both ways, and I think that's something we in the academy are just starting to figure out how to do.

I'll add one small thing to that, since Barbara Grosz isn't here.
Ben talked a bit about CS curricula at Harvard. Barbara Grosz and Alison Simmons, Alison on the philosophy side and Barbara on the computer science side, have partnered to embed ethics into the computer science curriculum, and they've been doing it for about three years. So it'll be interesting to see that unfold, and also to see what industry picks up from it as well.

Hi, my name is Laura. I just wanted to ask about something in connection with that. Alrighty. I come from the philosophy department, and when I said I wanted to write about algorithms in 2012, the entire philosophy department just said, that's not philosophy. So I understand where you're coming from, but what I was wondering is, do we also need to talk about what is not solved just by putting people in the same room? Because what seems to me the urgent thing is the language barrier. How do we bridge that gap? People just do not understand each other. My sister's an engineer. I asked her to explain an algorithm to me, and she just looked at me and said, I can't explain one. I can make one. Do you want one? That's as far as we got. So I was just wondering about that.

Yeah, you've raised a point that we also discuss extensively in our group: how much is missed when different professional languages speak past one another. I actually think there's a little bit of a full-cultural-immersion version of language learning, where, when we are just forced to sit in a room every Tuesday over the course of a year, you start to gain the comfort to say, hold on, back up, you just used a word, and I feel like you used it in a way that has a highly formal definition in your language. What does that mean? That takes time, it takes regular meetings, and it takes trust. But I do think the immersion of just having to sit in a room and flesh out what those languages are, one, makes it clear where the language barriers are happening, and two, creates the space to start to break them down. At the same time, we should expand the scope of what those language barriers can look like, because I think we're predisposed to say, oh, I can recognize that a critical race scholar is an expert in something, probably critiquing race, and I can recognize that a social scientist is an expert in studying society. But also, to get back to the point about experiential expertise: who else needs to be in that room who's speaking a language we don't hear? That might be all of the stakeholders who are not being considered when we do things like massively violate their privacy rights. Does anyone want to add something?

I think the other way to think about it is that language takes many formats. We have conversations, we read, we write; language exists across a lot of different media, and it might be difficult to communicate when you restrict it to just one, especially when two people speak different kinds of languages. But in addition to the kind of immersion Salome was talking about, being able to see things together, and to translate across different kinds of media, helps increase the likelihood of understanding.

I've spent the last almost six years at Microsoft Research in Cambridge, which is majority computer scientists. There are three people who come from my background.
I was trained in Native American studies and anthropology, and I'm now in a computer science lab, and also based in a computing and engineering school. It has taken me that long to be able to have a conversation with my colleagues and understand, in a rudimentary way, what their taste in problems might be. And I want to connect this to the prior question about ethics: we learn by doing. I know it's incredibly attractive to imagine we can read it and know it, or have a set of principles and follow them, but things like ethics are something we live and we do. It's not something we learn in the abstract. We have to be able to apply them and see how they change when we're interacting with a different set of people. So the biggest challenge is that we need the same practice-based learning when it comes to ethical principles: what do they look like on the ground? We need to take the exact same approach to understanding these otherwise siloed approaches to building the world and building systems, and see that they are always social. They're always gendered, always racialized, always classed, aged, abled, all of the time. And we're just beginning, which is very exciting. I'm actually incredibly hopeful. We have never really sat down and thought, wow, when we build systems, we're always also building social worlds. So what does it look like to think about it that way, and to train that way? The reason it hasn't changed is that we haven't even tried that yet. So let's try.

Sure, I was just going to put in a plug for my other discipline, which is science and technology studies, because science and technology studies is explicitly about that kind of translation work, and we should have more science and technology studies departments, and have science and technology studies engage across a lot of different fields. What's your training? I'm in science and technology studies. Like Jenny, I would also like there to be more jobs in this field.

Hey, hi, I'm Anjali. I came here as an undergrad wanting to study the intersection of computer science and government, and spent far too long, much like Ben and Jen, being told that those two things don't fit together and that I should pick. It took me four years to figure out that there were already a lot of academics answering the questions I was interested in, in fields outside of those two departments: like Luke mentioned, in science and technology studies, information and library sciences, the history of science. So I'm wondering, what are your suggestions for finding people who are already working on the questions we find interesting, perhaps in departments outside of ours? And how do we walk the line between molding our respective fields into something that is open to answering our questions, versus going somewhere else where they may already be doing that?

Yeah, I think that's a really important question, and something I spent a lot of time trying to figure out. For me, it was a lot of seeing the limits of what the people right around me were doing, and then realizing that, I mean, one of the amazing things about Harvard is how many other people are doing things outside of the engineering school, so I wasn't in a place where there was no one else.
So I sent a lot of emails, knocked on a lot of doors, and talked to a lot of people who seemed to be doing interesting things, and eventually found people I wanted to work with; that's what led me to the Berkman Center. So there's a lot of outreach you have to do to get there. The other thing, and this touches on the previous question as well, is that it's really hard to do multiple things at once. This is more true of a PhD than of an undergrad experience, but you have to have a home discipline, and one of the things that's hard about working in multiple disciplines is that you're splitting your time, and for several years you just feel like a really bad version of several things. It's only after several years of sticking with it that you realize that that collection of expertise, even though no individual field may view you as an expert, adds up to an expertise in the intersection of three fields that is even harder to come by, more rare, and more valuable. But it's hard, because right now there is such a lack of guidance along those lines. I tell people who want to do this that, although there are now a couple of other students, you're mostly on your own, and it's hard to know how many people in shoes like yours or mine don't make it: they just pick one field and never get to pursue the work they think is more important. So yes, there's a need for students to cast off on their own and have this journey, but I don't think we should be making it that hard, and we shouldn't be relying on students to do this work.

This goes back to my answer to, I think it was Jasmine's question. I think it's great that there are lots of different interdisciplinary groups doing this work, but there's a lot of responsibility on computer science departments and engineering departments to pull everyone in. I'm positive you are going to go lead engineering teams. I know you will, Anjali. And it's for people like us to really bring people in, because the other fields are knocking and saying, hey, let us in, we want to impact this space. It's really on those of us in computer science departments and engineering departments at companies to bring those folks in and say, hey, it's important for my team to have these disciplines, I'm going to do it.

That's all great advice, and I think it can really be a struggle, particularly when you're trying to decide what your interests are. So that can be the first step: narrowing down who can be a resource for you. I would suggest joining online communities and listservs and going to the conferences available in your area. People can direct you toward others at your institution, or people you can work with even at other institutions. It also helps to feel like you're not the only one with these cross-disciplinary interests. I also think there's value in looking within each of those departments and seeing who's open to being on a committee or helping you with coursework, someone who specializes in something you're interested in even if they don't cross disciplinary boundaries themselves.
For instance, within your interest in government, if you're specifically interested in studying corruption or inequality or gendered influences on policy, find people who do that and are open to bringing it into this new domain. You can get a lot of unique knowledge that you can't find by only talking to people who have committed to the idea of going beyond their disciplines, because there's value to be had within those specific domains as well.

Because this is recorded, I'm going to take the opportunity to say: organize. Before you graduate, get at least five of your colleagues who feel the same way, go to your department head and at least three other faculty, and as a group say, I really felt like I needed to learn this and it wasn't offered to me in any course; I'm going to look to my department to offer it after I graduate. You'd be shocked at how powerful you are as alumni. I think the two strategies, companies calling out schools to say we're not getting the students we need trained properly, and students saying the same to their home departments before they exit, are incredibly powerful.

We heard a good cry for the need for critical race theory and teaching in that area, but is that unique? Are there other topics that a well-rounded engineer or a well-rounded scientist needs in order to be part of the human race?

So the question is asking about other courses besides critical race theory, right? I come from a critical background, so intersectional feminist theory is another course I would like to see. I focused this presentation particularly on race because I think race often gets overlooked, or people sometimes aren't comfortable talking about race; that's why I keep trying to push us to talk about it. But I would like to see more critical perspectives along the lines of feminism as well, and I think there are other courses that other people on this panel have taken that they would like to see too. The problem, though, and the reason I focus on critical race theory being systematically required, is that there's room for electives, but if you leave it to electives, how many programmers would take those courses? That's why I'm really pushing for systematically requiring a critical course across the board. Everybody in this room has had experience with some technology where they realized, on the user end, that they were not in the mind of the designer. So let's fix that.

Thank you for your presentations. My name's Lorelei Kelly. I'm up from DC today. I'm at Georgetown at the Beeck Center for Social Impact and Innovation, and I work directly with members of Congress and in their districts on building technology for democracy in the 21st century. There's a real need for systems thinking in Congress. You could say it's built as a complex adaptive system that is not adapting; somebody had the Facebook hearing from last week up on screen. My suggestion for everybody in this room right now is that you have a huge opportunity to surge the ethics and technology questions into the thousand people running for Congress right now, as a campaign issue that can then be followed up on in November and later. We're going to lose 800 years of institutional memory in November just based on retirements.
There are some real downsides to that in terms of subject-matter expertise, but in terms of process and systems, it's a huge opportunity. So what I would do, everybody in this room who is from somewhere else, and it's not like Massachusetts is the problem here, is make yourself visible at home simply by going in and having a conversation about the fact that this shiny new thing in tech is ethics, even though we should have started with ethics. Every member of Congress crosses every single one of these issues at some point, in policymaking or institutional responsibilities. But the other opportunity you have here at Harvard is the boot camp for new members of Congress at the Kennedy School in November, December, January, and any of the off-sites. Just make sure somehow this is included, and if they don't let you on the agenda, do an unconference or some other kind of informal organizing in the periphery of the actual training. I'm sure you can hook it up if you start now.