Hi everyone. Thank you so much for joining us today. My name is Spandana Singh, and I've been spearheading OTI's work related to algorithmic fairness, accountability, and transparency. Over the past year, OTI has published a series of four reports which look at how internet platforms use algorithmic decision-making for a range of purposes, including content moderation, the ranking of content in search results and newsfeeds, ad targeting and delivery, and making recommendations to users. Today I'm very excited to be joined by a great lineup of panelists to discuss the subject of our third report: how we can promote greater fairness, accountability, and transparency around the use of algorithmic decision-making in ad targeting and delivery systems. We will have some time at the end for our panelists to answer questions, so if you have any, please use the question-and-answer function and we will do our best to get to them.

Now I'd like to briefly introduce our panelists. First we have Joe Westby, who is a researcher on technology and human rights at Amnesty International in the United Kingdom, where he focuses on the human rights implications of big data, AI, and the power of tech. He is currently leading Amnesty's emerging program of work tracking the systemic threat to human rights posed by the surveillance-based business model underpinning the internet. Next we have Lindsay Kerr, who is Democratic staff director and chief counsel on the Senate Rules Committee under Ranking Member Amy Klobuchar. She works on a broad portfolio of issues related to campaign finance, election law, and national security. Next we have Morgan Williams, who is general counsel at the National Fair Housing Alliance, where he is responsible for leading the office's strategic and tactical legal initiatives and affairs, and where he directs NFHA's efforts to pursue pioneering litigation under the federal Fair Housing Act. And last but not least, we have Nathalie Maréchal, who is a senior policy analyst at Ranking Digital Rights, where she leads RDR's policy engagement and policy development efforts. Nathalie led the expansion of RDR's Corporate Accountability Index methodology to address human rights risks associated with tech companies' business models, with a particular emphasis on the role of targeted advertising and algorithmic systems. Thanks for joining us.

I wanted to kick things off with a question for Joe. In one of your latest reports, titled "Surveillance Giants: How the Business Model of Google and Facebook Threatens Human Rights," you outline how control over our digital lives is one of the primary human rights challenges today, and something that we would never tolerate from a government. Can you talk a little bit about how we got here, especially with regard to the use of algorithms in targeted advertising?

Thanks, and thanks for the opportunity to speak with you today. So yeah, we looked at the business model that underpins the data economy, what some have called the original sin of the internet, which is, of course, that tech companies drive profit by knowing more and more about our digital lives in order to sell more and more finely targeted ads. And I think it's now widely recognized that this model is broken. We analyzed it from a human rights perspective, and we argue that this business model is predicated on ubiquitous and invasive surveillance of people's digital lives.
It is a threat to human rights on a vast global scale, really affecting billions of people. The right we're talking about here is, of course, privacy, insofar as this business model is really the opposite of privacy: it relies on knowing more or less everything about you when you operate in the digital world, and using that to target you with ads. There are also knock-on effects, which have been widely documented, on a whole range of other human rights, including non-discrimination, freedom of expression, even freedom of thought. This is all linked to the ways that tech companies have the power to shape and control our information environment and influence our thoughts and behaviors through the use of this model on their platforms. And we've now seen examples of these harms and knock-on effects over and over again: micro-targeting of political messaging in elections, amplification of hate speech in some cases leading to actual violence, enabling of discriminatory targeting, and illegal targeting of children and collection of their personal information. This all comes back to the same problem with the ad-driven business model underpinning these platforms. We focused in particular on Google and Facebook, not just as pioneers of the model but because of their dominance over not only digital advertising but over the global public square and the channels that we rely on in the modern world to engage with everything in the digital world: search, social media, messaging, video, smartphones. This goes hand in hand with the business model: the business model has enabled these companies to grow and obtain this dominant position, and that dominance exacerbates and amplifies the harms that we're seeing.

If I may, really briefly, on the question of how we got here: I think it's really important to look back at how this has evolved over the past two decades, because we're at the point where we just accept that this is the way the internet works. We need to remember, firstly, that the internet didn't always rely on this business model and there are alternatives, and secondly, that the problems we're seeing now were entirely predictable. Almost two decades ago, the privacy campaign group EPIC testified before the Senate, warning of the long-term implications of profile-based advertising and the need for strong privacy protections. The fact that those warnings weren't heeded is why we are where we are now, and self-regulation has failed. Really, we now need to look at how we can change this business model and put stronger oversight over the ad targeting and delivery algorithms underpinning these platforms.

Thanks, Joe. Next I want to turn to Nathalie. Nathalie, in your recent report series you talk significantly about the influence targeted advertising can have and has had, amplified through, for example, Russian election interference and misinformation in advertising. Can you talk about what specifically makes these algorithmic systems so powerful, and why our focus needs to be on the business models of these internet platforms rather than just the content?

Thanks, Spandana, and thanks so much for having me.
I want to start by elaborating a bit on what Joe said about how the targeted advertising business model influences the rest of the platform, the part that makes users want to come to the platforms. Because, I'm very sorry, platforms, but nobody is coming to your dear platforms for the purpose of finding targeted advertising; that's just something we have to sit through to get to the posts from our friends and family, and whatever else we want to engage with there. To deliver targeted advertising, and to make a lot of money from it, you need two things. One, you need a lot of data about your users, both who they are and what they do. And two, you need your users to spend as much time as possible on the platform, so you have as much eyeball space, so to speak, to rent out or sell to your advertisers. That creates a set of powerful incentives that have shaped the social platforms that we know and love, or love to hate, or hate to love, perhaps, and how we engage with them.

I think most of us by now are pretty familiar with the idea that engagement, meaning how much eyeball time you spend, how much you click, how much you read, is the metric that companies use to determine how much ad space they have to sell, and so everything about how the platforms work is designed to drive engagement. That's why platforms want to show you content that they think you're more likely to click on, more likely to read, more likely to have an emotional response to. Those are things that can be measured by computers. On the other hand, there is a whole lot of other things, which I would argue are much more important, that can't be measured by computers: how true or factually accurate something is, how beneficial to society something is, whether something is a public service message, perhaps about public health, or, I don't know, a sports score or the latest celebrity cheating scandal. A computer cannot tell the difference between these things; that requires human judgment. Because companies aspire to operate at massive scale, they have to rely on this kind of automation, which means that the things that can be measured by machines end up being the things that count. The things that, again, I would argue truly count don't get measured; at best they are afterthoughts in the calculation of what content gets amplified and shown the most, and at worst they get dismissed as something that's entirely subjective and therefore not real. Right, and that's where you get the discourse around companies not wanting to be the arbiters of truth. It's not that it's not possible to discern truth. In many cases there is a difference between actual facts and complete nonsense. But it's not something that can be easily discerned by machines, or sometimes it is not discernible by machines at all.

The second bit, and this is where the data part comes in, is that in order to charge the highest rates you can as a company, you need to be able to tell your advertisers that you have access to many more eyeballs than anybody else. So there again is where the scope and the reach come in, right, and the need to rely on automation.
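To make that incentive concrete, here is a minimal, purely illustrative Python sketch of engagement-driven feed ranking as described above. The signal names and weights are invented assumptions, not any platform's actual code; the structural point is that only machine-measurable signals are inputs to the score, while accuracy and social value never appear.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_click_rate: float      # machine-measurable signal
    predicted_dwell_seconds: float   # machine-measurable signal
    predicted_reactions: float       # stand-in for "emotional response"

def engagement_score(post: Post) -> float:
    # Hypothetical weights; real systems learn these. Note what is absent:
    # factual accuracy, public benefit, and other human-judgment qualities
    # are simply not inputs to the score.
    return (2.0 * post.predicted_click_rate
            + 0.1 * post.predicted_dwell_seconds
            + 1.5 * post.predicted_reactions)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed is ordered by expected engagement, not by truth or value.
    return sorted(posts, key=engagement_score, reverse=True)
```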
And you also want to be able to tell them that you have the best data to help them reach exactly who it is they want to reach with their message. There's an old saying from the Madison Avenue heyday of advertising that half of all advertising money is wasted; the problem is that you don't know which half. The promise of targeted advertising is to let advertisers know exactly which half is being wasted, so that you only show detergent ads to people who actually buy detergent, as opposed to the members of the household who have nothing to do with detergent purchases, and so that you only show your political ads to people who are registered voters in the correct jurisdiction and who are either persuadable or likely to vote for your party but need to be motivated to actually take the step of voting.

So what's the downside of all this? Well, the downside is that, first of all, much of this data, and I would argue most of this data, is collected in illegitimate ways: either people don't know that the data is being collected, or it's collected in a coercive manner, where you can only use a service that you need in order to participate in society by allowing this data to be collected about you, or it's then used for purposes that are not the purposes you agreed to. One example would be when you give your cell phone number to a platform to use for two-factor authentication, which is a way of adding another measure of security to make sure that people who are not you can't hack into your account. And then that cell phone number is used for your advertising profile, or is sold to a host of shady actors who then use it to call you and try to sell you things, or for robocalls. That's a breach, maybe not in the sense of a breach of contract, but certainly in the moral sense. And because so much of this data is acquired in an illegitimate way, anything you do with it is fruit of the poisonous tree, to use a legal framing.

So it's illegitimate, and that, in my view, should be reason enough. But in addition, as Joe was saying, it leads to discrimination harms, and that's something Morgan will talk a lot about; I was going to go into his organization's lawsuit and the settlement, but I won't, so as not to spend too much time on that. It also leads to violations of freedom of speech, of access to information, and of freedom of thought, as Joe said. So I think we have to think about not just the specific harms that come from this, and they are real, and Morgan is going to talk about that, but also the set of incentives this model creates: how it leads companies to build content curation algorithms, the ones that decide which of the many posts you could be seeing you end up seeing, and in what order, and how that in turn impacts the way you perceive the world and the way you operate within it.

Thanks, Nathalie. Lindsay, I wanted to turn to you now. Since the 2016 US presidential election, in which we saw foreign actors influencing the election through the use of social media platforms and their targeted ad systems,
Senator Klobuchar, who you work for, has been a fierce proponent of expanding disclosure requirements for political ads online. Could you talk about the Honest Ads Act, which has been the main vehicle for these efforts, and how it has encouraged greater transparency and accountability from platforms?

Sure. Thank you, Spandana, and thanks to OTI for hosting me today, and to all the other panelists for participating. As you mentioned, Senator Klobuchar is the top Democrat on the Senate Rules Committee, which has jurisdiction over federal elections. She joined the committee not long after the 2016 election, when we learned Russia had so recently interfered in our democracy. So the top priority for Senator Klobuchar since she joined the committee has been to secure our elections and to stop the spread of disinformation online. Out of the gates there were hearings after 2016 about exactly what the Russians did, and one of the things that I think struck people so much was learning that Russian operatives bought ads on Facebook and Google. So we got to work very quickly after learning that to find a way to build more transparency and accountability on online platforms, specifically related to online political ads.

What we came up with is the Honest Ads Act, which is a very simple piece of legislation. It says that if you sell political ads online, whether they be candidate ads, where you're advocating for or against someone in an election, or issue ads, those ads are covered. The issue ad piece is really important, because we know that our adversaries have used issue ads to sow division, and they target those ads specifically. I think the Senate Intelligence Committee found that in 2016 African-Americans were the group most targeted with disinformation by our adversaries, so the issue ad provision is a really critical piece. If you're selling issue ads or political ads on your platform, those ads have to carry a clear disclaimer, and you have to maintain a disclosure file where the public, academics, and journalists can go and easily access them. That is consistent with the laws we have on the books now for ads sold on TV, radio, and in print. When those laws were written, the proliferation of online ads hadn't yet happened. Since then, technology has taken off in terms of the number of ads sold online, billions and billions of dollars' worth, and our laws have not caught up. So the Honest Ads Act, which puts online political ads under the same rules as other ads, is a really simple bill.

It has bipartisan support. When we first introduced it in 2017, right after the election, the late Senator McCain was our lead Republican on the bill, which made sense because he was, you know, the architect of McCain-Feingold, and he was very passionate about disclosure and disclaimers for campaign ads. Senator Warner is our Democratic partner; he's the Vice Chair of the Senate Intelligence Committee, so he understands probably more than anyone what exactly foreign adversaries are doing and how they're using platforms to undermine democracy. And when we reintroduced it again this year, Senator Graham, the chairman of the Senate Judiciary Committee, became the top Republican sponsor of the bill. I think, you know, when you tell most people what the bill does, it seems like common sense.
I think if you put it on the Senate floor today, it would pass, and it has already passed the House. But the Republican leader doesn't support the legislation, so that's our roadblock. It's a really simple piece of legislation, and now, with the sort of manipulation and craziness that we're seeing online, I think a lot of people would say that the Honest Ads Act is an absolute basic necessity and something that we should build off of in terms of transparency and accountability for online advertising.

And I'll say, when we introduced the legislation, we did work closely with some of the platforms to see how it could work. To Nathalie's point, because a lot of ad sales, nearly all ad sales, are done without human interaction, it is difficult. The platforms initially said, we don't think this is something we can implement, at least not entirely, and then quickly realized that they could. Platforms like Facebook and Twitter have begun voluntarily implementing parts of the Honest Ads Act: they do have disclaimers, and there is an ad library. None of it is perfect; it's not complete implementation. From Ranking Member Klobuchar's perspective, I think what she would say is, it's great that they're voluntarily doing this, but it is absolutely not a substitute for passing the bill, and we cannot just trust these companies to do this on their own. I think that's really self-evident. So I'll stop there in terms of explaining what the bill does.

Thanks, Lindsay. Morgan, I'd like to turn to you next. In March 2018, NFHA and three of its member organizations filed a lawsuit against Facebook alleging that the company's ad platform enables landlords and real estate brokers to exclude people of color, women, people with disabilities, and other protected groups from receiving housing ads. The lawsuit resulted in a settlement that drove a number of changes across the company's ad platform. Could you talk about the scope of the lawsuit and what kinds of changes it resulted in?

Sure. Thank you so much for the chance to be here. The National Fair Housing Alliance is a national organization that works to ensure compliance with the Fair Housing Act. It's based in DC, but it has a network of local fair housing centers across the country that make up our membership. It's with three of these offices that we carried out an investigation of Facebook's ad platform and ultimately pursued federal litigation in the Southern District of New York, in regards to the alleged discriminatory conduct of the platform's operations: the algorithms behind the targeting categories used in the ad targeting features, as well as other concerns associated with the operation of the ad platform and its delivery functions.

In short, in November of 2016, ProPublica ran an article about Facebook's ad platform and the extent to which you could target ads based on "ethnic affinity." Essentially, that was a term that Facebook advertisers used to refer to race and racial targeting of advertisements. There was a significant outcry as a function of this; we heard from our members across the country and from our federal legislative partners about their concerns, and we immediately reached out to Facebook with a letter noting concern over what was identified in the publication and what we had identified in some preliminary investigative work following the outcry.
Facebook was responsive and asserted that they would be changing their operations; we met with them several times in the following weeks. In February of 2017, they announced that they were changing their platform to remove these discriminatory features from their targeting operations. The following fall, ProPublica ran a second story showing that Facebook had not in fact changed its operations: ProPublica was able to run the same kinds of ads as before. At that point we shifted our focus from advocacy with Facebook to enforcement-oriented investigation under the Fair Housing Act.

The Fair Housing Act provides very strong organizational standing, the ability for private fair housing organizations to pursue investigations and enforcement against discriminatory conduct in the housing market. And the Fair Housing Act is very clear that it's illegal to express a discriminatory preference in advertising; it's not only illegal for housing providers or housing service providers to do so, but also for publishers themselves to publish those discriminatory ads. So there's a long history of enforcement against publications. NFHA, in fact, brought a lawsuit in 2009 against American Classifieds, the largest classified advertisement company in the country at the time, over both print and online ads. We were also involved in amicus engagement in the Craigslist.com Seventh Circuit decision and the Roommates.com Ninth Circuit decisions, dealing with the scope of Communications Decency Act immunity against housing discrimination claims in those cases and in subsequent cases. We have long adhered to the principle that the Fair Housing Act should not be relegated to print advertising and should apply online just as it does to print advertising, and that the immunity of the Communications Decency Act should not apply to housing discrimination. In fact, in our litigation against Facebook, in the motion to dismiss briefing, Facebook asserted Communications Decency Act defenses, and we briefed our responses. Notably, the Department of Justice filed a statement of interest in support of our arguments on whether Communications Decency Act immunity applies to Fair Housing Act claims over Facebook's operations. Because we settled, a legal decision was not rendered on those open questions, but there is currently briefing pending in other litigation in which Facebook has similarly asserted this defense, so there may be further case law on this question.

In any case, we did reach a settlement with Facebook, in partnership with a couple of other pieces of pending litigation, involving the employment sector in particular, in which Facebook agreed to change its platform so that ads regarding housing, credit, or employment services cannot engage Facebook's ad targeting tools in a number of notable ways. One, it drastically limits the targeting features to a set of targeting options that are jointly agreed upon by the plaintiffs and Facebook. Two, it restricts a lot of the targeting features around geography and other attributes that could be used to engage in discriminatory, segregating advertising practices. And three, it limits what was referred to as Facebook's lookalike feature, a tool that allows advertisers to target people similar to an existing customer base or an identified pool of individuals, replacing it with a reconstituted "special ad audience" that removes a lot of the filtering features. We still have outstanding concerns about the reconstituted special ad audience to the extent that it still targets users on the basis of their internet usage, and we think there are probably a lot of instances in which that's problematic. Under the settlement, Facebook specifically agreed to study that operation and its potentially discriminatory effects, and to confer with us about further changes to that part of the platform.
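To clarify the general idea behind lookalike targeting, here is a small, hypothetical sketch, assuming user profiles are represented as numeric feature vectors. This is not Facebook's implementation, and the function and variable names are invented for illustration; it shows why removing explicit category filters alone may not prevent demographic skew in the expanded audience.

```python
import numpy as np

def lookalike_audience(seed_profiles: np.ndarray,
                       user_profiles: np.ndarray,
                       k: int) -> np.ndarray:
    """Return the indices of the k users most similar to a seed audience.

    Profiles are numeric feature vectors (one row per person); similarity
    here is cosine similarity to the seed audience's average profile.
    """
    centroid = seed_profiles.mean(axis=0)
    norms = np.linalg.norm(user_profiles, axis=1) * np.linalg.norm(centroid)
    sims = (user_profiles @ centroid) / (norms + 1e-12)
    return np.argsort(-sims)[:k]

# If the profile features correlate with protected characteristics, the
# expanded audience can mirror the demographics of the seed list even
# though no protected category was ever selected explicitly.
```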
I'll stop here, though I could definitely ramble on further about this, and I really welcome any questions. In particular, listening to the roundtable discussion that preceded this was very compelling, and it's clearly relevant to some of the work in this case. But I would just add that there is a HUD investigation that was reopened in part as a function of the lawsuit we filed. It was a secretary-initiated complaint on which HUD issued a charge in March of 2019, shortly after we announced the settlement of our lawsuit, and that charge referred the matter to the Department of Justice for enforcement. In considering outstanding concerns around Facebook's operations, I would say they are principally twofold. One is the potentially discriminatory function of the special ad audience feature as it has been reconstituted. The other comes from research published since our settlement on the delivery function of Facebook's ad platform: the extent to which, separate and apart from the ad campaign that an advertiser programs and launches, the algorithm Facebook uses to sort how different campaigns are prioritized and delivered to viewers potentially skews results in a discriminatory way. There is outstanding concern about that. The Department of Justice has not filed suit, so, ostensibly, it's likely they are in some form of settlement negotiations, and it may be that they are focused on these two outstanding issues.

Thanks, Morgan. Now I would like to open some questions up to the panel as a whole. As we've discussed, the use of algorithms for ad targeting and delivery purposes can perpetuate harmful outcomes, biases, and discrimination. What kinds of mechanisms can be leveraged to reduce the instances of such harmful outcomes and promote greater transparency and accountability around the instances that do take place? Some examples that I know we've all discussed and mentioned in our reports are algorithmic audits and impact assessments. I'd love to get your thoughts on that, and maybe first I'll turn it over to Nathalie.

Thanks, Spandana. I think there are three main areas of intervention that are particularly fruitful for the US Congress to pursue. The first is mandatory transparency, which can and probably should start with the Honest Ads Act, but should go beyond that to apply not just to political ads but to other types of ads in general. Because if there is one lesson among the many, many lessons we've learned over the past six months of life in the time of coronavirus, it's that political ads are not the only place that harmful disinformation can circulate, right? So that's the first place. But we should also look at transparency around how all content, both paid content like advertising and user-generated content, is governed on the internet. This is very different from having the government set the rules. That is not an appropriate thing for the US government to do; in fact, it would be contrary to the First Amendment. But I think it is appropriate for the government to require certain measures of transparency about what the rules are, how they're enforced, and what the appeals process is, because there should be an appeals process, because no company is going to get it right every single time. That's not what we should aspire to; we should aspire to having a functional process. And then companies should publish numbers about the outcomes: how much content is taken down, what type of content, through what mechanism, and so on.
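As a sketch of what such mandated disclosure could look like in structured form, here is a hypothetical report record. The field names and categories are invented for illustration and are not drawn from any existing law or platform.

```python
from dataclasses import dataclass
from enum import Enum

class Mechanism(Enum):
    AUTOMATED_FLAG = "automated flag"
    USER_REPORT = "user report"
    HUMAN_REVIEW = "proactive human review"
    GOVERNMENT_REQUEST = "government request"

@dataclass
class TakedownDisclosure:
    period: str             # e.g. "2020-Q2"
    content_type: str       # e.g. "paid ad" or "user post"
    rule_enforced: str      # which published rule the content violated
    mechanism: Mechanism    # how the content was identified
    items_actioned: int
    appeals_filed: int
    appeals_granted: int    # items restored after appeal
```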
The second area, which I think is possibly both the most difficult to negotiate and the most impactful, is federal privacy legislation. There's been a lot of talk recently in Washington, and it's likely to continue, about reforming or possibly even eliminating Section 230 of the Communications Decency Act. I'm very sympathetic to, and agree with, most of the goals being pursued through this avenue, but I just don't think intermediary liability is the way to get there. Perhaps clarifying what was and was not the original intent of the CDA is worthwhile, because there is a growing legal consensus that many of the court decisions Morgan referred to were decided in ways that were just not consistent with what the CDA was actually meant to do. I think that's worth talking about, but more important than that, the way to prevent these harms from happening is through privacy legislation. We have a framework in our latest report series. I'm not sure whether we're able to share the link through the chat here within Zoom, but if not, I'll certainly be tweeting it out. It outlines what we think federal privacy legislation should focus on so that the types of harmful discrimination and illegitimate data collection we've been talking about can't take place.

The third piece, which is probably the wonkiest and perhaps least sexy to talk about, has to do with corporate governance reform: how companies govern themselves. There's been a lot of talk about how the CEOs of Facebook and Google in particular hold the dual roles of CEO and chairman of the board of directors, and on top of that there's the dual-class share structure that is very popular in the tech sector, under which they control more votes than anybody else. That puts them in a position of essentially being dictators of their companies, completely immune from the type of oversight that a board of directors and public shareholders are meant to provide. So I think there's a lot of room for intervention there. There are also actions that entities other than the US Congress should be pursuing.
You know, I think antitrust is one, and I know a House Judiciary hearing was just announced for later this month that will focus on that, but at this point I think that's mostly out of the hands of Congress and in the hands of antitrust regulators. It is indeed worth pursuing, just on a separate track from the legislative options I just mentioned.

Thanks, Nathalie. Would any of our other panelists like to jump in? Let's go with Joe first.

Okay, thanks. I'll be very brief. We certainly welcome all of the recommendations that Nathalie has just outlined; I'll just build on that with a couple of others from our point of view. Going back to this point around platform power and the dominance of a handful of tech platforms, I think we really need measures that will tackle that and encourage a more pluralistic internet. So antitrust could potentially be one of the tools in the arsenal of states and regulators, and interoperability as well, I think, is an important technical standard which should be much more strongly enforced to enable other platforms to develop. And then one other thing which we think is a really high priority is new legally binding protections to stop platforms from forcing users to accept this surveillance business model. Currently, what we see is that users effectively have no choice but to sign up to this kind of invasive profiling and targeting in order to use services that everybody now relies on. So really you have a false choice: either you don't benefit from the modern world and the digital world, or you consent to this model, which is abusive of human rights. I think giving people the ability to meaningfully opt out of these kinds of practices needs to be a first step that governments look at.

Thanks. And Morgan, just before I turn it over to you, I want to remind everyone in the audience that if you would like to ask a question, please use the Q&A function. Now, Morgan, on to you.

Thanks. Just a brief comment on the question about tools for accountability and transparency. In terms of civil rights enforcement and accountability for civil rights protections, one of the most important tools we have at our disposal is disparate impact liability, which is separate and apart from intentional discrimination claims. These are claims in which a policy that's neutral or non-discriminatory on its face has a discriminatory outcome when put into practice, and that can be a very powerful tool when talking about algorithmic operations. HUD in 2013 issued a rule that provides a lot of common understanding of what the standard for disparate impact liability should be, unifying a lot of the different circuit analyses out there. In 2015, the US Supreme Court issued a decision in Texas Department of Housing and Community Affairs v. Inclusive Communities Project that upheld disparate impact liability under the Fair Housing Act. Unfortunately, it's on the basis of this Supreme Court decision that the current administration is proposing a new disparate impact rule that would completely upend the ability to bring disparate impact cases. In particular, it includes a specific defense for any practices associated with algorithmic operations in which the inputs used in the algorithmic model are not themselves substitutes or close proxies for protected characteristics and the model is predictive of risk or some other objective. Anyone who knows algorithmic models knows that that's an absurd defense, because it's not individual inputs that determine the output of an algorithmic model but the relationships between different inputs, and so such a defense really would provide a kind of safe harbor immunity for any practices that are based on algorithms.
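To illustrate why inputs that are individually weak proxies can still jointly reconstruct a protected characteristic, here is a small, self-contained simulation on synthetic data. The correlations and feature counts are invented for illustration; this is a sketch of the statistical point, not of any real underwriting or advertising model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
protected = rng.integers(0, 2, n)  # synthetic protected-class indicator

# Ten inputs, each only weakly correlated with the protected class, so no
# single one would look like a "close proxy" on its own.
features = np.column_stack(
    [0.3 * protected + rng.normal(0, 1, n) for _ in range(10)]
)

def accuracy(scores: np.ndarray) -> float:
    """How well a score separates the two synthetic classes."""
    pred = (scores > scores.mean()).astype(int)
    return float((pred == protected).mean())

# Each individual input barely beats a coin flip (about 0.56 here)...
print("best single input:", max(accuracy(features[:, i]) for i in range(10)))

# ...but combining all of them identifies the class substantially better
# (about 0.68 here), which is why a defense keyed to individual inputs
# says little about the model's joint behavior.
print("all inputs combined:", accuracy(features.sum(axis=1)))
```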
This is a proposed rule that HUD accepted comments on this past fall, and a number of folks in the tech and civil rights worlds were very responsive; we really appreciate everyone's engagement on that, in partnership with the fair housing and broader civil rights community. HUD sent a proposed final rule to OIRA, the agency responsible for coordinating final interagency review, in early May, on May 7th, and is poised to issue that final rule any day now, so please be on the lookout. If you are interested in engaging in advocacy on this issue, there is the opportunity to schedule meetings with OIRA and confer with them about your concerns, at least about what was in the proposed rule; we don't know what's in the proposed final rule now. Additionally, we'll be looking to challenge the final rule on the basis of potential Administrative Procedure Act claims, and there may be avenues for some folks in this space to be potential litigants challenging the provisions that deal specifically with algorithms. And we'll be working with any future administration to rescind these terrible new standards and to restore the civil rights protections in this space.

Thanks, Morgan. Lindsay, would you like to jump in? I'm sorry, Lindsay, would you mind coming a little closer to your screen? It's hard to hear you.

Sure, is that better?

It's a little better, yes. Thank you.

You know, one of the things Nathalie talked about is Section 230 reform, and that's something being discussed heavily in both houses of Congress right now. There are multiple senators on both sides of the aisle who are very interested in seeing different bipartisan bills crop up. Many conservatives are motivated by the myth that the algorithms are biased against them, and on the Democratic side, we're motivated by all the reasons we're discussing here right now. I think that's something you're going to see more of; I know Vice President Biden has made clear he supports reform, as have many others. I'd also say, when we have regulators like the FTC issuing fines for wrongdoing by some of the platforms, those fines need teeth. A year ago we had the FTC decision about Cambridge Analytica and the $5 billion fine, the biggest ever, but, you know, when you make $55 billion a year, it's nothing. So I think having regulators really put force behind these fines would go a long way as well.

Thanks. Let's turn now to audience questions. The first one: what should multi-stakeholder collaborations around promoting accountability for digital advertising algorithms look like?
In particular, how can industry, government, civil society, and civil rights groups be engaged, and what are the main challenges you see in trying to push this kind of work forward? Maybe I can turn it to Joe first, if you want to take that.

Yeah, sure. I think there is a lot that can be done in the short term through multi-stakeholder collaborations. Following from the previous question, ultimately this is going to require state-based regulation and binding laws; self-regulation has failed. But at least in the short term, there are discussions that can be had between affected communities, or representatives of affected communities, and the big platforms, in order to put in place better systems and mitigation measures that the companies can adopt. Actually, I think the Stop Hate for Profit campaign, which is obviously very live at the moment, is a good example. There are a lot of issues there which overlap with what we're discussing and which really tie back to the problems of the business model, and the campaign has some really concrete recommendations, like putting in place a C-suite executive with civil rights expertise at Facebook. I think those kinds of steps would be really important and really impactful until we can put in place these longer-term laws and regulations that actually protect people across the board.

Thanks, Joe.

Yeah, so I think there's been an evolution of the civil society and multi-stakeholder discourse around platform accountability. The early years of social media platforms were filled with a lot of optimism around the positives these new tools could bring for activism, for social movements, and more. Starting around the middle of the past decade, I think there was a growing realization of the harms that could come to movements, activists, and others from relying so much on these platforms: what happens when an activist's content gets taken down, how can you get it restored, how can you appeal, and so on. And then I think the year 2016 was a big wake-up call, certainly for social movements in the West, with the twin tragedies of Brexit and the 2016 US election. And yes, I do view both of those events as tragedies, because they were just such clear self-owns by two of the world's most powerful democracies. But since then we've all collectively spent a lot of time reflecting and analyzing and looking at data to better understand exactly what happened, and I think that understanding now exists. It's disputed, to be sure, but I think we do have a solid collective understanding of what happened. What's happening now is the building of a consensus around what needs to be done, and I am encouraged by the progress of the past few years: the understanding that platforms have too much power and that they're unaccountable. I do think there's a shared vision for what the outcome needs to be: a technology sector that is more responsive to democratic oversight, to human rights, and to the needs and interests of the public, and not just to their own bottom lines and their shareholders' pockets.
But now we're talking about what tactics will be needed to get there, and I for one am hopeful that that's a conversation that's going to mature very quickly, especially if new channels for activism emerge after the 2020 election here in the U.S.

Thanks, Nathalie. Morgan or Lindsay, would you like to jump in? Okay. The next question we have is for Morgan. How do you think lawsuits such as the one that NFHA helped lead can play a role in promoting greater accountability around the use of these algorithmic systems? And what are some of the key lessons learned in the process?

Yeah. Well, I think lawsuits can play a very important role in changing the practices of individual operators, but also in informing the market more broadly about where prospective liability may lie. Google, for example, recently announced changes it was making to its platform with regard to its targeting features in the housing space, in particular around the use of zip code and other geographic targeting features as they may be used in the housing sector. That kind of enforcement can help educate the market about the scope of that kind of liability. Additionally, there are open questions about the relationship between the federal Fair Housing Act and the Communications Decency Act. Those questions may be decided in some fashion through legislative means down the road, but they may also be decided in the courts in future litigation. Though our case settled and those legal questions were left open, there are other cases that remain, and other litigation that we may pursue against other parties, that may help clarify some of those legal questions in ways that we think would expand the scope of civil rights protections in this space.

Thanks, Morgan. The next question is for the panel as a whole. We've talked about forms of legislation such as the Honest Ads Act, which is an example of policymaker action that tries to create transparency around online advertising. There have also been other bills introduced, and as Nathalie mentioned, some of them have raised First Amendment concerns because they try to direct how platforms regulate their content. So I would love to get everyone's thoughts on what policymaker action should look like in this space. Do you think the bills that have been put forth go far enough, or do they need to go further? Maybe first I can turn it over to Lindsay or Nathalie.

I think, as we discussed, this legislation is just a start. Like I said, I really think more must be done, and right now.

Thanks, Lindsay. Nathalie?

Yeah, I'll just briefly repeat what I said earlier, which is that first we need transparency about how content is governed online, both paid and unpaid content, and the Honest Ads Act is, like Lindsay said, a no-brainer place to start. It's unfortunate that there's a one-man obstacle in the way of getting that bill passed; it is what it is, at least for now. The second thing, which again I think will be the most impactful by far, is to pass strong privacy legislation at the federal level. One of the goals of that should be to make the types of discriminatory targeting and disparate impact that Morgan talked about as close to impossible to enact as can be done.
And here again, I'll point people to the Ranking Digital Rights report, which I think the OTI account graciously tweeted out the link to, for details on that. The third bit is reforming corporate governance so that companies themselves have to be accountable, not only to democratic oversight, but also to their boards of directors and to their shareholders. Once those things are in place, I think at that point it will make more sense to look at Section 230. There are some interesting proposals out there. I mentioned clarifying what the original intent was, and certainly clarifying that the CDA should not be a shield against disparate impact liability laws; that's something I think there are very sound legal arguments for. Public Knowledge also has a very interesting proposal out to carve out advertising specifically from CDA protection, and I think that's worth discussing. But again, I think it's going to be much more impactful to focus on transparency, privacy, and corporate governance reform first, especially since those three avenues don't carry the risks to freedom of expression that intermediary liability does.

Thanks, Nathalie. We're nearing the end of our time, but I want to give Morgan and Joe a chance to jump in if you have anything to add.

I'll just add one point really briefly. I absolutely agree with what was said by Nathalie and Lindsay just now on these points. I think ultimately there needs to be oversight and accountability, and that means companies being held accountable for harms produced by the optimization decisions made by their algorithmic systems, whether that's ad targeting and delivery or the engagement and personalization algorithms on the platforms. And we need transparency in order to be able to know what those outcomes are. But fundamentally, the only way we're really going to shift this is for the companies to be held legally accountable when there are human rights harms directly linked to the operations of their systems.

Great, thanks Joe. We're now nearing the end of our time. Thank you so much to everyone for joining today's event, and thank you to our wonderful panelists for taking the time to speak with us. I hope you found this conversation productive and helpful, and be on the lookout for future work that I'm sure all of us will be doing in this space. A recording will be on YouTube if you'd like to access it afterwards. And yeah, thank you and enjoy your day.