All right, folks, I think we're gonna get started. Great. Good afternoon. I'm Kevin Bankston. I'm the director of the Open Technology Institute here at New America, which is dedicated to ensuring that everyone has access to an internet that is both open and secure. I wanna thank you for joining us here today at New America for our conversation about Facebook after Cambridge Analytica: what should we do next? If you're not sure what I'm talking about, you might be in the wrong room. But just to level set: once upon a time, there was a fast-growing social network called Facebook that hoped to grow even faster by becoming a platform for other apps. And so in 2010, it launched the Graph API, an application programming interface that allowed app developers to access and use data from Facebook users who'd signed up to use their apps. But there was a big privacy catch. Not only could app developers obtain data from their users, but also from all the friends of those users. And although nominally Facebook had notified users of this setup through its privacy policies, and there was a not particularly easy-to-find privacy setting for adjusting what data your friends could share about you, the default on that setting gave apps incredibly broad access to friends' data, and most ordinary users had little understanding of what was going on. And so for about four years, until Facebook tightened up access to friends' data with the updated Graph API 2.0 in 2014-2015, untold thousands of app developers siphoned tons of data off of Facebook from people who didn't even use their apps.
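[Editor's note: to make the friends'-data mechanism concrete, here is a hedged sketch in Python of the kind of request an app could make in the Graph API 1.0 era. The endpoint shape, field names, and token value are illustrative assumptions for this sketch, not a verbatim reproduction of Facebook's historical API; the point is simply that one consenting user's access token could unlock profile fields for friends who had never installed the app.]

```python
# Illustrative sketch only: endpoint path and field names are assumptions,
# modeled loosely on the v1.0-era Graph API, where friends_* permissions
# let an app read data about a user's friends -- people who never
# installed or consented to the app themselves.
BASE = "https://graph.facebook.com/v1.0"

def friends_data_url(user_id: str, fields: str, access_token: str) -> str:
    """Build the URL an app would request to pull profile fields
    for ALL of one consenting user's friends in a single call."""
    return f"{BASE}/{user_id}/friends?fields={fields}&access_token={access_token}"

# One user ("me") grants the app a token; the request below reaches
# the likes, location, and birthday of every one of their friends.
url = friends_data_url("me", "likes,location,birthday", "APP_TOKEN")
print(url)
```

The asymmetry this sketch illustrates is the heart of the scandal: the consent gate sat in front of one user, while the data returned belonged to hundreds of their friends.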
And the primary guardrails protecting that data from misuse after it left Facebook's platform were simply Facebook's terms of service for those app developers, telling them they should only use the data for providing the service users had signed up for, and that they shouldn't, for example, sell it all to a spooky political consulting company that wanted to build psychographic profiles on voters in order to better manipulate them. Of course, now we know that is exactly what happened: that in 2014 a researcher named Alexander Kogan used a survey app called This Is Your Digital Life, was able to attract 270,000 Facebook users, and through access to those users' friends' data, was able to obtain personal information about, well, we're not sure, but we heard from Facebook yesterday up to 87 million Facebook users. Kogan then sold that data to Cambridge Analytica, a political consulting firm that worked with the Trump presidential campaign and the Brexit campaign, has bragged about influencing other political outcomes in Mexico, Australia, and Kenya, and based on recently released undercover recordings, has apparently used bribes and sex workers as part of its toolbox for influencing political candidates. Which brings us to last month, when we learned about how Cambridge Analytica had obtained this data. We also learned that Facebook has known about Kogan's passing of data to Cambridge Analytica since late 2015, but did little to confirm that this misappropriated data had been deleted other than demanding that Cambridge Analytica certify that it had done so, while Facebook also continued to allow Cambridge Analytica to advertise on its platform until just before last month's story broke.
This story has led to a firestorm of renewed concern over the state of privacy online, generally and on Facebook specifically, just as controversy has already been raging for over a year about how several of the big tech platforms were subverted to help spread foreign state-sponsored propaganda during the US presidential election, and in some other elections since then as well. So now, as Facebook is losing billions of dollars in stock value due to lost public trust and is promising to make extensive business changes to regain that trust, as policymakers in the US and Europe are rattling the saber of regulation, and as ordinary folks only now seem to be starting to understand how Facebook actually works, or at least how it worked four to five years ago, and what that means for their privacy, the simple question is: what now? What should Facebook do? What should policymakers do? What should users demand that they do in regard to Facebook or internet platforms generally? I will be talking to FTC Commissioner Terrell McSweeny and a panel of experts about those questions, and more generally, about the state of online privacy and how we can improve it. But before we do that, I wanna pass the mic to my colleague, Rebecca MacKinnon. She runs an independent project housed within OTI called Ranking Digital Rights that is dedicated to answering another question that's very relevant to today's proceedings: just how well are companies like Facebook protecting their users' rights? She'll briefly give a preview of how RDR's latest annual Corporate Accountability Index, being released later this month, will answer that question, and then we'll move on to my conversation with Commissioner McSweeny and then our panel with the experts. Thank you. Thanks very much, Kevin.
I don't wanna take too much of your time, other than to let you know that the Ranking Digital Rights 2018 Corporate Accountability Index is going to be launched on April 25th in New York, and we have a flyer here, and then on April 27th there will be an event here, right in this room, where we're planning to talk about it in person for people who aren't in New York. The 2017 index can be found on our website at RankingDigitalRights.org, so you can see how we evaluated companies last year. The index ranks 22 of the world's most powerful internet, mobile, and telecommunications companies on their commitments and disclosed policies affecting users' human rights to freedom of expression and privacy. And so there's a set of indicators that look very specifically at Facebook's and other companies' policies affecting how they handle user data, and it will not surprise you. You can see on our website from last year that Facebook did not perform well on those indicators, in terms of the policies that it disclosed, the quality of those policies, and also what it disclosed and didn't disclose, and you will not be shocked to hear that there wasn't a revolutionary change between 2017 and now. You can see our report when it comes out online on April 25th for all the details, all the downloadable data, and all the analysis. We'll have the event in New York on the 25th, and then a similar event here on the 27th to discuss it in person, where people will be able to go through and discuss all the results in great detail. But one other point, just in relation to this issue of user data: the industry in general is doing poorly, but Facebook's disclosures were toward the bottom of its cohort. So, that's just a little preview. Thank you. Thank you, Rebecca. We look forward to reading all about that. And now I'd love to welcome Terrell McSweeny, FTC commissioner, to chat about this issue. Commissioner. Hi there. Thanks for having me. Oh, of course.
I'm gonna start with a question that I'm also gonna pose as the first question to our expert panelists, which is: is this a tipping point? Is this, say, like a Snowden moment in the context of surveillance, where we might actually see significant changes in policy? Or is this maybe more of an Experian moment, where we'll see a lot of noise but not a lot of action? I think you mean Equifax. Oh, did I say? Oh, I'm so sorry, Experian. I think you just made your own point. Oh, goodness. Okay, well, let me start by just saying thanks so much for having me here today. I'm gonna give you my own perspective, not the official views of the Federal Trade Commission. So I'm not gonna pull any punches. And I'm also gonna be careful not to talk about what the FTC has confirmed, which is that it does have an open investigation into at least some of the conduct that is alleged here. But I think we should have a policy conversation. So I appreciate your first question, which is more or less: okay, so maybe 87 million people's information was misused, is this a big deal? And one of the things I was gonna say is, wow, we're not even talking about the fact that very detailed information on well over a hundred million people was breached not even a year ago. And unfortunately, that incident didn't have the policy tail that I'd hoped. So I certainly hope that this is a moment of change. I think it's also a powerful moment because the General Data Protection Regulation in Europe is coming into implementation in May, so changes are being made in response to that. So I think that has a big impact at the same time this news cycle is having an impact on the story. And if it has an impact on just one thing, though, what I would really, really, really like it to have an impact on are the people that say to me, when we have been talking for years about better consumer protections for the digital age, that American consumers just don't care, okay? I think that is demonstrably false.
I think we're getting some good evidence that people do care, that consumer trust is incredibly important and ought to be at the top of everybody's list in terms of what they are concerned about for their businesses. And I think it's also really underscoring to me the fact that consumers are not necessarily understanding or anticipating fully all of the risks of transacting in their data on these platforms. I don't personally believe that we should be trying to put all of that risk onto individual consumers to anticipate what might happen to their data. And I think that's part of the policy conversation we ought to be having. So, currently lacking any kind of comprehensive data protection rules here in the US, the FTC is the primary consumer privacy cop on the beat through your authority to go after unfair and deceptive trade practices. And indeed you have gone after Facebook before. There's a consent decree that was negotiated with them in 2011 over some alleged deceptions around their last big privacy transition. Which leads to the question: how did this happen if the FTC, the cop on the beat, already did have this consent decree in place and presumably was policing Facebook? Yeah, I think that's 100% the right question. And look, I'm a sitting federal trade commissioner. I love the FTC. I think the people at the FTC, the staff at the FTC, are doing an incredible job with some pretty antiquated authority. It's a 104-year-old agency using its authority to protect consumers from unfair, deceptive acts and practices, and it's been able to adapt that into the online environment. But at the same time, the agency itself has always called for stronger tools.
And I think this set of facts underscores that the FTC is not strong enough as it is currently configured, with its current authorities and its current resources, to be the kind of consumer protection agency that is required for a moment in which we are connecting every part of our lives to the internet and to each other. So how do we fix that? Well, I would start by making sure the agency is adequately resourced. It's been more or less flat-funded for the last few years. At the end of the Obama administration, the Obama administration did call for an increase in resources for the agency, but it has never been funded near that level. So it's, first of all, under-resourced. That's pretty easy to fix. I think it needs to think about its configuration. One of the things that it has been doing, and something I'm very proud of from my time at the FTC, is we've been bringing more technologists into our work and bringing more researchers on staff. We have an Office of Technology Research and Investigation, called OTech, that I think is a great first step in that direction. But I think we really need to think about institutional design and whether that kind of capability ought to be significantly expanded, maybe by creation of a Bureau of Technology, just like the Bureau of Economics, so that there's more horsepower within the FTC. The FTC also needs additional authority to contract with outside experts so that it can really have resources to evaluate what it's being told. So it needs in-house expertise, and it needs some additional resources to bring that expertise in when it doesn't have it itself. I think beyond that, it has consistently called, and I think this is really important, for civil penalty authority, not just for data security and data breach violations, but for privacy as well. It needs rule-making authority that it can use for privacy and data security. It's also been studying some of the conduct that it finds very concerning.
It's been looking at the data broker industry, for example, and called for more transparency and accountability for data brokers, and I think that's really important. But beyond that, I also think it could be making the case for the consumer rights that we need in the digital age, which include things like data portability and interoperability. Those are also meaningfully pro-competitive as well. So one of the big limits on what the FTC can do, and on what sort of sticks it has to work with, is that your primary tool in regard to privacy has been authority around deceptive trade practices. So if someone misrepresents what they are doing with your data, that is within the FTC's ambit, but if they're doing something awful with your data and they're telling you about it and you've nominally consented, that's okay. How do we get past that? I mean, first off, as a starter, is notice and consent at this point a workable model? Certainly Facebook will argue, and has argued, that its users, and frankly the FTC, were aware of and had consented to and had notice that this is how Graph API 1.0 worked, this is what the product was. Where do you go from there? Well, I mean, all right, so the idea that notice and consent is a framework that can adequately protect consumers in this environment has been described as quaint, and I think that's correct, okay. So no, I don't think that we can continue to rely solely on that framework. Now, I don't even think the FTC itself is advocating that it rely solely on that framework, but the FTC does have limits to its authority, and so it does look for deception, which means if you're not telling people truthfully what's happening to their information and how it's being used, that's actionable, and I think that's very important. But of course, it's been looking at the role of consent and how that's been playing out in the marketplace.
It's been emphasizing best practices for a number of years around requiring notice that is clear and timely and that sits outside of just a long terms-of-service or privacy-policy agreement, and it's been saying that those choices need to be offered around the collection and use of sensitive information in particular. So I think it's been consistently laying out best practices; whether the industry has actually been following them, I think, is a different question, and raises the issue again of whether the FTC is strong enough, and I'm arguing obviously that it's not. I also don't think, by the way, the FTC needs to be doing this all by itself. We are sitting here a year after Congress repealed the FCC's stronger broadband privacy rules, and I think there was no justification for that. We need more than one consumer protection cop on this beat. The privacy settings are confusing. This is not a problem unique to Facebook, but it's gotten more complex. I wanted to share an anecdote that illustrates some of the problem with notice and consent at this point. When this scandal broke, I went back to look at the settings, and I've been working to some extent around Facebook privacy for a long time and I'm fairly familiar with it. But when I went into the settings, I found a setting that said "apps your friends use," and it was all about your friends: whatever your friends can see about you on Facebook, they can share with an app, subject to these checkboxes. All these checkboxes, or like 90% of them, are checked by default, but you can uncheck them. Which means that Facebook didn't actually update its privacy settings when it updated the Graph API in 2014-2015, so there's this weird vestigial setting. And so when I asked Facebook, does this setting mean anything at this point?
They were like, well, we hadn't gotten around to fixing it or changing it or getting rid of it, in part because there are some edge cases where some of those checkboxes do still matter. And they named one, and I was like, is that the only one? And it wasn't clear whether they were sure that was the only one. So they're clearly sorting that out now, but if they're not even clear on how their settings work, how can we be clear on how their settings work? Again, I mean, I think you're identifying some of the weaknesses of the model that we're using, and what I think is interesting about this, I don't know if anybody else had this experience in the room: shortly after the latest news broke, for example, I had the opportunity to sit down with my mom to go over her settings, because she was very concerned but also wanted some help in figuring out what her privacy settings actually were. So I was walking her through the flow, and we went into apps and platform, and I said, you know, would you wanna share this information through apps? This is where you do the settings, and you can turn off platform entirely if you really want to. I would turn it all off, she said. I don't know if anybody else has had that conversation with their mom, but it's kind of a familiar conversation at this point for me, and what it suggests to me, too, is that people are trying to exert choices over how their data is being used, and particularly how it's flowing out of that first-party relationship that they have with whatever service they're using, and they think they're exerting some choices, and they may not necessarily know that there are more choices and controls that they need to be looking at. Now, the FTC has been looking at this issue, okay. The FTC looked at this, for example, recently in our PayPal Venmo case, in which we said, look, you can't have default settings where you have to navigate through several different options in order to keep something private.
If you're trying to keep something private and you think you've set it to private, but then you have to do three more steps, well, that's gonna be problematic, right? And so that's a good case, I think, in that particular set of facts, where the FTC was looking at whether consumers could really navigate the settings that they were being offered. The FTC's also looked at what I think of as trickery: clever technical workarounds to asserted privacy choices. So in our case against InMobi, for example, if a consumer has said, don't track my geolocation, then running a program that serves ads by triangulating your geolocation using your Wi-Fi is probably going around that asserted privacy setting, right? So we have to find ways to make sure that technology is following the asserted privacy choices of consumers. So I think the FTC has been looking at these issues. The fact that we have more than one case already on this kind of thing suggests to me that there are some problems out there and that we need to continue to be very active here. So what role might your unfairness authority play in trying to protect consumer privacy? Well, unfairness has been particularly influential in our data security cases. These are cases involving whether security practices were reasonable or not. Unfairness is a really important authority. It's also challenging for the FTC to use, partly because of the way courts have limited it over the years. So we have to have a fairly clear likelihood of harm. And while we've been talking about harms that are not just economic harms, harms that involve invasion of private spaces, such as turning cameras on in people's bedrooms, and emotional harms associated with revenge porn and things like that, it can be very tricky for the FTC to reach some of this conduct using just the unfairness authority. And, I mean, I think this is an area where I continue to reiterate: the FTC cannot go it alone.
It needs to partner with states and other agencies as well, because one of the things that happens to the FTC when it starts to use its unfairness authority very aggressively is that Congress in the past has stepped in to try to limit it pretty severely. So the agency has been incremental and cautious in developing how it uses that authority, but with good reason. So what about market harms? You also are a competition authority. There's a lot of talk in the air about platform monopoly, or breaking these companies up, or a variety of other ideas to try and deal with the fact that they are big. What role do you see for the FTC there? Or how do you see competition intersecting with this issue? So I think competition is incredibly valuable, and I think more competition would definitely benefit consumers. One of the tricks here, though, has to do with the economics of how these markets work: because of network effects and because of the incentives within them, it's not completely obvious to me that just getting more competition is going to yield better outcomes and better protections for consumers. That's why we need additional regulation to help really direct the marketplace toward the outcomes that we want around consumer data use, data security, and privacy. So more competition is good, and I think the FTC using its competition authorities aggressively is terrific. It should also be advocating, I think, pro-competitive policies like data portability and interoperability, but I think we need to be mindful of the fact that we can't just rely on competition as a market force to correct for all of the problems that we are potentially seeing here. Well, and then there is perhaps Congress. You've talked about what Congress could do to help strengthen your agency. What might it do here to strengthen consumers' hands when we're talking about their privacy online?
Well, I think that Congress could start really thinking about what laws it needs to pass in order to better protect consumer privacy. So one thing it could definitely do is stop passing laws that undermine it, like the repeal of the broadband privacy rules. So we could start there, and we could build on that by really taking a look, again, at a number of ideas that, I'm looking around the room, a number of people here have been talking about for a while: comprehensive privacy legislation, but also real comprehensive data security legislation, cybersecurity legislation. I would argue again for more transparency and accountability for data brokers. Obviously, we've been talking about more resources and strengthening the FTC specifically as an agency; I think that's very important. I also think talking about some of the rights that consumers really deserve here, rights to and meaningful control over your data, meaningful ways to port it around, meaningful interoperability, are important conversations that we need to be having. And that's the consumer protection angle. I do wanna emphasize that one of the things that we're seeing play out in all of these stories about Facebook and Cambridge Analytica has to do with bigger issues and bigger risks than just the consumer protection harms that we're all concerned about: the potential for the technology to be used in disinformation campaigns to undermine democratic institutions. I mean, I'm personally very worried about the use of bots in filling the FCC's comment sections during the repeal of the Open Internet Order, right? So manipulation of democratic institutions is a deeply harmful thing. That's gonna require more responses than just addressing privacy and consumer protection issues. So there is this new comprehensive General Data Protection Regulation that's about to come into force next month in Europe.
Does that strengthen it, weaken it; how does that impact the argument for trying to get a comprehensive data protection bill done here in the U.S.? I really do hope it strengthens it, because if what in fact is happening is that the major technology companies are coming into compliance with GDPR, and they're complying across their platforms globally, offering the same choices to consumers in the U.S. as they are in Europe, then it seems to me a lot of the opposition, based on the supposed burden of codifying those rights for U.S. consumers, is eliminated. So I think that it could have the effect of making it easier for Congress to really think about right-sizing consumer protection for the digital age. I do hope you're right about that, and I really appreciate your taking the time to come and chat today, Commissioner. Thanks for having me. Thank you, and we're gonna invite the rest of the panel up right now. Hey gang, how are you all? So I'm gonna let everybody introduce themselves. We'll just go down the line, and what I'd like is for everyone to introduce themselves and then briefly answer the same question that I posed to the commissioner: is this a tipping point, or is this an Equifax moment rather than a Snowden moment? Please? No, starting with me. So I think this issue is gonna have legs for a little while, subject to some other massive breaking-news issue that might come up, but I think it's got some legs. I think it's been in the works for a long time, and there have been privacy questions that we've all thought about for a long time. And so I think this will make a difference. The FTC investigation will be ongoing, and I hope a spotlight will really remain on how the company responds to this: whether its responses are sufficient to protect consumers' interests in privacy and their data, but also any potential impacts its responses might have on competition.
And we can get into a little bit about this later, but I think we need to insist on very important consumer protections, while also being mindful of any unintended consequences or overcorrection that could inhibit some of the things that we love about the internet, about openness, and ensuring that competition remains vibrant and that Facebook doesn't use this as an opportunity to sort of shut down competition in the name of privacy and security. Great, and who are you? Oh, sorry. I'm Caroline Holland. I am currently a Mozilla Tech Policy Fellow working on competition in the digital ecosystem and an open and healthy internet. Great, thank you. Hi, my name is Harlan Yu. I'm the Executive Director of Upturn, a nonprofit based here in Washington, DC that focuses on technology and civil rights issues. And so I think the answer to your question is, I sure hope so. I guess we really have no way of knowing, but it does seem like, especially because it's Facebook, and because of the links that it has to political campaigns and political groups that people seem to have intense interest in, the story may have more legs. It certainly seems like Facebook, given the public pressure that it's under, is willing to make some changes, and hopefully positive changes. But I think that's gonna require an ongoing conversation between advocates and the company to really see what those changes actually are. I'm the Director, excuse me, Director of Privacy and Data at the Center for Democracy and Technology. Thanks for having me. I think the answer lies in your framing of it as a tipping point. And the reason I say that is because I think there have been little chips away at the public's trust in the internet, and even in digital systems writ large, probably, it's fair to say. So that has had the effect of building sort of a crescendo. And when it was tied to a political campaign, where there already is a great amount of, let me think of the right word. Thanks.
Thanks. Thank you. Yes, yes, thanks. That exists in the country. And then the fact that there was a company that sort of portrays itself as being friendly to consumers and their use, when in fact that's the value proposition that they are selling: Facebook says, here, we'll make this free for you because we wanna connect you. I think all of those things crescendoed into this moment. So something will happen. My hope is that it will be baseline privacy legislation in the United States, but I'm an optimist. I think, regardless, Facebook will have to face the music. It will have to change its practices, and it will have to become, at the very, very least, more transparent. I'm David Vladeck. I teach law at Georgetown Law School, and I was the director of the Bureau of Consumer Protection at the FTC when we did our investigation of Facebook. I'm not sure this is a tipping point, but it will be a significant moment in a couple of ways. One is, it depends on what Facebook says. And Facebook has a lot to answer for. There are gonna be public hearings next week; Mark Zuckerberg is gonna testify. I think a lot depends on what path Facebook decides to follow. I also think it's a tipping point in this sense: this is the first major breach of a consent decree the FTC entered with an internet giant. Yes, there was a dustup with Google shortly after it entered a consent decree. But in my view, there are major issues here about Facebook's willingness to comply with a federal order. And I think one of the things that we're gonna watch closely is: what do Zuckerberg and Facebook say about that, and how do they respond to the FTC? If this investigation goes on until next year, this may be a tipping point, because it's gonna force the agency's hand. In terms of Facebook's response, they've now announced a whole bunch of changes trying to regain user trust. They'll be simplifying their privacy settings, although a lot of that was already planned for GDPR compliance.
They're clarifying their terms of service and privacy explanations, but not yet, apparently, talking about actually substantially changing any of those terms. They stopped working with offline data brokers to facilitate ad targeting, which is good. They closed a gaping privacy gap in their people-search function that was allowing mass scraping of the public parts of profiles, and I'm wondering why they didn't fix that sooner. And just yesterday they announced that they're gonna be severely narrowing what data is available to app developers across a range of their APIs. So it seems like they're doing anything and everything they can, in the short term at least, short of significantly changing their business model or their terms of service, to tamp down concerns. So I guess the question, first to Harlan and then to the rest of the panel: what else should Facebook be doing now? And if you were in the war room at Facebook at this moment, what would you be advocating for? So the Cambridge Analytica scandal raised two related but, I think, distinct issues. Obviously we're gonna be talking a lot on this panel about user privacy and the scope of sensitive user information that Facebook makes available through its API to its app developers. But I think there's an equally vital conversation that we need to be having, that Commissioner McSweeny alluded to, which is: as Facebook starts to raise the walls of its walled garden, how is the public going to then scrutinize what's happening inside that walled garden? In particular, how Facebook's business model is vulnerable to potential abuse, and how the public finds out about and addresses some of those issues. And so the Cambridge Analytica story obviously got a lot of attention, as I was saying, because of its links to political groups and political campaigns.
And I think what the story did was intensify people's interest in the ways that Facebook data and the Facebook platform were potentially being used to manipulate both our elections and the political discourse in this country and elsewhere. And so from that, Facebook has promised some amount of additional transparency, especially after its own internal investigation of Russian interference. And it recently said that it wants to establish a new standard for ad transparency that would, quote, help everyone, especially political watchdog groups and reporters, keep advertisers accountable for who they say they are and what they say to different groups. And so I just wanna spend a few minutes talking not about the user privacy side but actually about ad targeting. When I talk about ads here, I'm really talking about any message on Facebook that's touched by money. Anyone, whether you're a company or not, can spend money to boost your message to a certain segment of Facebook's users. And so your mind probably gravitates toward consumer products: Chase has a new credit card that I want to target to certain consumers. We're obviously talking also about political campaigns and political groups that want to run issue ads. And we're talking about other actors, perhaps nation-states, that want to intentionally spread misinformation or to exploit certain political and social divisions in our society. But I'm also talking about ways that advertisers might be using Facebook's ad targeting platform to prey on vulnerable consumers. These are, for example, ads that are trying to find patients for fraudulent opioid rehab centers, or aggressive ads for for-profit colleges, or sketchy loan repayment schemes, or ad campaigns that drive illegal discrimination, especially if we're talking about advertising in finance and in employment and in housing. And so, alongside legitimate ad targeting, there's a range of abuses that are possible in the system.
And this goes to the core of Facebook's business model, which is finding certain segments of Facebook's users and targeting specific paid messages to those users. And so the main question that I have is: how is the public, in the future, gonna be able to scrutinize this ad targeting behavior and address a lot of these abuses? And what is Facebook gonna do to help address these issues? How are they gonna be transparent about what's happening? Facebook has made a few small promises so far, but it seems clear to us at Upturn that there's a lot more the company can do, and I'll just go over four potential things Facebook could do real quick. So first, Facebook has slowly started to make advertisements on its platform available to public scrutiny. They've been doing a pilot project in Canada where, if you're a user and you go to an advertiser's page, you can actually see the list of ads that that advertiser is currently running. So in principle, all ads are visible, but it's also a very manual process. If you're a researcher, it makes it really difficult for you to even know the universe of all ads that a particular advertiser is running. In addition, many advertisers have thousands of ads where they're A/B testing different messages, and you have no idea what the scope and the reach of those ads are. And so the first thing I think Facebook could do is, in the same way that they've built a very robust API for user data, also build a robust API with search functionality for advertisements that allows the public, researchers, and journalists to scrutinize ads more effectively. The second thing is that Facebook has started to make enhanced transparency promises, especially around election ads. But that's a very narrow slice of the problem, in my opinion, right? They're looking at federal election ads specifically: ads mentioning a candidate, specifically mentioning an election, or trying to get people to vote. 
But it ignores the broader range of abuses that I've described. And so they should really be turning their attention not just to election ads but to all ads being run on Facebook's platform. Third, in order to have effective accountability, it's important not just to know the text or content of those individual ads; it's really important to know the scope and the reach of those ads. Here I'm talking about, for example, exactly what the explicit targeting criteria were for the ad campaign, right? Here's an ad: who was that advertiser trying to target? It gets a little complicated technically here, because some advertisers don't use explicit targeting criteria. They might use something like a custom audience or a lookalike audience, where they upload a list of existing voters or consumers and Facebook uses its special sauce to find a lookalike audience of Facebook users that have the same features. And so what Facebook needs to do, in addition to making the ads themselves more transparent, is to expose targeting criteria and also information about the audience that a particular ad actually reached, right? How many people? What are the demographics of that audience? And fourth, finally, to the extent that Facebook is already doing internal enforcement to take down bad ads, they should really disclose to the public a detailed accounting of all the bad ads they're taking down and the reasons they're taking those ads down. So those are the kinds of steps that I think would provide real transparency and real accountability. And these are, I think, the kinds of steps that will help raise the public's trust, because it's not just Facebook telling us that they're doing these things to try to stop these nefarious behaviors on the platform; it's actually letting the public scrutinize this and verify that it's actually the case. 
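To make the proposal concrete, here is a minimal sketch, purely illustrative, of the kind of searchable ad-archive record the panelist is describing. No such Facebook API exists; every field and function name below is invented to show the shape of the idea: the ad's content, its explicit targeting criteria, and the audience it actually reached, all queryable by researchers.

```python
# Hypothetical sketch of a searchable ad-archive record. All names here are
# invented for illustration; Facebook exposes no such public API.
from dataclasses import dataclass


@dataclass
class AdRecord:
    advertiser: str
    content: str
    targeting_criteria: dict      # e.g. {"age": "18-35", "interests": [...]}
    audience_size: int            # how many people the ad actually reached
    audience_demographics: dict   # e.g. {"18-24": 0.7, "25-34": 0.3}


def search_ads(archive, advertiser=None, keyword=None):
    """Return all ads matching an advertiser name and/or a content keyword."""
    results = []
    for ad in archive:
        if advertiser and ad.advertiser != advertiser:
            continue
        if keyword and keyword.lower() not in ad.content.lower():
            continue
        results.append(ad)
    return results


archive = [
    AdRecord("ExampleLender", "Fast loan repayment help!",
             {"age": "18-35", "interests": ["debt"]},
             120_000, {"18-24": 0.7, "25-34": 0.3}),
    AdRecord("ExampleCollege", "Enroll today, no credit check",
             {"region": "US"},
             45_000, {"18-24": 0.9, "25-34": 0.1}),
]

hits = search_ads(archive, keyword="loan")
print(len(hits), hits[0].advertiser)  # 1 ExampleLender
```

The point of the sketch is the third proposal above: a transparency record is only useful if the targeting criteria and reached-audience fields travel with the ad itself, so a researcher can query across the whole archive rather than paging through each advertiser manually.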
And so I'll just put in a plug for a report that the Upturn team is releasing in the next couple weeks, which we hope will serve as a public and advocate's guide to ad transparency and what we should be pushing Facebook for. Thanks, Harlan. Look forward to reading that. It does seem like we're building a manipulation layer into the internet that we don't fully understand and can't control, and greater transparency would be helpful in grappling with that. In the interest of transparency, this reminds me as well, I should make sure to disclose that Facebook has provided and does provide financial support for some of OTI's work, because although clearly we disagree on some things, we also are aligned on some important issues for internet openness and security, including especially the encryption debate. But so in terms of my wish list, I would add greater transparency in the sense of appearing in venues like this. We did invite Facebook, but they are busy. But I will say that they've done a lot more press calls than usual lately, on the record, and releasing transcripts and stuff like that. And obviously they are testifying to Congress, though I don't feel they had a whole lot of choice in that, but seeing more of that public engagement is and would be great. But as far as other people's wish lists if they were in the war room: any other things, or shall we move on to another question? Let me say what I think they should talk about doing. I mean, one of the real problems the Cambridge Analytica debacle shows is how little control Facebook was exercising over third parties, particularly app developers. Facebook has recently acknowledged that they don't really have contracts with third-party app developers, they don't have any remedies in case there is deliberate over-harvesting or sharing, and they obviously did no due diligence on any of the third-party app developers. 
And so when it comes to how you solve the Cambridge Analytica problem, part of it is that there has to be much greater oversight, control, and auditing of third-party developers. And yet this whole episode has proven that that part was missing, and that was one of the aims of the consent decree. One part of the consent decree required Facebook to identify threats to privacy and to plug those threats. And since third-party app developers have access to this data, that was an obvious vector for privacy violations. So one would have expected that there would have been some controls placed on app developers, and yet, as this debacle unfolds, it becomes more clear that there really were none. And so just go down the list: some kind of due diligence about who's getting access to the data; some kind of contractual lockups that give them power to require audits and oversight; some sort of certification of non-overcollection or non-sharing; audits done by Facebook or an outside party to ensure compliance. And so we're weeks or months into this, and Facebook cannot assure us that the Cambridge Analytica data isn't still floating around, or that Cambridge Analytica or Kogan, the researcher, have actually destroyed it. I teach law school. We teach students how to enforce these kinds of promises, and it doesn't appear that Facebook has any effective remedies at all to discipline third-party apps that have very broad access to consumer data. So I could go on. But what I'd like to hear from Facebook is: what are we gonna do to control this? Because, as Yogi Berra's famous line goes, this is deja vu all over again. We saw all of these problems back in 2011, and the consent decree was designed to avoid exactly the Cambridge Analytica problem. So one of the things I'd like to see is Facebook come before the senators, as Zuckerberg is gonna do next week, with a real list of things that are gonna control this part of the problem. 
I realize, and I agree with Harlan, there are lots of other problems. But in terms of providing minimal safeguards for consumer information, those are some of the things that I wanna see Facebook talk about. Now that you bring it up, the consent decree that you helped negotiate in 2011: can you talk a little bit about that? Because you seemed to indicate earlier that you do believe it's been violated. Oh yeah. So I'd love to hear first if you could enlighten the audience a little bit more about what that was about, and then what you expect or want to see from the FTC in regard to it now. Well, so again, this goes back in part to third-party access, because in November and December of 2009, Facebook made two changes to its privacy settings that pushed a lot of private information to be public and also gave third-party apps access to information they were not supposed to have. And one of the things that's ironic about the FTC's complaint is that the FTC said part of the deception was allowing third parties to get access to information about how people exercise their political views without their consent. And so, I mean, this is why I say I've seen this movie before, and it didn't end well. One of the things the FTC did was try to rein in third-party collection. And if you look at the consent decree, it draws a bright line between users, the people who actually post things, and third parties, who actually harvest things. And the goal was to limit third-party access unless there's clear notice and clear consent. Now, Facebook's gonna say, well, the settings that they had allowed sharing, mass sharing, if you used the friends-of-friends setting. On the other hand, the question the FTC was asking is: what are consumers' reasonable expectations about what that means? And so one question to ask Zuckerberg next week at his hearings is: do you really believe any of the friends thought that something like Cambridge Analytica was gonna happen to them? 
Were your notices back then, back in 2013 or whenever this happened, 2014, clear about that? I've looked at those notices. I don't think they meet that test at all. But that will be part of the FTC's inquiry, both in terms of whether the consent decree was violated and whether there were fresh violations of Section 5. And so I think that's part of my concern. The other part is that one section is devoted basically to forcing Facebook to look at vulnerabilities: where is consumer privacy in jeopardy, and how do you plug those holes? And that was designed really to respond to downloading by third-party apps. And it is quite clear in the aftermath of Cambridge Analytica that Facebook paid no attention to that part of the consent decree, because there are no controls on third-party downloading. There's no remedy for unlawfully harvesting data that you have no consent for, or for sharing that data with third parties, which is why Cambridge Analytica is such a scandal. This is months and months, this is two or three years, maybe four years, since Facebook has known about this problem. And yet it still has done nothing to fix it. And so a lot of the things that Facebook has announced, the new platform policies that Mark Zuckerberg has talked about: we've heard all this stuff before. The question is, is Facebook really serious about moving forward now? On the subject of the consent decree, I wanna jump back into what I expect Facebook would argue, which is, I think they would say: when y'all negotiated this settlement in 2011, this is how Graph 1.0 worked. These are the disclosures that were made to the users. These are the settings that they had. What changed between 2011 and 2014 to make that not okay anymore? Because their position is: we didn't violate the rules, it was Kogan who violated the rules. 
Our product was working the way it was designed until we changed it again later in 2014. Right, but the consent decree was to avoid problems with people like Kogan, right? It was to force Facebook to give clear and better notices. That was part of the consent decree. And Section 4, which was about looking at vulnerabilities, that's the key provision. And in my view, Facebook did not pay any attention to that. So the real question is: look, the question for Facebook users is, in 2013 or any time after the consent decree was entered into, would friends of friends understand the scope of the harvesting of their data? That's the question. Would the 57 million, 87 million Facebook users who had Cambridge Analytica or Kogan take their data, would they have understood they gave permission for that? And the answer, I think, is plainly no. Well, so what happens if the answer is no? Well, in terms of FTC enforcement, I don't think it really matters, because I don't think Facebook has any argument that this isn't a violation of Section 5, because Section 5 turns on what consumers reasonably expected. But I also think they don't have a defense to my view, which is that they violated multiple provisions of the consent decree. Now, what turns on that is: if there is a violation of the consent decree, there's gonna be a very substantial civil penalty. At the time we did the Google case, the civil penalty statute provided for $16,000 per violation. The FTC has always considered a violation to be harm to an individual consumer. So now, if you multiply $40,000 times 87 million, only Harlan would be able to figure out what the answer to that would be. I'm just gonna jump from $16,000 to $40,000; that statutory amount has changed. And so you're talking about an astronomical civil penalty. Obviously that would not be the starting point for the agency, but I think there's likely to be a very substantial civil penalty in this case. 
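The arithmetic the panelist gestures at is simple to check. Treating each affected user as one violation at the current statutory ceiling gives an outer bound, not a predicted fine:

```python
# Back-of-the-envelope ceiling on the civil penalty discussed above:
# statutory maximum per violation times the number of affected users.
per_violation = 40_000        # current statutory maximum, per the panel
affected_users = 87_000_000   # Facebook's upper estimate of affected users

ceiling = per_violation * affected_users
print(f"${ceiling:,}")  # $3,480,000,000,000 (about $3.48 trillion)
```

Which is why the speaker immediately notes that a number like this would never be the agency's starting point; it only illustrates the scale of exposure a consent-decree violation could create.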
Well, so we're talking about what the FTC might do. Right. There's also the question of what Congress might or should do. And to start that conversation, I'll move over to Michelle. Great. So, Congress: what should you do? I think yelling at Mark Zuckerberg is a start. It's not necessarily gonna make change, though. And at some point I wanna go back to consent decrees, because I think in some ways this illustrates where there are some vulnerabilities and weaknesses in the consent decrees themselves, where perhaps Congress could, maybe in a more discrete way, or at least in baseline privacy legislation, make some fixes to give them more teeth. For example, in the Google case, I think the fine was $24 million, which is like half a day's profit. But it was the most we could get. I know. So this is my point: to make it really matter and actually have some heft when the FTC levies a fine. So I think what Congress should do is not remake the GDPR. And for those of you who are in advocacy, you might know that this is not exactly in line with a lot of what advocates are saying. I think the GDPR is fantastic. It's coming into force, and it's forced companies to incorporate a lot of user rights into their platforms, products, and services. But I don't think that we need to duplicate it, for that reason. That's not to say that there aren't elements of the GDPR that are instructive and could be great in a baseline privacy law. But I don't think that enacting a baseline privacy law needs to replicate the failure of consent. Consent has a place, but I think instead what we should be looking at is expectations. And so I think the way that Congress should imbue a baseline privacy law is with the idea of a person's expectations, as defined by: what kind of user agency do they have? What sort of transparency is available? And what sort of accountability is attached to those things? 
So, sort of looking at the idea that the way we interact with these platforms is obscured. In other words, you really can't meaningfully consent for the most part, and everybody knows this. You don't see the hundreds of eyes that are looking at you as you post something on Facebook, and all of this is by design. So the idea is to push to the forefront that if you're going to imbue expectations into your platform, and you're going to use these values of agency, transparency, and accountability, then some of those changes have to come from the design side. So, creating some design standards. Now, this isn't to create some paternalistic law that says you have to have fonts of this size, that kind of thing. It's more about what sorts of interactions will make clear to a person what the true value proposition is. A person I know put it very well and said: when the companies leverage your data 100 times, that's like a price increase that you don't know about. Right? You get nothing out of that except the free service. But that argument, I think, is ringing hollow now. And so, to the extent that there are ways to do this, and there are design principles that allow for more transparency, more accountability, more agency, not just the GDPR's, I think those need to be imbued into a law. I also think, and just going back to accountability because I think that's so crucial, there's the idea of making public disclosures and drawing on some of the other laws that exist. For example, making CEOs certify public disclosures on a quarterly basis. Sarbanes-Oxley does this, right? It forces the CEOs to have skin in the game. I think other areas are auditing requirements and making the companies do data impact assessments. Those things are doable. They're not easy, technically, for sure; there are challenges. But I think there are discrete ways that would make a huge difference in Americans' privacy protections. So I think that's just that. Just baseline privacy. 
What you described at the front end is not a modest proposal. No, no, it's not. You know, I'll admit I share some skepticism about the value and political viability of comprehensive baseline privacy legislation in the U.S. at this stage. Sure. Considering that there was a much-worked-on proposal, with a lot of smart people focused on it, from the Obama White House in 2015 that basically wasn't good enough for either side of the debate. And I'm not sure the calculus has changed that much, but it does raise the question: if not that, if not some sort of GDPR-lite for the U.S., then what? And you mentioned a couple of neat targeted things; the CEO certification idea and impact assessments are great ideas. What other, more targeted things? I mean, to the extent there are staff here who are wondering, what can I write right now that my boss could introduce and look impactful on this issue? Or, what can we do that's really strong, that will strike fear into the heart of Facebook and other companies and perhaps impact their behavior? What should we be doing? Well, to be frank, you know, striking fear in their hearts is not really what I'm concerned with. To me, my eye is on the ball of: how do we get protections for people? And of course, with protections should come accountability. I do think strengthening consent decrees would actually be great. It would strengthen the FTC, which, as we've heard, and as everybody in this room probably knows, truly needs more tools and more resources to be able to do its job, especially with everything that's gone on. More public transparency around consent decrees, more significant penalties for violations. Actually, just a plug: something we're gonna be coming out with is a set of very specific recommendations on consent decrees, and hopefully that's something that could get bipartisan support. I could see it. 
I think in some ways you can look at the Republicans' reform playbook and use some of the ideas of good governance to make something like that more palatable. I think another area, and David has touched on this, is the idea of data access by researchers. It's something that I feel has been sort of avoided, partly because it's a very tricky subject. We don't want to shut down innovation. We don't wanna shut down open access; that is what the internet is built on. But there are ways to create obligations for researchers that don't exist right now. For example, if you're a federally funded academic researcher, you follow the Common Rule, which means there are ethical guidelines and you go through an institutional review board. Those institutional review boards, if anybody has ever gone through them, are fairly worthless, not because the people who sit on them aren't acting in good faith, but because they don't ask for things like a terms-of-service review. That's not to say a terms-of-service review solves everything (almost all the platforms' terms, by the way, prohibit a lot of what researchers do), but it's imperative for there to be a review of that, some accountability for the researcher. There should also be, as somebody mentioned, certifications for researchers, so that they are held to some obligations, not just for what they intend to do but for how they're protecting the privacy and security of that data. Facebook's data sharing agreement was very light on details and very light on accountability, of course, and I'm not sure that wasn't by design. I think the idea is: let the data go, and then we don't have liability. 
We don't wanna create any liability. And so the other aspect would be creating a chain of liability in this ecosystem, which, again, would not be easy. But I think at the end of the day, you start with the platform or the product or service that is creating the risk, and the benefit, for the user, and then you go down the line and you decide and assign what the rules are and what the liabilities are. And I think those can be chunked off in small ways; maybe consent decrees are a part of that, maybe the certifications are another part of that. Other ideas? I'm guessing, David, you're all for strengthening the FTC and its ability to enforce consent decrees. I'm happy to repeat everything that Michelle said before, because she's absolutely spot on. I would say this: to the extent there may be smaller pieces of the privacy issue that Congress might tackle, one is data brokers. This would get at many, but not all, of the problems. And I think, given some of the breaches that we've had and some of the problems with large data brokers, there needs to be something like the Fair Credit Reporting Act, but much stronger, for data brokers. I mean, the fact is people are worried about what the NSA knows about them, but Acxiom knows way more. And so do Facebook and lots of other companies, and yet we don't really have any effective regulatory tools for any of them. And these are massive data pools. And to the extent that they get merged, you have massive data seas, and that's where real risk to consumers lies. Yeah. Going back to the platforms and away from the brokers, I'm somewhat skeptical that we'll see strong specific-use restrictions in law, or requirements about consent. But I think the possibility of much stronger transparency requirements is definitely in the offing. Certainly, they've been talking about that in the context of ads, and I expect they're talking about it in the context of who actually gets your data. 
But there's also the question of political viability and timing and whatnot. So, for example, I have a crazy idea. There is already a single law that is the strongest privacy law in the United States. It's called the Video Privacy Protection Act, and it protects records about what you watched, what you rented at a video store, and now protects what you watch on Netflix. That got passed after Supreme Court nominee Bork's video records were obtained by journalists, and Congress freaked out that the same thing might happen to them. And hence, the strongest privacy law ever. I don't see any principled reason why that shouldn't extend to the content that I interact with online. I shared this idea with a staffer, who was like: sure, but that's in the criminal code, and that means it would go through Judiciary, and nothing's gonna happen in Judiciary. We gotta think of things that can go through Commerce. And so, what do we make of what's actually possible right now? What's the timing? I mean, clearly they're not gonna pass anything this year; they're basically already done because of the election. But what seeds do we need to plant now, and where might we best plant them? I'll just say briefly, and let others speak, but what happens in the fall is huge. If the Democrats retake the gavel, retake Congress, then the possibilities are much greater for a baseline privacy law or any kind of updates to privacy in general. I think the jurisdictional question is funny, because privacy is just notorious for being in 100 different committees, or at least for people believing that it should be in 100 different committees. And especially when you have a high-profile case like this, they're all scrambling to figure out how they can fit it into Agriculture and all kinds of crazy stuff. 
So I think, and great, fine, this is the way our democracy bumbles along, but I think to the extent that we can provide staffers with the correct facts about what happened, first of all. That's something I've noticed: even journalists, excellent journalists, using words like "scraping" and "access" interchangeably. Those aren't the same thing at all. And so making sure that we, as advocates, create a fact-based situation, and that they have those facts. And then offering different committees different solutions. I think it is up to groups like CDT and other groups to really work hard and make sure that the committees have information about what they could do next. I also think it's important to bring in Republicans who are interested in this issue. They hadn't said much for a little while, but now they are. And I think that's a really important development that a lot of people, especially in DC, can sort of ignore out of partisanship. And I think it's important to engage both parties in this process and explain that this is a truly bipartisan issue, or should be. Moving on to Caroline and the issue of competition, which is your expertise: what role does competition law, antitrust law, play in addressing a situation like this, if any? So I think one of the questions that we've heard after this is: well, wait a minute, maybe if we had more competition, things would be better. We've been seeing for months a lot of headlines talking about the power of big tech, the concentration and consolidation that has occurred, and, you know, can't the antitrust laws do something about this? And then the follow-on is: maybe if there were more competition and the antitrust laws were doing their job, things wouldn't be so bad. So I'm here to say: antitrust can play a limited role in some of these bigger market structure questions, but it's probably not gonna be so good at addressing the consumer privacy questions. 
Yes, if we had multiple social platforms, in an ideal world you might see competition on the basis of privacy. The trick about social platforms is that we really don't wanna have to go to six different social platforms and find all of our friends on each one. The network effect everyone talks about is that the value in this service is that the more people who are on it, the more valuable it becomes. So a fragmented market is not necessarily something that consumers would really want in the real world. You might want options, and I'll talk a little bit about some ideas for promoting competition, but I do wanna take a second to talk about the limits of the antitrust laws. Antitrust is a law enforcement function that prohibits firms with substantial market power, a monopoly if you will, from taking actions that will harm the competitive process, and it can also stop mergers that will lead to a lessening of competition. This can have some positive impact on how these platforms compete and the actions that they take. Another important tool besides the antitrust laws (unless you consider it one of the antitrust laws) is the FTC's Section 5 authority. Section 5 is believed to cover, and I believe it covers, something more than just the antitrust laws. It was passed after the antitrust laws, and so the argument goes that Congress must have meant something more than just the antitrust laws. What Section 5 does is prohibit unfair methods of competition. So query how the agency could use that authority: can it find actions that Facebook is taking that violate this principle of unfair competition? Those are somewhat limited tools. They might be able to nibble around the edges and get at bad conduct, but one of the things I've been thinking about is how to promote more competition. How do we put competitive pressure on a company like Facebook, and is that through better data portability? 
It's creating a meaningful way to port your data, machine-readable data, to some other application or service that would be able to create some sort of networking service that consumers want to go to. This then goes to some of the questions that I'm now exploring and trying to think about, where I think I still have more questions than answers: the importance of data portability and interoperability, and how the use of APIs works into this. Side note: I had been thinking about how more open APIs help promote competition in the platform space, promote innovation, and ensure the openness of the internet that we all want and that we get a lot of innovation from. And then Cambridge Analytica happened, and I was like, oh shoot, I need to step back and think: what is the right concept behind APIs? Is it more responsible use of APIs, and how do these feed into the digital ecosystem? So one of the warnings I want to put out there, something I think we should be looking for, and I think the FTC is well-tooled to do this because it has the Office of Technology and the Bureau of Competition, is to make sure that, as Facebook starts to review its APIs and its API policies, there isn't an overcorrection. And again, I want to be very careful to say I think there are a lot of things they should do to protect privacy and security, to make sure that people can't willy-nilly get access to all of our consumer data. But at the same time, make sure there isn't an overcorrection in terms of shutting down access to data, not just personally identifiable information, but data that helps developers come up with exciting new programs and apps that people want to use on Facebook and that could ultimately compete with Facebook. Facebook could have the incentive and the ability, using its APIs, to say: you know, let's not let this developer access data, because we're really worried that's a threat to us. Let's shut that down. 
Let's pause on this, because this is something that a number of us have been thinking about, including the Commissioner; you can go look at some tweets about it if you like. First, let's distinguish a few things. There's portability, which is getting your data out. Facebook does right now have a tool for getting your timeline out, but it's basically built for you to browse yourself, and it's not really suitable for uploading into another service. Let's assume that there are services out there. And they built it for that reason. They built it for you to have your data, but not for you to take it somewhere else. GDPR is gonna require some level of portability, including machine readability, so that you can move it somewhere else. It remains to be seen how people are gonna implement that, and that'll be really interesting. But then there's also the issue of: how do we come up with an environment where something can actually get big enough that you'll even want to move your data to it? And that's where you get into interoperability. Because, at least from where I'm sitting, one of the more plausible, or maybe even the only plausible, version of a network getting big enough (now that Facebook has bought the two networks that were getting big enough to compete, WhatsApp and Instagram) is to be able to leave Facebook while still being able to communicate with people on Facebook. Because really, what Facebook is selling right now is not its platform, it's the people on the platform. And if no one feels like they can leave because everyone is there, then #DeleteFacebook means nothing, and there's actually not a lot of consumer pressure on them to change. But how do we do that? What are the tools, other than consumer outrage or bully-pulpiting, to push Facebook in the direction of what it probably considers its most mortal threat, which is building doors into its walled garden? 
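A minimal sketch of what "machine-readable portability" means in practice. Everything here is hypothetical: the input layout does not reflect any real Facebook export, and the target schema is invented; the point is only that a proprietary dump becomes portable once it is reshaped into a format another service could ingest.

```python
# Hypothetical sketch: reshaping a platform's data export into a
# machine-readable, portable format another service could import.
# Both the input layout and the target schema are invented for illustration.
import json

raw_export = {
    "profile": {"name": "Alice Example"},
    "timeline": [
        {"posted": "2018-04-01", "text": "Hello world"},
    ],
    "friends": ["Bob Example", "Carol Example"],
}


def to_portable(export):
    """Reshape a proprietary export into a neutral, importable schema."""
    return {
        "schema": "portable-social/v0",  # hypothetical common schema name
        "owner": export["profile"]["name"],
        "posts": [
            {"date": p["posted"], "body": p["text"]}
            for p in export["timeline"]
        ],
        "contacts": export["friends"],
    }


portable = to_portable(raw_export)
print(json.dumps(portable, indent=2))
```

Note how the "contacts" field is where the privacy tension discussed later on the panel shows up: my posts are mine to move, but porting a friends list drags other people's data along with it.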
So I would say antitrust is a tough tool for doing that. You'd have to actually find them having substantial market power, which we could find, but then find them actually doing things that would require that sort of remedy, to say you need to make this open. I think one of the solutions is more likely legislative, or if we were to give the FTC more authority to do rulemaking, a rulemaking to require that. And again, the important thing is that it's not just helpful to have my pictures and my posts; the most valuable piece is my friends. When you think about data portability, you also still have to think about privacy. My friends are my friends, but if I'm going to port them over to some other service, are my friends okay getting pinged by that new service saying, hey, come join? So I think there are a lot of questions about what meaningful data portability to another service actually means, and the devil is really going to be in the details. It would be great to hear from developers and others who are thinking about the next great social platform they want to create: what would they need to be able to meaningfully port or capture this data and then build a service off of it? Well, what you're really talking about is forcing Facebook to become a common carrier, and that's fraught with all sorts of subsidiary problems. So I'm all in favor of interoperability, but I just don't know how you force it except through legislation. Well, there is some precedent with AIM: when AOL bought Time Warner, there was a merger condition requiring AOL to make its messenger interoperable with its biggest competitors. At this point, whether Facebook would put itself in a position where it would have to accede to such a demand seems unlikely. When it buys Amazon and Google. Yeah.
Sorry, I was just going to say, maybe it's small parts of Facebook, like Messenger; maybe that's not that small, but looking at specific communications aspects of it, like the plug-in or something. I think there's also a core tension between privacy and interoperability, right? Yesterday's announcement to close off more parts of the API: privacy advocates will cheer, but competition advocates will not. So how to get both at the same time will probably have to come from legal and policy solutions rather than any tweaking of the knob on how much user data the API exposes. I think this is going to require a lot of study, and it would be helpful to have research and empirical data. To respond a little to what David said about a common carrier: we really do need to think about whether that's what you want to do, because you need to keep in mind that we want to preserve the incentives to build the next great platform, whatever the next Facebook is going to be. If you create a policy that says, as soon as you get big and have lots of people, you need to open everything up, might that chill innovation? I don't know. Maybe the flip side is that it just forces every one of these services to be the best they can, and competition is good. You'd want them to be able to say, everything's open, but you're going to want to come to me because I have the best privacy policy, the best ad targeting, whatever it is I can provide. So an important consideration is what trade-offs there might be between policies we think could benefit consumers and the impact those policies would have on innovation.
Well, as that weighing occurs, I'll admit my greatest fear right now is that this is going to push not only the decision makers at Facebook but the rest of the industry toward, oh, let's just go walled garden all the way; it's not worth trying to play in that field. And indeed, I worry that to some extent there's a Br'er Rabbit and the briar patch thing here, where Facebook goes, oh no, please don't make us lock down our APIs even more, when really that would serve its competitive position. There were some ideas around data collaboratives that I think could be interesting for people to look at: the idea that part of the big platforms' data, and I'm not exactly sure how policy would deal with this, could be put into data collaboratives or trusted intermediaries where it could be accessed by researchers. Maybe there are ways to segment some of it so that it doesn't create the problem, at least for certain uses like research. Well, I think there's also a question about the minimum viable amount of data you actually need to re-create your social network. All these APIs aren't necessarily relevant; it's really about whether I can get my friends list out along with a piece of data for each friend that will allow me to re-identify them. But the most likely piece of data there is a phone number or an email address, and then you get into a privacy problem, and Facebook can legitimately say, well, GDPR won't let us do that, or that might violate what we've already represented. Well, GDPR is not going to apply to American data, so. Well, yeah, just that. But anyway, a lot of rich discussion there. Not to over-promise, but I do believe OTI will be doing an event on interoperability and portability within the next few months, knock on wood.
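One way that tension between re-identification and privacy is sometimes approached, sketched here purely as an illustration (the `hashed_contact_token` helper and the token scheme are my assumptions, not anything either platform does), is to export a derived token instead of the raw phone number or email, so the receiving service only learns about contacts who independently join it:

```python
import hashlib

def hashed_contact_token(email: str) -> str:
    """Derive a matching token from a contact identifier.

    The receiving service never sees the raw email; it can only match
    this token against tokens derived from its own users' emails.
    Note: a real deployment would need a keyed or salted construction
    (or a private-set-intersection protocol), since bare hashes of
    emails are trivially brute-forced.
    """
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# The exporting service emits tokens instead of raw addresses...
exported = {hashed_contact_token(e)
            for e in ["bob@example.com", "carol@example.com"]}

# ...and the importing service checks which of its own users match.
new_service_users = {hashed_contact_token(e)
                     for e in ["carol@example.com", "dave@example.com"]}
matches = exported & new_service_users  # only the overlap is learned
```

The design choice here is the point made in the discussion: the importing service can re-connect you with friends who also show up there, without being handed everyone's contact details, and without being able to ping friends who never joined.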
But in the meantime, we have a few more minutes for some questions, and there are hands shooting up. There's a mic going around; please don't speak until you have the mic, for the benefit of folks who are watching online. Amie Stepanovich from Access Now. David, you brought up the consent decree and spoke about it quite a bit, and Michelle, you also brought up the idea of audits. But audits were in this consent decree, and Facebook would have had to have gone through at least one, probably two, between then and now. And it clearly didn't do anything to push this forward. Is an audit actually something we should be pushing for in law? Was it effective in the consent decree? And if it wasn't effective, which it seems not to have been, how can you make audits better? So, the word audit is a word of many meanings. The consent decree requires biennial filings by a third party to essentially make sure Facebook is adhering to the commitments it made in the consent decree. So there have been at least two, maybe three of them, submitted to the FTC since the consent decree was filed. They're not public; I haven't seen one in several years. But that's not really what I'm talking about when I talk about audits. I'm talking about Facebook trying to make sure that third-party apps stick to whatever commitments they've made about what they're going to download and not share with third parties. One of the real problems with Cambridge Analytica is that Facebook didn't really know what Kogan was downloading, nor did Facebook have any means of making sure that data wasn't shared with third parties or sold; there was no control and there was no audit. So what I'm talking about is ways of overseeing third parties who have access to data. There need to be contractual lockups, which don't really exist, and there need to be ways of making sure after the fact that the third parties behaved as they promised.
At the moment, Facebook has simply been depending on wishful thinking that third-party apps are going to do whatever the terms of service provide. But the Cambridge Analytica debacle has shown there really are no controls. So when I used the word audit, that's what I was trying to refer to. And from what I understand, these are really assessments, which is different. You have a private assessor who's essentially cataloging benchmarks about the company, and there are ways for companies to game this. They will change their practices right before, for example, to look better for the assessor. Obviously none of this is public, but they will also not list material changes, things that would be relevant to a consent decree, that the assessor just doesn't have access to or that the company isn't telling them about because of the timing of the assessment. A formal audit would be public; it would require the company to sign off on material changes to its policies and practices before and after the audit happened, and it would create accountability tied to the consent decree, which hopefully would carry more fining authority. Yeah, let me just make one last point. It's not at all clear that Facebook reported what happened with Kogan and Cambridge Analytica to the FTC, even though I think it would have been required in one of the biennial reports. This is Courtney Radsch. I'm the Advocacy Director at the Committee to Protect Journalists. And I think it's interesting, because in several conversations around the countering violent extremism debate and the fake news debate, one of the big concerns is the lack of access to data by researchers. So there seems to be a tension between access to all this personal data and yet Facebook and other companies not turning over lists of content they've removed or censored, et cetera.
So I think we should be careful about making broad brushstrokes about institutional review boards; at least researchers have to go through something, though definitely those boards need to be more technologically adept. But how can we balance the need for more oversight and auditing, both by researchers and potentially by the judiciary as platforms decide what content is illegal outside of a judicial process, with the need to protect private data? I can take that first. At least in terms of the recommendation I was talking about, and it is true, especially when we're talking about countering violent extremism, that how to balance privacy and transparency is a really tough problem. But the recommendations I was talking about apply to ads. And Facebook already considers ads to be public, right? With the pilot they're already trying, any post that money has been spent to boost is already considered public and not private information. So at least for a large fraction of the problems beyond just elections, and it's not going to get at all of them, more transparency about ads will probably do a lot of good without really risking individual privacy that much. I'll just add, and it's not an answer to your question, that I think your question is one we all need to be asking, and it's relevant to the portability and interoperability questions as well. How do we balance the need for privacy with all these other things we need? You have Tom Wheeler, the former FCC chairman, writing in the New York Times about how we need more open APIs so we can do research on how problematic content is spreading on these networks. Yet at the same time, we have a big push to close them down because of privacy. To be fair to Facebook, if I were them, I'd be asking, what the hell do you want us to do? But I think you can look to some things that exist, right?
Maybe this is where the FIPs make sense. Sorry, the fair information practice principles. It's a data governance framework, and most privacy advocates are very well versed in the FIPs. It's outdated in some ways, but it offers a really good way to think about how to govern data well. So for example, when I say obligations on researchers, that doesn't mean restrictions in the sense of, this is a good project or this isn't a good project per se; it's about privacy and security restrictions and requirements, maybe limiting the amount of data, the scope of the data, the reason you're collecting the data, right? Those types of requirements, which really don't exist right now, but which I think would be a step forward. If we're fast, we have room for two more questions, and this gentleman's been raising his hand. Thank you. My name's David Troy, and I'm a researcher who's been looking into this Cambridge Analytica situation for over a year, so I'm pretty familiar with the internals of it. One thing that occurred to me that you might want to consider, as part of your advocacy around the consent decree and how to possibly mitigate the situation, is a technical solution: for Facebook to internally deprecate all of the user ID numbers that have been exposed so far, because clearly one of the problems here is a horse-left-the-barn-in-2015 issue. Deprecating those user ID numbers, so that you simply couldn't target them anymore, would not totally solve the problem, but it would mitigate it in large part. That's a really interesting comment. That was my thinking.
It's just something that had occurred to me, if you hadn't been thinking about it. It's something Facebook won't like very much, so it'll have a punitive aspect, but it won't crush them, and it also won't fundamentally disrupt legitimate ongoing activity from people who are obeying the rules. Thank you very much. Yeah, sure, I'd be happy to talk to you more about that. That would be great. One more question. Bob Gellman, I'm a privacy consultant here in Washington, D.C. I have a simple suggestion, thought, whatever: what if, as a result of whatever FTC investigation comes out of this, the FTC were to require Facebook to divest Instagram, or maybe some other application? That would create competition and solve some of the problems, and of course there could be other requirements, but that would be something Facebook would see as punitive, along the lines of the last questioner's idea. Could the FTC do that? You'd have to ask Terrell. When I was at the FTC, back in the Dark Ages, I don't know whether the commission would have thought of a remedy like that for a deceptive act, as opposed to one that impeded competition, and I suspect the agency would still have problems doing that now. But it's an interesting idea. Bob always has interesting ideas. It's yet another interesting idea from Bob that I think people should think about. The remedies the agency has are basically equitable remedies, and we do at times force people to do all sorts of things they don't want, like occupational bans, so this is an intriguing idea. Well, there will be plenty of other intriguing developments ongoing, including, I hope everyone enjoys or finds interesting, the testimony next week, which should give us all kinds of new things to chew on. Thank you, everybody, for coming, and thank you to the panelists for a very interesting conversation.