It's four o'clock, so we're gonna go ahead and get started. Good afternoon, and welcome to New America to those who are here. Welcome to the folks who are watching on the live stream. Welcome to people in the future who are watching the archived video of this event. My name's Kevin Bankston, and I'm the director of the Open Technology Institute, which is New America's internet technology and policy development wing, where we're dedicated to ensuring that everyone has access to communications technologies that are both open and secure, which in many ways is also a core theme of today's event: a deep dive into data portability, how can we enable platform competition and protect privacy at the same time, and perhaps start coming up with shorter titles. It's co-sponsored by our allies at Mozilla, who conveniently are also our neighbors in this building. Excuse me, this event is a direct follow-up to conversations that we had during an event in April on possible policy responses to the privacy controversy over Facebook and Cambridge Analytica as it was breaking. And in the wake of that scandal, internet users and policy makers have had a lot of questions on the topic of data portability, and the related concept of interoperability between services. Is my social network data really mine? Can I take my data and my network of friends with me to another platform if I'm unhappy with my current service? What does the new European privacy law demand in terms of my being able to export my data, and what are internet companies doing about it? What counts as my data that I should be able to move, and what counts as my friends' data that I shouldn't move without their permission? Why shouldn't I be able to leverage my Facebook data or communicate with friends on Facebook from another platform, or vice versa? And what the heck is an API anyway? We will tell you an answer to that question.
Those are the kinds of questions we intend to ask and try to answer today, and we have a very packed lineup. First off, we will have a brief keynote address from Congressman David Cicilline of Rhode Island. He's the ranking member of the House Judiciary Committee's Antitrust Subcommittee, and one of a few key lawmakers who have been publicly pressing on the importance of data portability, most recently including co-authoring a great op-ed on the topic in Wired. Then OTI's own senior counsel and policy technologist Ross Schulman will be taking the stage to offer a very quick and basic tour of some of the technical concepts and terminology we're gonna be using to discuss this issue on the panel, including those pesky APIs. After that I'll go ahead and introduce our panelists and start that conversation, which is sure to be a lively one considering the mix of perspectives in the room. We'll panel for about an hour, we'll Q&A for about 15 minutes, we'll wrap up around 5:45 and transition to our reception. So thank you again, all of you, for coming and watching, and without further ado I'd like to introduce Congressman Cicilline and thank him very much for joining us. Good afternoon, thank you very much. I'd like to thank Kevin Bankston for inviting me to speak at today's important and very timely event, as well as for his commitment to a deep discussion of platform competition. I'd also like to thank our esteemed panel of experts, you're in for a real treat, including former Democratic FTC Commissioner Terrell McSweeny, who is also participating today. She is one of our country's great champions of consumer protection and competition, and I'm really happy to see her committed to continuing her work on this issue, even in her personal capacity, so thank you for being here.
25 years ago, CERN, the European Organization for Nuclear Research, announced that it would release software into the public domain to provide a global information network to promote compatibility, common practices, and standards in networking and computer-supported collaboration. This software, the World Wide Web, was invented by British physicist Tim Berners-Lee to connect scientists across universities and research institutions. It included everything from technical design notes and documentation to news and educational materials. As Berners-Lee said at the time, the invention came from the realization that if everyone had the same information as me, my life would be easier. But as we all know, it didn't just make people's lives easier. The meteoric growth of the decentralized web was a revolution that changed our lives, our work, our businesses, and our entire world. Millions of good-paying jobs were created, and greater access to information promised a renewal of our democracy. Within a few years, search and browsing services were built onto the software to give people tools to communicate, share, and explore information online. And unlike the walled gardens of the early internet, like America Online and CompuServe, the web was a decentralized platform that was open and free to everyone. But today, these radical principles of openness and an even playing field have been all but abandoned and replaced by dominant platforms designed to harvest consumers' attention and data. The walled gardens of today are not just breaking the open internet, they're breaking our democracy. As Tim Berners-Lee warned in an open letter earlier this year, and I quote, what was once a rich selection of blogs and websites has been compressed under the powerful weight of a few dominant platforms. This concentration of power creates a new set of gatekeepers, allowing a handful of platforms to control which ideas and opinions are seen and shared.
What's more, the fact that power is concentrated among so few companies has made it possible to weaponize the web at scale, end quote. We have since learned that Cambridge Analytica, a foreign political consulting firm, harvested the personal information of approximately 87 million Facebook users to create Steve Bannon's psychological warfare tool. And just this week, The New York Times has reported that Facebook gave 60 mobile device makers direct access to people's data without their permission, including Chinese companies that have been identified by our intelligence community as posing serious security risks. These reports raised serious questions about whether Mark Zuckerberg misled Congress when he said that people had, and I quote, complete control over their data, and that Facebook had ended its practice of sharing friends' data with third parties without their permission. It's clear that we've reached a tipping point. An apology campaign and hollow promises are no substitute for meaningful efforts to protect user rights and strengthen consumer protections. We need pro-competitive policies that give power back to Americans in the form of more rights and greater control over their data. This starts by taking on walled gardens that block startups and other competitors from entering the market through high switching costs. This is not a new concept. Over the past century, removing barriers to entry and lowering switching costs has been a defining characteristic of promoting competition in communication networks. Before World War II, the telephone industry was so concentrated it was considered to be a lawful monopoly. By 1980, AT&T was the largest corporation in the world. It controlled more than 80% of the telephone market, earned more than $53 billion in annual revenue, and was the second largest employer in the United States behind only the federal government.
But in 1982, the Justice Department successfully concluded its case against AT&T for blocking competition in the telephone service and equipment market, resulting in a consent decree that split its long distance and manufacturing services from local telephone companies. The importance of this case cannot be overstated. Prior to the AT&T consent decree, it was widely assumed that the telephone was a natural monopoly that could only be regulated because it was immune to competition. Before the old Bell System was dismantled, there was widespread uncertainty about whether competition in communications markets would result in more innovation, better rates, and higher quality service. Congressman Jack Brooks, the Democratic chairman of the House Judiciary Committee, later stated that, and I quote, there was a discernible nostalgia among some observers for the golden age, when things were simple and unified, or should I say simple and monopolized, end quote. Yet robust anti-monopoly enforcement facilitated an explosion of competition in long distance markets, significantly lowering prices, improving products, and spurring the creation of new jobs. What's more, the introduction of competition in long distance markets also led to the deployment of fiber throughout the country, building much of the internet's backbone. But in spite of these improvements, local telephone companies maintained their monopoly power and high prices through a number of barriers to competition. One of the most effective of these was simply requiring customers to get a new phone number whenever they switched telephone providers. According to a survey, more than 80% of customers reported that they were unlikely to change their local telephone provider if it meant losing their existing telephone number.
Local businesses also hesitated to switch to a competitor because a new number also entailed steep costs of telling their customers about a new phone number, along with the potential risk of lost customers. People were locked into their telephone company simply because they couldn't take their phone number to a new carrier offering a better deal. But this high switching cost didn't just harm consumers, it also blocked rivals from entering the market and competing through better service or lower prices. One rural telephone company noted that 85% of potential customers immediately lost interest in switching to their service once they learned that they would need to change their phone numbers. In response to this problem, Congress included number portability requirements in the Telecommunications Act of 1996 to open local telephone markets to competition. This simple remedy, something that Tim Wu recently referred to as arguably the most successful effort to isolate and reduce switching costs through regulation, saved consumers billions of dollars every year and incentivized maverick competitors to offer better services and more choice. The principle of portability readily applies to dominant social media platforms like Facebook. Like telephone networks, the value of Facebook and other social networks depends on how many people use the service. But unlike telephone networks, the absence of portability requirements has made Facebook an information gatekeeper online, giving it substantial and durable control of consumers' data and attention. 54% of American adults get their news from Facebook or one of the companies that it has acquired, Instagram and WhatsApp. Facebook also accounts for 75% of mobile social media traffic, 30% of all online traffic, and more than a fifth of all online advertising revenue.
People who may want to leave Facebook are less likely to do so if they aren't able to seamlessly rebuild their networks of contacts, photos, and other social graph data on a competing service or communicate across services. And we know that giving people meaningful rights to export their social network would promote competition, not only because it worked for telephone networks, but also because it's worked more recently for social networks. Before it was acquired by Facebook, Instagram scaled from 5 million users in 2011 to 80 million users less than a year later as a direct result of open APIs that allowed Facebook and Twitter users to import their networks into the new competing service. And today you can already safely use your Facebook account to import some of your social graph info into services that Facebook doesn't directly compete with, such as Spotify. Reducing barriers to entry online will encourage social media platforms to compete on providing better privacy control and rights for customers, while forcing Facebook to compete on the merits of its service and trust rather than simply locking consumers into their network. But it's not enough to simply give people more meaningful control over their data and then walk away. First, rights to data portability must also include appropriate guardrails to protect people's privacy and security, a topic that I anticipate will be explored by our panel. Second, we need comprehensive privacy reform to ensure that people have meaningful control and knowledge over how companies use their information online. And third, portability is not a substitute for the robust enforcement of antitrust and consumer protection laws. The benefits of data portability in social networks will be lost if we continue to allow dominant platforms to perpetuate their stranglehold over commerce through serial acquisitions of potential and future competitors.
Even though number portability is an enduring legacy of the Telecom Act, many of the other pro-competitive benefits of this law were lost in the wreckage of consolidation. As Gene Kimmelman, Mark Cooper, and Magda Herrera have pointed out, enforcers relaxed ownership limitations, and mergers that were approved on the basis of theoretical and potential competition that never materialized eliminated direct competition and weakened the act's pro-competitive goals. In a similar vein, the evergreening of dominant platforms through serial acquisitions of competitive threats must come to an end. As Carl Shapiro has explained, applying tougher standards to mergers that may lessen competition in the future, as happens among technology companies like Facebook and Google, is a promising way to tighten merger enforcement. And to the extent that acquisitions do not stifle competition, enforcers should explore remedies that are designed to preserve potential competition and promote openness online, as Chris Riley and others have recommended, and as the antitrust agencies have done in the past. There is also considerable need for Congress to strengthen and modernize anti-monopoly enforcement by addressing the limitations placed on the Sherman Act by the courts. The combination of these pro-competitive proposals, something Louis Brandeis referred to as regulated competition, is essential to restoring the internet's decentralized, open, and free structure. Before closing, I want to make one final point. Reviving competition among platforms will require more than just good ideas. It will require real courage and intellectual independence to see these ideas through. 20 years ago, the Justice Department and a coalition of state attorneys general, including my colleague Senator Blumenthal, successfully sued Microsoft for harming innovation, competition, and consumers through its monopolization of computer operating systems.
As Senator Blumenthal recently noted, challenging the anti-competitive conduct of Microsoft, a well-liked company, was not a popular decision. It took courage and creativity by lawyers who were willing to challenge entrenched ideas. This case opened the personal computing markets to innovation and new technologies in the same way that the AT&T consent decree enabled fiber broadband development by opening long distance networks to competition. Neither case was safe, easy, or popular, but that is the price of openness and progress. Thank you, and I look forward to the panel's discussion. Thank you, Congressman, so much for your comments today. We're glad that you could be with us to offer your thoughts on an important topic. Good afternoon. My name is Ross Schulman. I'm a senior counsel and a senior policy technologist at OTI, and I wanted to take a few minutes here to go over some key technical concepts before our panel gets into the weeds. So we're here today to talk about data portability. It's been a big discussion, particularly in the last few months or so. In the wake of the Cambridge Analytica scandal, users are asking how they can go about moving their data. Do they actually have control over their data or not? But it's also because of the new EU privacy law. I don't know if anybody's heard about it. You might have gotten an email. The General Data Protection Regulation, GDPR, in addition to all of its privacy requirements, also requires companies to offer data portability to their data subjects. But what exactly is data portability? So paraphrasing the GDPR, these are not the exact words, but basically it's the ability of a user of an online service to extract an archive of the data they've provided to or stored with that service in a structured, commonly used, machine-readable format suitable for transfer to a different service of that person's choosing.
So today, for example, Google Takeout, which is the Google service for portability, gives you the ability to select which Google services you want to export data from, and lets you choose what format to receive them in. So here, just for an example, I'm trying to export my contacts, and I can get them either in the CSV format or in a vCard format. And I'll talk more about formats in just a minute. Twitter, meanwhile, just sort of gives you one button, and you download your entire archive of tweets in one go. The archive, if you can read the sort of text file that comes with it on the bottom there, gives you a human-readable version, which is an HTML file that contains basically all of your tweets in it. It looks like Twitter. But it also delivers you at the same time copies of those tweets in two different machine-readable formats, CSV and JSON. Facebook's process is similar to Google's. Again, you can select sort of what information it is you want to download, and then what format you want it in as well. It's worth noting that all of these processes, with the possible exception, because I have Fitz sitting in the front row here, of Google's, are today a whole lot more robust than they were even six to eight weeks ago, which I think shows you the impetus that the GDPR has put under these companies to do a little bit of a better job here. Meanwhile, Google, Microsoft, and some other contributors are working on an open-source project called the Data Transfer Project, trying to develop a simple common interface for moving files directly between services, moving data. For example, this is a demo screenshot. It's a mock-up. It's not actually a working prototype yet. But you can see here a user is trying to export their photos directly into Microsoft's OneDrive from Google Photos. And in the middle here, there's this sort of permission step where the user is giving both services the permission to transfer that data.
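To make that format choice concrete, here's a minimal sketch in Python of what converting a contacts export from CSV to vCard might look like. The contact data, column names, and vCard fields here are invented for illustration; real export files use richer schemas than this.

```python
import csv
import io

# Hypothetical contacts export, in the kind of CSV a takeout tool might
# produce; the names and fields here are invented for illustration.
CSV_EXPORT = """Name,Email,Phone
Ada Lovelace,ada@example.com,+1-555-0100
"""

def csv_to_vcards(csv_text):
    """Convert each CSV row into a minimal vCard 3.0 entry."""
    cards = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        cards.append(
            "BEGIN:VCARD\nVERSION:3.0\n"
            f"FN:{row['Name']}\nEMAIL:{row['Email']}\nTEL:{row['Phone']}\n"
            "END:VCARD"
        )
    return "\n".join(cards)

print(csv_to_vcards(CSV_EXPORT))
```

The point of offering both formats is exactly this: because each is structured and documented, a few lines of code can translate one into the other, and a receiving service can import either.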
So I've referenced a few times in the last couple of slides a key feature of effective portability, which is a common machine-readable format. So what exactly do I mean by that? Well, a machine-readable format is a file format. It's basically how the data is actually structured in a file when you download it, preferably based on an open and widely used standard, that structures the data in such a way as to be easily parsable and modifiable by a range of different computer systems, thereby making it easy to move data between different services. So a couple of examples of commonly used formats include JSON and XML, which are both just text files, but they're text files in a very specific structure. So for example, until recently, Facebook's download-your-data tool only gave you the HTML web page that I was telling you about. And that's great if you want to just go back and read your old Facebook posts. But it's not so great if you want to try to feed that into a new service. About a month ago, it also began offering JSON, presumably, again, as part of its GDPR compliance program. And this is just an example from my own downloaded data that I pulled yesterday. And it doesn't look like a whole lot to you and me, but this is actually a posting of a picture on my timeline, presumably of my son, because that's basically the only thing I post. It's important to note that when I download my Facebook data right now, that's strictly my data. So the content is what I have posted to Facebook. It doesn't include, for example, photos or other posts in which my friends have tagged me. And it doesn't include all of my friends' contact information such that I could easily reconnect with them on a different service, unless my friend has gone through this convoluted settings process to check a little box somewhere in the depths of their service. This isn't just Facebook's problem, though.
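To show why the JSON version matters so much more to a new service than the HTML one, here's a small sketch of pulling a post out of a hypothetical archive. The structure and field names below are invented for the example, not Facebook's actual export schema.

```python
import json
from datetime import datetime, timezone

# Illustrative archive snippet; the field names are invented for this
# example and are not Facebook's actual export schema.
ARCHIVE = """
{
  "posts": [
    {"timestamp": 1527811200,
     "title": "Posted a photo",
     "attachments": [{"type": "photo", "uri": "photos/123.jpg"}]}
  ]
}
"""

data = json.loads(ARCHIVE)
for post in data["posts"]:
    # Machine-readable means another program can pull out exactly the
    # pieces it needs: a date, a title, the attachment types.
    when = datetime.fromtimestamp(post["timestamp"], tz=timezone.utc)
    kinds = [a["type"] for a in post.get("attachments", [])]
    print(when.date(), post["title"], kinds)
```

With the HTML export, a receiving service would have to scrape a page designed for human eyes; with structured JSON, importing the same post is a few lines of straightforward parsing.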
Similarly, Twitter also lets you export the tweets you authored, but not your mentions and, again, your likes or retweets or the email addresses of the people you follow. The panel's gonna dig into this, but obviously there are good privacy arguments for why some of that may be the case. Again, it also raises competition concerns, though. So back to machine-readable formats for a moment. ActivityStreams is one example of an open standard for social media activity that uses the same JSON basis as the Facebook export, but does it in a way that is widely publicized and widely used. It defines a format for storing items, such as posts, likes, comments, et cetera, in a stream similar to what you would get from, say, a Facebook feed or a Twitter feed. So this example, for example, that was awkward, shows what we might call a follow activity, right? And so in this case, Ryan Kernigh-Hann here is following someone named Ken, and this is clipped because there's not enough room on the slide to go through the whole thing, but you get sort of the idea of it, right? We call it an open standard because it was developed at the World Wide Web Consortium and anyone can use it. Ironically, none of the major commercial social networks are offering their downloads using it, which is funny because some of them participated in developing the standard about 10 years ago. Actually, it's more used by open source, decentralized alternatives like the social network Mastodon that you may have heard of. It's just one example of the kinds of alternatives that might be able to grow and compete with widespread data portability. So what do I mean when I say an internet technology is decentralized? Well, it's a technology that relies on open standards, such that users can make use of the technology and communicate with others using the technology without having to rely on a single service provider who owns the whole platform.
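For the curious, a follow activity like the one on the slide can be written out in just a few lines. This is a minimal sketch modeled on the examples in the W3C Activity Streams 2.0 specification; the two people named here are placeholders taken from the slide, and a real activity would carry fuller identifiers.

```python
import json

# A minimal ActivityStreams 2.0 "Follow" activity, modeled on the
# examples in the W3C spec; the actor and object are placeholders.
follow = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "summary": "Ryan followed Ken",
    "type": "Follow",
    "actor": {"type": "Person", "name": "Ryan"},
    "object": {"type": "Person", "name": "Ken"},
}

# Because the format is an open standard, any service can serialize and
# parse the exact same structure without coordinating in advance.
print(json.dumps(follow, indent=2))
```

That shared, published structure is the whole trick: a Mastodon server and any other implementer can exchange this object without ever having negotiated a private format between themselves.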
Email, the web, and, once upon a time and maybe once again in the future, instant messaging were all decentralized technologies. So those technologies were based on open standards, and so anyone can run an email server that talks to other email servers, or send and receive emails from someone using a different email service. Anyone could run a web server that serves content to any web browser and can link to content on any other site. Anyone can also run a Mastodon server, which is a decentralized social network a lot like Twitter that you may have heard of, and anyone on one Mastodon server can easily talk to anyone on another Mastodon server. In other words, decentralized technologies are easily interoperable. So what's interoperability? We're going down the rabbit hole here, aren't we? Interoperability is the ability of different computer systems or software to exchange and make use of information across systems in an ongoing way. So if we think of portability as a one-time sort of copying of your data, think of interoperability as the ongoing ability to interact across services. In open systems, that's pretty straightforward. It's very easy for one website to pull data from or link to another website. It's very easy for you to email across servers. Back when a lot of our instant messaging activity was based on the open XMPP standard, you could chat across different chat services, including Google Chat and Microsoft Messenger, along with many other independent XMPP servers. However, when it comes to closed platforms like Facebook that have been built on top of the open system of the internet, what some folks might call walled gardens, interoperability, when it exists, is typically accomplished through what we call application programming interfaces, or APIs. This is the last definition, I promise. APIs are interfaces between different software applications allowing them to talk to each other and exchange data in a specifically defined way.
So what does that specifically defined way mean? Well, APIs are basically predefined ways for software to talk to each other, ways for one piece of software to let others interact with it. They can be completely private or completely open or anywhere in between. They're often used in open systems with almost no restrictions. For example, there are many weather APIs on the internet that take in a zip code and feed you back a weather report or the temperature or something like that. However, they're also used to allow and regulate access to data and users in closed systems. So we can sort of think of them as windows or doors into the walled gardens. For example, many Google systems or services have APIs. They allow access to, say, calendar items. But of course, they're closely guarded behind authentication mechanisms. Twitter's API provides a means to search tweets, post new tweets, and even manage advertising campaigns. And Facebook has APIs that the Facebook apps and connected websites can use to access data about Facebook users. Indeed, it was Graph API 1.0, the version of the Facebook platform API that was in use before 2015, which allowed apps to obtain data not only about their users, but about the friends of those users, that led to the Cambridge Analytica controversy in the first place, which puts a fine point on the core question of today's panel. How, if at all, can we ensure enough portability and interoperability to promote competition and innovation and avoid locking in the dominance of the existing platforms, while we also adequately protect privacy? So with that, let's turn to our panel and begin that conversation. I'm sure they have all the answers. Yeah, they're very easy questions. Thank you so much, Ross, that was super helpful, and thank you, Congressman, again, for your illuminating comments. I'd like to invite our panel to come ahead and join us up here and we'll get moving. After I introduce you. Fabulous, all right.
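The weather-API idea above can be boiled down to a toy sketch: a predefined request shape in, structured data out. Everything here, the function, the fields, and the readings, is invented for illustration and doesn't correspond to any real weather service.

```python
import json

# A toy "weather API" to make the idea concrete: a predefined interface
# that takes a zip code and hands back structured data. The route shape,
# fields, and readings are all invented for illustration.
FAKE_READINGS = {"20005": {"tempF": 78, "conditions": "sunny"}}

def get_weather(zip_code):
    """Handle a request like GET /weather?zip=<zip_code>."""
    report = FAKE_READINGS.get(zip_code)
    if report is None:
        return 404, json.dumps({"error": "unknown zip"})
    return 200, json.dumps({"zip": zip_code, **report})

status, body = get_weather("20005")
print(status, body)
```

A closed platform's API has exactly this shape, except the door is locked: the same kind of call only succeeds after the caller presents an authentication token proving the user granted access.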
So in order of alphabet and last name, we have Brian Fitzpatrick, also known as Fitz. I will refer to you as Fitz if you don't mind. He's the founder and currently the CEO of Tock, Inc., but in a previous life, Fitz founded and led the Data Liberation Front, the engineering team behind Google's data portability work. You've also already heard mention of Terrell McSweeny, who recently stepped down from her role as an FTC commissioner and co-authored with the congressman that Wired op-ed that I mentioned. She was actually on stage for that last Facebook event where I first publicly suggested that we'd be following up with this event. So I'm glad you could join us and I'm glad we could actually pull it off. I was a little worried that we weren't gonna make good on that promise, but we have. Then we have Neha Narula. She's the director of the Digital Currency Initiative at MIT's Media Lab and the co-author of an absolutely fascinating report on decentralized web technologies, which Ross just talked about. You can just Google or Bing or DuckDuckGo for decentralized web and it'll be in your top results. Next we have Chris Riley. He's policy director at Mozilla Corp, where he's working to promote overall internet health, including pushing for a more open, decentralized, and competitive web environment. So thank you for joining us, Chris, and thank you for co-sponsoring this event, including paying for the drinks we'll all be enjoying after the panel. Finally, we have Steve Satterfield, privacy and public policy director at Facebook here in DC. At this point, it would be appropriate for me to note that Facebook is one of the many different financial supporters of OTI's work. Side note, you can find details about all of New America's grants and gifts, including specific amounts, on our website.
Facebook's a supporter because on certain specific issues that we believe are critical to internet freedom, and most especially when it comes to opposing policies that would undermine users' rights to use strong encryption, we are allies. Of course, that doesn't mean we don't often disagree with them on other issues, including on issues of consumer privacy, and indeed I expect there may be some interesting disagreements today, especially in the wake of the latest New York Times stories about Facebook's data sharing agreements with manufacturers, but that means I'm all the more grateful that Steve was willing to engage in a constructive public dialogue today, so thank you for being here, thank all of you for being here, and enough from me. I'm gonna start by asking you each a question, but folks should feel free to chime in as they feel inspired to share additional perspectives, and hopefully it'll be fairly conversational as we move on. But Chris, I'm gonna start with you. Very basically, why and how do portability and interoperability play into Mozilla's vision of a healthy internet, and what are your hopes and dreams and fears on that score? Well, thanks for that shout out to our internet health report, which we released the second version of in February of this year. One of the great things about working at Mozilla is that the policy positions and strategies that we take, we take in order to make the internet better rather than to advance our own business interests, because we have the blessing and the privilege of being a nonprofit at the top of the organizational stack. So we have broken internet health down into a few different issues, and one of those is decentralization. It's the idea that to make the internet open, it has to be distributed and competitive and sufficiently robust to allow new ideas, new entrepreneurs, new innovators to come in and build independent businesses.
So just to take a brief step back, it's sort of trite now to say there's something different about the internet. I think everybody in this room would agree to that. Let me take that one step further and say there's something different about the digital platform world that we have today. Many, many very good things come out of that, but how to promote decentralization in a world where digital platforms play such an important, fundamental role, I think, is a different question than what we've faced in the past. So my nightmare scenario, and the thing that got me interested in this issue and willing to spend quite a lot of my personal time on it, professionally personal I should say, is the idea that this storied dorm-room-and-garage internet history that we have was predicated on the ability of competing services to communicate with each other and the ability to bootstrap the acquisition of users by partnering with others. And as we head towards more and more vertically integrated silos of technologies, will they still be offering the kinds of APIs that create hooks for third parties to come in and communicate with them and exchange data? I'm really worried that they won't. And I don't want an internet where I'm choosing among the full-scale experiences offered by a few companies and not able to pick and choose the services, the devices, the other nice applications and features that I can today. This is the history of the internet as we know it. Interoperability as provided via APIs is how the internet has become so awesome in the past few years. So when I am defending the principle of interoperability here, I'm not talking about a change. I'm talking about taking that key that has unlocked the power of the digital platform economy and keeping it effective for years to come. Cool. Well, the dream of the 90s, as it were.
One of the ways that people are trying to keep hope alive in regard to decentralized tech is actually developing open source decentralized tech projects, which is the subject of Neha's awesome paper. And the conclusion of your survey of the world of decentralized tech was that it was gonna be really hard for any of them to get adoption at scale unless there was meaningful portability and interoperability, or in some weird cases that you might wanna talk about where there's a specific community that wants its own little place to be. But can you talk more about those conclusions and where they lead you in this discussion? Yeah, definitely. Thank you for having me. So I think the paper that we wrote was written from a position where there were a lot of people who were calling for this re-decentralization of the internet and who were saying that these new decentralized technologies, many of them inspired by cryptocurrencies and platforms like Bitcoin, were going to be the solutions to our problems. And these are very rich and complex problems, around algorithmic bias and around choice and around user data and privacy and protection and interoperability. So we were very skeptical about this claim and really interested in kind of diving into this and figuring out what did make sense and what didn't make sense. And I think based on our survey, we found that there are quite a few really core problems that need to be solved before decentralized technology can really reach its potential. Many of those have to do with things like usability. A lot of this technology relies on users to manage very sensitive information and to use different forms of authentication that they might not be familiar with, and it's very complex, and we still don't have really good UIs for how to do that. And then a lot of it also had to do with business models. Decentralized technology means there is no company, there is no big business that is profiting from the technology platform.
And so you really have to start thinking about different business models. The decentralized technology of the internet came out of universities and came out of government-funded research. What we have right now is quite different. And so at the conclusion of our report, and I think we finished writing this before the summer of 2017, so it was before the plethora of initial coin offerings that are happening now, and tokens and things like that, we really wanted to see if this type of technology, tokens, could incentivize a different type of business model for the web. Because right now we essentially pay for services with our data and with our attention. And I think until that fundamentally changes, it's gonna be difficult to keep these large players from developing who hoard user data. I'm gonna turn to Terrell, who again co-authored an op-ed on this issue. And I wanted to get your perspective on the role of portability and interoperability in competition as a former FTC commissioner. Sure. Well, thanks so much for having me, and I am a former FTC commissioner, so I'm just giving you my own personal views; take them for whatever they're worth. But I really enjoyed co-authoring the op-ed with the congressman, so thank you for the opportunity. I mean, I think it's pretty basic. Data is incredibly powerful, it's incredibly important. Its value in the market depends entirely on how it's being used, right? It can be a variety of different things, but you need it to provide the high-tech services or platform or whatever. And so it's absolutely essential to have the data be able to move, and the users and the demand side of the market be able to move their data around, in order to have competition. Now, I am mindful of the limitations of competition as a market force in addressing all of the issues and problems that have been raised here, right?
I mean, there's the problem of the way these markets work, there's the problem of privacy, there's the problem of security, there's the problem of disinformation, there's the problem of access to information, there's the problem of gatekeepers to information and a range of weaponization of information; there's a lot of problems. And I think we have to be sanguine but also mindful of the fact that competition is not going to solve all of those problems. That we do need some additional tools to be brought to bear, and one of the things that we need to be thinking about here is how do we identify more specifically the problems that need to be solved? Clearly, one of the things we do know already, though, is that allowing users more control over their information, more ability to move it around, and more information about how their information is being used and who has it and when, and access to it, is probably gonna be helpful. So yeah, there is an aspect of not just portability of your own data but being able to access more data about your data and how it's moving around. I'm curious, the congressman brought up, and you all mentioned in your op-ed, you analogized to phone number portability and Congress in the Telecom Act and then the FCC making rules on that point. Do you foresee or imagine something similar in terms of Congress and the FTC or some other rule maker? Do you think that's a possibility, a likelihood? Where do you think the rubber might hit the road in terms of policy in DC on this issue? Well, I defer to the congressman about what laws he's willing to write. He's off the hook for now. But look, I think certainly a model here could be, you could see a rule for an agency like the Federal Trade Commission, my former agency, in thinking about how to put in place the rules of the road around allowing people access to their information. We see already the impact of the GDPR in improving some of the options that are available to consumers.
So I think there's definitely an argument to be made that that is one way to go about it. The FTC also has an enforcement toolkit it could be bringing to bear on some of these problems. And of course, let's not undermine the positive progress we're seeing that is essentially either a response to GDPR or self-regulation from industry themselves. So I think it's a combination of all of these different mechanisms. Yeah, certainly there's a lot that companies could do straight away. And clearly are doing, right? I mean, I think the question that we are really wrestling with here, which is a really important one, is how to make that even more meaningful for consumers. Yeah. So one interesting disjunction between number portability and the portability we're talking about here, especially if we're talking about being able to export your social network, is that with number portability, it was your phone number. But one of the issues that keeps coming up is how can I export my friend network in a way that I could re-identify them elsewhere? To the extent that it includes their data, that raises some privacy concerns, which I know was also the subject of some controversy between Google and Facebook during your tenure at the Data Liberation Front. So I definitely want to come to that issue, but let's back up. Fitz, as one of the earliest proponents of portability as a concept and one of the first engineers to focus on it: where are we coming from and where are we going? What do you think are the opportunities and challenges here? What do you think are the priorities? What do you want to see happen in this area? Well, I think it's most important that users have control over their data. That's sort of the foundational thing. And we started with data portability as a means of exporting your data. And I say export, and I differentiate that from moving, because I want to peel apart copying and deletion. Those are both very important.
I'd say fundamental rights for any of the data that you've created or input into a system. But I think that that's the very first thing. And we started this because, it was very much true, I was very much against lock-in. And again, I'm a former Google employee; I'm speaking from my personal opinions here. But a lot of that did guide what I did at Google with the creation of the Data Liberation Front team and the creation of Google Takeout. There's a lot of reasons that people should keep using a product, and I didn't think that data lock-in should be one of them. And so it expands greatly beyond that. There's a lot of issues that you noted around privacy and other people's data. But I will point out that it gets very complex very quickly. You're talking about a lot of problems. Existing legislation and GDPR tried to address some of this, but companies don't even handle it very well. And that is that there's data that you create collaboratively with other people. If I create a doc and I share it with you and you write some of it, there's this issue. And we could go on for hours about this, but I think most importantly, it's to think about: what does it take for me to take my toys, so to speak, and go somewhere else and be able to use that data, to use another product, to use a competing product? So that is sort of the foundational, most important element of that, I would say. So one issue you raise is the line-drawing question of what should I be able to get out versus what should I not? Or should I have to ask permission? How do we handle collaborative documents? How do we handle data about my friends? Do you have any instincts about where we should draw those lines? I do have a lot of feelings, but the first thing that's most important to acknowledge, and I don't think it's talked about a lot, is that when you can see someone else's data, whether it's on Facebook or whatever the case might be, you have access to that.
You can screenshot it, you can take a picture with your camera, you can copy and paste it into a document. I can sit there for four hours and copy all the personal information, whether they're married or single, sexual orientation, whatever, of every friend that I have on Facebook, as an example. It's gonna take me 10, 12 hours to do it for 1,000 people, but it's doable. And so data portability comes down to how easy is it, how fast can I do that? If I have 5,000 photos, do I have to click download on each photo, or can I go and click a button that gives me all of my photos? And so, at one point when we first started this, prior to the Cambridge Analytica era I would say, I argued that to some extent you should be able to click a button and get everything that you can see, everything you have access to. But that comes down to, in some cases, boiling the ocean, because I may be able to see thousands and thousands of documents shared with me that I don't really care about. So if we're gonna back that up, I would argue that you need some type of pointer to the information that isn't yours that you have access to, some type of globally unique identifier that will allow me to reconstruct that, whether it's a URL or an email address or something. But that's sort of table stakes for retaining the utility of this data, because just my stuff, in the example of a social network or something collaboratively created, is not particularly useful on its own. So we're gonna definitely come back in a variety of ways to this question of the graph, like getting your social graph out and what that could or should look like.
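The "pointer" idea Fitz describes, exporting full copies of the data you own but only stable references to the data you can merely see, can be sketched roughly like this. This is a minimal illustration, not any real platform's export logic; the function, field names, and URL scheme are all hypothetical:

```python
import json

# Hypothetical export rule: items you own are exported in full, while items
# you merely have access to are exported as a stable, globally unique
# reference (here a URL), so the export keeps its utility without copying
# someone else's content.
def export_item(item: dict, requesting_user: str) -> dict:
    if item["owner"] == requesting_user:
        # Your own data: a complete copy.
        return {"type": "full_copy", "id": item["id"], "content": item["content"]}
    # Not yours: just a pointer that lets you reconstruct the link later.
    return {"type": "reference", "id": item["id"],
            "url": f"https://example.com/docs/{item['id']}"}

doc = {"id": "doc-123", "owner": "alice", "content": "meeting notes"}
print(json.dumps(export_item(doc, "bob")))
```

The design choice here is the one Fitz argues for: the reference record alone is useless to a data broker, but it is exactly enough for the exporting user to re-establish the connection on another service.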
I mean, although I think it's interesting, one of your line-drawing suggestions pre-Cambridge Analytica, I suppose, was sort of: if you can see it, you should be able to export it. Yet that's exactly, I think, the reasoning that Facebook used when it defaulted to your friends being able to export data about you, because you assumed the risk that your friends could move your stuff around, and that's what everyone is now reacting against. And so that, I think, puts a really fine point on: wait, what do we actually think is best for the consumer here? So I'm gonna come to you, Steve. Just starting with the basic questions: first, on portability, what has been Facebook's approach, how has it evolved, and what do you think GDPR does and doesn't require you to do there? And then we'll get to the, I think, hairier issue around APIs and interoperability, in terms of apps being able to leverage data off of Facebook. So let's just start with portability. Great, and thanks, Kevin, and thanks, Ross, for putting this together. It was great to see the congressman; I'm a former Providence resident. So it's great to be here, everybody. So what we've been talking about at Facebook and with this community, with you all, the last couple of months, was essentially a data portability solution that we built in 2007, which is the Facebook platform. And as you know, the Facebook platform is the way that third-party developers can integrate with Facebook and, after asking a person's permission, can get access to that person's information that the person has on Facebook. So much of the conversation has been about essentially data portability, and it's a very timely panel for that reason. Most of the actions that we have taken in the aftermath of Cambridge Analytica have been to restrict the data that developers can take out, or can get permission to take out, of Facebook. And so what seems to be the trend, I think, externally for folks is that Facebook is limiting access to people's data.
But that's an incomplete picture of the kinds of things that we're doing at Facebook right now. So even as we have taken a hard look at the information that we make available through APIs and have made some tough decisions and made some changes to the way the Facebook platform works, we've been simultaneously developing and improving tools that can help people export their data from Facebook. Ross showed you one, which is Download Your Information, a service that's been around for a long time actually, but got a pretty serious makeover in the last couple of months in connection with GDPR, but also as part of a general effort to improve that tool, which needed improving. So we are simultaneously taking a hard look at the way the platform operates and also trying to improve the other tools that we give to help people take their data out of Facebook, and we think these are complementary efforts. So on platform, what we are doing is we have limited the data that a developer can ask a person for without going through an app review process, where we look at the permissions that the developer is seeking and ask whether those make sense in light of the functionality provided in the app, the value to people, and that kind of thing. So at this point, a developer can get, without going through app review, just a person's name, a profile picture, and an email address, and the person always has the choice not to provide the email address. Developers can still ask for additional permissions; they just have to go through our app review process, although we have limited the number and types of permissions that developers can ask for. We're also, as you know, reviewing apps that had access to the information that was provided through the Graph API, which Ross discussed, which was the large amount of data that a person used to be able to take out of Facebook using the Facebook platform.
We're reviewing those apps, we're looking for signs of misuse, and where we see misuse, we'll let people know and we'll take action against the developer. But all of this work on platform is complemented by the development of Download Your Information and by another tool, Access Your Information, which is designed to make it easier to see and potentially take with you the data that you have provided to Facebook. So we have complementary efforts going on. I think that in this conversation a lot of the focus has been on platform. I'd like to talk a lot more about Download Your Information today, including the recent development that Ross talked about, which is the JSON format that you can now take your data out in. Yeah, well, I think we'll continue to talk about both, but let's focus on download your data for a moment. We appreciated the adoption of JSON. Some folks have been suggesting, Ross pointed to it, Activity Streams as a standard around newsfeed-type items. But since we're talking about download your data, let's talk about the contact issue. Sure. I mean, I think some of the history here is probably gonna be helpful, perhaps from Fitz, but right now when you download your data you get a list of your friends' names and the date that you connected with them, but you don't get their contact information unless, as Ross said, you have gone into your settings. I think it's right now in your contact information settings, not in your privacy settings, and elected to have that data be exportable. One of the, let's be frank, few privacy settings that default closed on Facebook. So there's a privacy argument for why you can't disclose that information through download your data, but also there's a competition concern there, because if we don't know how to re-authenticate and re-identify our friends elsewhere, and figure out which John Smith is our John Smith, for example, that makes portability very hard.
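For context on the standard mentioned above: Activity Streams 2.0 is a W3C JSON format for representing feed items, and a minimal sketch of what one exported post might look like in that vocabulary is below. The user name, content, and timestamp are made up for illustration:

```python
import json

# A minimal Activity Streams 2.0 "Create" activity wrapping a "Note",
# the standard's vocabulary for a newsfeed-type post. Names are hypothetical.
post = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": {"type": "Person", "name": "Alice Example"},
    "object": {
        "type": "Note",
        "content": "Hello from my old network!",
        "published": "2018-06-06T16:00:00Z",
    },
}

# Because the format is a shared standard, any service that understands
# Activity Streams could ingest this record from an export file.
print(json.dumps(post, indent=2))
```

The point of exporting in a standard like this, rather than a proprietary JSON layout, is that an importing service can map each `Create`/`Note` pair onto its own post type without writing a custom parser per platform.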
It eliminates one of the most valuable pieces of data. So Fitz, I was wondering if you could first talk about how you all approached that issue with the Data Liberation Front, and perhaps give a bit of background on the bit of a spat that Facebook and Google had on that issue. Sure, that definitely sets the way-back machine, I think, to 2010 at that point. I mean, the relevant product in Google's case would have been Gmail, for example, your contacts. You've always been able to export your contacts. Now, it's very clear that Gmail is an interesting case. It works by copying things from one place to another. So if I send you an email, you get my email address, you get an entire copy of the email. If I delete that email, you still have your copy of it. I think people understand that because it's very analogous to the way that postal mail works. And so your contacts are very important to you, and this was something that a number of companies used to bootstrap social networks. Facebook used it: by basically pulling out your Gmail contacts and stuffing them in, a company could target emails to all of your friends and say, hey, I've just joined Facebook, you can go ahead and join me, that sort of thing. And the challenge with that is there was no reciprocation. So this was an open ability that Google had at the time, where you could actually pull that data out through APIs, and we did make changes around reciprocation there: if you're gonna use this API, that's great, but you can't sort of keep all that data for yourself. And so that was, I think, a big challenge insofar as then suddenly that's locked up. At the time, someone's email addresses in Facebook were visible by default. You may remember this if you're as old as I am, but the email address looked like text, but it was actually a graphic. You couldn't actually select it and copy and paste it. You'd have to look at it and type it into a form somewhere else.
And certainly there's reasons for that sort of thing, and at some point that turned into a Facebook email address which went to your Facebook messages. And then beyond that, the default became off: the default is, that's your information. And my take on this is that defaults matter a great deal. You can pick a Google product: if I can export my photos but I don't get any of the file names, or if I don't get any of the comments, everything matters; data as well as data about your data. And so that resulted in some changes to Google's terms of service that had to do with reciprocation. If you want to use our APIs, great, but you're gonna have to reciprocate. We also made some changes at the time to make sure when people were exporting their data, hey, think about where you're putting this data. But at no time did we restrict people from taking that data. And again, the core thing for me isn't whether a third party is using that, which I think is great but a higher-level thing; it's whether I as a user can get a copy of my contacts, my email, my photos. So that's a little bit of history, but it's from quite a while ago. Yeah, so we have this issue, we have policy makers asking for this, we have op-eds in the New York Times asking for this. I guess my question for you is: is Facebook investigating ways that it could give us our graph in a meaningful way while also balancing its users' privacy? Do y'all have bright ideas on that? Does anyone else on the panel have bright ideas on that? I think there's a concern that GDPR might potentially get in the way, because it requires opt-in consent for certain things. But then again, maybe there's a way you could ask all your friends to opt in to let you move stuff. I don't know, I'm curious what others think. Yeah, I mean, do we have bright ideas?
I mean, I think one of the reasons why I'm here is to listen to you folks about this, because what we've heard loud and clear over the last few months is that people have a lot of concerns about friends' data being shared by the person using Facebook. We, as you all very likely know, enabled this kind of sharing subject to an opt-out control that existed in Facebook settings for years. We heard loud and clear that people had concerns about a user's ability to take their friends' data with them and share that with the developer. And so we shut that off, I guess three years ago now. And I think that in retrospect, that was the right decision. Maybe we didn't do it soon enough. We understand the interest in interoperability and the desire to control your data, including your friend connections, and the desire to reconstitute your graph on other services. But right now we're prioritizing privacy. And so the solution that we have for downloading your information is what you described, Kevin. You can, as a friend of a person, elect to have your email address included in the download, which can of course help reconnect with that person on another app or in another service. The other thing I would add is that we haven't restricted developers' access to friend data completely. A developer still can get access to friend data if that friend is also using the app. In other words, we can help the developer understand the connection by returning a list of people that we know are also using that developer's app. And so that's where we've drawn the line for now. I think we've heard very loud and clear that there are privacy concerns around being able to take your friends' data with you. We are absolutely open to new ideas about where the line should be drawn. But that's where we are right now. Panelists, any ideas on where that line should be drawn, or bright ideas that we wanna...
I don't know if it's a bright idea, but I do think it's important to be really specific in this conversation about what we're talking about, which is to say: I think American consumers, loudly and very clearly, have registered surprise over their lack of control over their data and who has it, and when they can say yes and when they can say no. So that seems clear. So we should be reacting to that data point, I think, and it's good to hear the conversation about that reaction. Because what we're talking about essentially is putting users in the driver's seat in a more meaningful way than they have been. I'm willing to experiment with that and see where that goes. Why not have opt-in consent over moving the social graph around? Let's see if we can get a tool whereby it'd be clear for me to communicate that I wanted to get your permission to get your information because I wanted to move my graph someplace, right? I don't know that we've actually seen that experiment play out fully and clearly, and I think it would be interesting to start there, with a meaningful mechanism that allowed people to ask that question of others in their network and see what their responses were. So like, in all those notifications, maybe I'll get one that's like: some of your friends would like you to turn this setting on so they can have you participate or reach out to you on another network, or something like that. That would be cool. I don't know, I'm trying not to opine too much as moderator, but I will opine a bit. I mean, one idea that's come up in writing, particularly by Josh Constine at TechCrunch, is the idea of some sort of cryptographically authenticated unique identifier, perhaps based on the UID, the user ID, which is a piece of public information; you can see it on any page on Facebook that you have access to. You could then use that to authenticate relationships with other friends coming from Facebook on another service.
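One rough way the cryptographically authenticated identifier idea could work is sketched below: the platform derives a stable pseudonym from each public user ID using a secret signing key, so two friends' exports can be matched on another service without exposing emails or phone numbers. This is a hypothetical sketch, not anything Facebook has proposed; the key and user IDs are made up:

```python
import hmac
import hashlib

# Hypothetical secret held by the exporting platform. Because only the
# platform knows it, the pseudonyms are deterministic but unforgeable.
PLATFORM_KEY = b"secret-signing-key"

def portable_id(user_id: str) -> str:
    """Derive a stable pseudonym from a public user ID via HMAC-SHA256."""
    return hmac.new(PLATFORM_KEY, user_id.encode(), hashlib.sha256).hexdigest()

# Alice's export lists her friends as pseudonyms, not contact details.
alice_export = {"friends": [portable_id("fb-uid-1001")]}

# Bob's export contains his own pseudonym; a new service receiving both
# exports can re-link the two accounts by comparing pseudonyms alone.
bob_token = portable_id("fb-uid-1001")
print(bob_token in alice_export["friends"])
```

Note one limitation of this particular sketch: because the key is the platform's, verification still depends on the platform doing the derivation, which is roughly the trade-off the idea implies.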
That wouldn't implicate the phone number or the email address. So that seems like something that is possible, although I'm not a technologist who would be able to build that myself. But does that sound like a possibility? Fitz, what are you thinking? I think we need some type of identifier, and I will say it is convenient that email addresses that were used to bootstrap Facebook are now considered private data. It's convenient for people who wanna stay at Facebook. It's convenient for the company as well. I would argue that there needs to be some type of identifier. It doesn't need to be a phone number. It doesn't need to be an email address. It could be just a link to your Facebook page. I believe it's facebook.com slash Brian Fitzpatrick or Brian dot Fitzpatrick or something like that. Again, this is something that everyone has access to. This is something everyone can see already. It doesn't allow people to do anything weird with me, but it is a disambiguator to say that when I am friends with this Brian Fitzpatrick, it's this one out of however many Brian Fitzpatricks are out there. And so I think that is key to not locking people into the platform. And that's been my whole point since starting the Data Liberation Front: we don't wanna lock people into a particular platform. I want people to keep using, whether it's Google or Facebook or Snapchat or Instagram or anything, because it's the best thing for them. It's the best thing that they like to use. They're happy to be there. Not because they can't get their data out. Yeah, please. So certainly there are unique benefits associated with data portability and making that effective and real, including user autonomy, privacy controls, transparency, and so forth.
But what I will say, to the theme that we've had over the past couple of interventions, is something I feel like I'm saying a lot, which is that a lot of these problems get much easier when we talk about interoperability as distinct from data portability. If you're talking about interoperability on a going-forward basis, you don't need to extract the entire social graph. It in some ways keeps the platform operator more in control than a data portability regime would, but at the same time it solves the fundamental competition problems in a much more powerful way. My concern is, even if I took everything I had, including my social graph, out of Facebook, where would I put that? How would I get something that replicated the nice features that I get from the Facebook experience? What happened to Mastodon? Why didn't Mastodon really take off as a competitor to that? And there's a network effects limitation here, right? You can get something like Mastodon, which is a competing social network platform, open source, open standards in many different ways. You can get that to work for very small groups of people by making a collective decision to all move over to it together. And then you can communicate over there, but the network effects made available by Facebook's two billion users are incredible. And I have to give Facebook and Google credit: their turning themselves into platforms in 2007, the late 2000s, and so forth, is what made the platform economy. And to me, what made it the platform economy was this assumption and this effective implementation of interoperability, this lack of fear, this bravery in saying: I am gonna give you the best product you are ever going to experience in this space. If you want to build something that competes with and interoperates with it, go ahead; good luck offering a better product than me. That's not the world that we have today.
And part of the reason for that are things like the provision in Facebook's developer policies that says you can't use this API to replicate core functionality of Facebook. Getting back to the question of what we need to get, whether it's in the data portability context or the interoperability context, it has to be enough data to effectively create a competitive alternative to that service. Or even one that can simply coexist. Right, absolutely. To add on to that, I think a very simple place to start is overreaching terms of service. And Facebook has a lot, we're picking on Facebook quite a bit, I'm sorry, it's not alone in this situation, but there's definitely a lot in the terms of service about how that data can be used and how applications and developers can take that data out or not. And I think we have to be very careful when we're talking about a developer of an application versus a user. And sometimes the application does represent the will of the user. And that data does belong to the user, and the user should be able to take it out. There's a project called GOBO that my colleague at the Media Lab, Ethan Zuckerman, built, which basically gives you a view onto all of your social networks and lets you play with the data and sort them in different ways and sort of choose different filters and things, so you can decide that you wanna see more posts by women, for example, or posts from people who have a different political view than you. And I think that if someone really wanted to build such a thing and commercialize it, they would end up in violation of many different terms of service, even though you were viewing data on behalf of a user who has access to all of that data. So I find that quite troubling. So that's really interesting, and it resonates with some of what Terrell was saying as well in terms of user control.
I'd say one concern, and Terrell has said this before, as have several others, including Caroline Holland, a Mozilla fellow working on competition: there's a concern that the idea of a user choosing to share data with another app is being, and is going to be, demonized in a way that will actually harm the economic environment, harm the competitive environment, and lead to APIs at Facebook and elsewhere, as the tides are changing, getting locked down, locked down, locked down, which actually ends up in some ways reifying the power of the platforms and hindering the growth of alternatives and new innovations. How do we avoid that? I mean, because I will say, Steve, my fear in regard to Facebook, as it, I mean, rationally responds to the pressure that is being put on it: do we get to a world where the ten-plus thousand apps become like 500 trusted, established companies, such that we don't have that competitive environment anymore? How do we prevent that? Can we prevent that? Or have you been left no choice but to go in that direction? No, I don't think so. We certainly wanna prevent that from happening. One of the things that we think we need to do is to invest more in demonstrating responsibility and accountability in the developer ecosystem. I mean, what happened with Cambridge Analytica, right, was misuse by a developer. A developer violated the terms we had in place that govern the way that the developer could use the information that it received through our APIs. We had developer misconduct, right? So one of the things I think we need to do, and we're hearing this in sort of multiple places now, is we need to do more to demonstrate to policymakers that this ecosystem, this amazing ecosystem that Chris alluded to that has grown up around the platforms, is a place of responsibility and accountability.
You know, the advertising ecosystem did something like this a couple of years ago with the Digital Advertising Alliance. They have a code of conduct, and yes, there are shortcomings there, and the DAA sometimes takes criticism that I think is fair, but they got together as a cross-industry group, publishers, advertisers, websites, and they wrote a code of conduct that put in place meaningful protections for people. It's probably time for us to start thinking about something similar for the developer ecosystem for the platforms. And so that's one of the ways that we can prevent the outcome that you're describing, Kevin: we've got to address accountability within the ecosystem, and that is a platform issue, that is a developer issue, that's everybody. So I think that's one of the ways that we can do this, because right now there's a lack of trust in what's happening here. This is where we are after Cambridge Analytica: there's a lack of trust in everybody involved. And so that's one of the things that we're thinking about; we'd love to talk more about that. But we certainly don't want to get to a place where the developer ecosystem is not thriving as it is today. Opining a little bit as moderator: again, I think a key part of that is a focus on user consent. And I fear there's an over-focus on the disclosure as the problem, rather than on the consent not being adequate, because users were surprised that their friends were sharing this data; or, looking at the New York Times story, what's being reported is that data was shared with manufacturers even when people had said, please don't let my friends share my information. It seems like consent should be the focus, rather than stopping users from being able to choose, in a consenting and knowing way, to share their data with other people.
And one thing that really struck me about this: before you all made the recent changes to Download Your Data, when you would go to that tool, you'd basically get the web equivalent of flashing red lights saying, this is a really sensitive file, you want to be really careful with what you do with it, please don't use it for anything, this is not appropriate for moving to another service. And, oh, please enter your password again. Whereas when it came to sharing data, including my friends' data, with third parties, it was like, hey, just click this button. It seems like it should have been exactly the reverse. But I'm sorry, I'm gonna get off the soapbox now and invite others to comment, including you if you'd like to respond. Yeah, I completely agree with you, Kevin. And I have to say, I really think there's another way to view this, which is not that Facebook needs to control what applications developers are building and how they're using user data, but instead that Facebook needs to focus on informing users, making sure they understand what is happening with their data, and giving them meaningful tools so that they are able to control that. Because I really think a world in which Facebook limits what users can do with their own data is not necessarily the right path to go down and will ultimately limit innovation on the platform. I wanna echo that. I agree with it, and I'd add that I think we're also talking about transparency here. This is an old concept in privacy, but an oldie but a goodie; we should keep that conversation going. Because if we can give people more control and clearer choices, and have more of a dialogue and more transparency, I think we'll have a better sense of when they are more comfortable or less comfortable with where their information is going. What's happening right now is nobody trusts what's happening, because they have finally understood, I think.
They had very little control or sense themselves of where their information was going, and some of their information is quite sensitive. The more we can make this something that's more like my phone number, which sticks with me, is listed publicly, and is how you reach me, and less like this is my entire life and my political preference and my religious views and my personal life, the better off we'll be. I agree that your telephone number is essentially a way of identifying you; it no longer identifies a household, since people of the younger set don't typically have landlines anymore. But I do wanna make sure that we're clear on the difference between data portability and interoperability, because I think it's easy to confound those two things. I think they're very distinct, and what you're giving to a developer is very different from what you're allowing me to take, and I feel like I sound like a broken record here, but this was my song for years at Google. Table stakes for data portability is: what does it take to not lock people into your platform, your product, or whatever they're using? What do people need to go somewhere else? Finding what that minimum thing is is utterly important, or you wind up preventing innovation, preventing competition, and preventing users from making a choice. Well, it's interesting: we're basically pointing to two different rationales. The first is that, as a matter of my own privacy and autonomy, I should be able to control my data; and then, secondarily, there is also what is necessary for me to be able to move in order to enable competition. Those may implicate different things, and those answers may actually be different depending on what platform we're talking about, which perhaps makes it very hard to regulate unless it's in the context of a specific anti-competition proceeding.
I agree it's hard to regulate, but let's be clear: the incentives in these markets right now point in one direction, and we've just seen a huge regulatory intervention from Europe, and it's bending what's happening. So there's a space here. I'm not endorsing everything that's in the GDPR, or heavy-handed regulation, but I think we have to be realistic about the economics of these markets and what happens in them absent some other intervention. And I'm not arguing against it. I'm just noting some of the difficulties. I think there's a line-drawing question as well about whether startups should have to have comprehensive portability mechanisms in place as they're starting. Do we want non-profit entities like Wikimedia to have to be able to export all your edits? Do we want every commenting system to be able to export all your comments? Maybe the answer is yes, but I don't know how to draw those lines. I don't know a sensible way yet. I mean, I think there are a number of ways you could draw those lines. It could be based on size. It could be based on the amount of data that they have. But I'm curious if folks have instincts on that score. Well, one thing I think is very important is to not make these regulations too prescriptive and too embedded in the details of the technology. For example, saying something like "you can take 5% of the comments but not 100% of them" is a restriction that's not gonna keep up as the technology evolves. So I think it's quite important to figure out what your ultimate goals are, and we spend a lot of time thinking about this in the financial space: financial regulation, user protection, market integrity. Try to have regulation that is in favor of those goals while at the same time not being too prescriptive about exactly how that's carried out, because the technology is changing quite quickly.
I think there's a lot of opportunity for industry to work together and try to figure out exactly what kinds of data need to be exchanged for what kinds of competition. There's a lot of data about you on most of these digital platforms, and you can cherry-pick a piece of that. In one of the pieces I've written about interoperability, I reference birthdays. Do I need to have all of my friends' birthdays in order to create a competing social network app? Maybe, maybe not; it depends on your perspective. But when you think about it from an interoperability perspective, what if I wanna write a birthday-bomb app that grabs all of my friends' birthdays and automatically posts happy birthday on their walls? First of all, as a matter of social policy, I think that should never exist. But I think that- It might already exist. Yeah, I'm sure it does. I think that there's a conversation to be had, if you agree with me that the principle of interoperability is something important to preserve for competition reasons as well as others, about the kinds of data you need to be effectively interoperable: what a messaging app writing on top of a social networking platform needs is different from what a shopping service running on top of a search engine needs. And my hesitation with an overly regulatory approach is that government cannot alone figure out what kinds of data would be necessary to enable competition in these specific kinds of markets. So I do think that there will be a future-looking role for industry to work together, not necessarily to create a standard-setting process, because although that's very valuable in some instances, it's not a necessary part of the solution here, but at the very least to talk more about exactly what kinds of data need to be made available and for what purposes. Yeah, I'll just jump in on that. I think that's gonna be true depending on the kinds of data, right?
So it may be that you need something very specific for health information that is different for, well, I don't know, kids' pictures or whatever. Well, that's probably a bad example. Birthday information. No, that's also a bad example. What's not so sensitive? The social graph is a little bit more familiar. There's some fundamental data. But you know what I mean: very sensitive forms of information that we already acknowledge are sensitive, like healthcare information or financial information, versus other information that we might regard as personal and private but not so sensitive. Steve, any response to this? Do you feel like there's a chance of the companies playing nice together on this issue, considering the current environment? Yes, of course, I think there's a chance. I think it all comes back to, and this is a point that several people made earlier, educating people about how this is working. And I think the lesson we've learned is that we needed to do a better job providing transparency around the way that the platform worked. We're investing a lot in educating people about the ways that our tools work. You're gonna see more from us on the platform piece, I think, in the near future. And so I think it all comes back to transparency. I would make the accountability point again. But yes, of course, there's an opportunity for us. And I think it's the time to have this conversation. So yeah. I agree. Another aspect of portability and competition worth talking about, and that others are talking about, particularly a couple of NYU professors, Robert Seamans and Sam Himel, is the issue of portability of not my data but implicit data about me, like data that has been derived about me that is driving the algorithms that are offering me service.
Because I think the concern that these folks have staked out is that we are entering a period where we have only a handful of really big platforms that have access to enough data to drive machine learning. Take voice recognition, for example. There are only a handful of companies that have hundreds of millions of people talking to them, such that they're actually getting the data they would need to train up really good voice-recognition AI. And so the question becomes: how do we address that competition problem? Should there be portability of that data? What these professors have suggested is actually the need for some sort of trusted third-party repositories that could make training sets available to new entrants, which seems technically possible. So I'm curious what the panelists think about this issue. So I love this idea, and not just because Sam was one of my interns. I'm very proud of his work. Very small world. But look, I think this is an important point they're making, which is to say: right now we're talking about interoperability and privacy and competition, but what they're noting is that you need a huge amount of data in order to train AI, and there's only a very small number of companies with data at that scale. So data becomes a barrier to entry for very powerful technology. Potentially; we don't really know. But if it is, I think that we have to think about policies that are aimed at that problem and at creating accessible databases and data for others to enter the market and be able to train their technology as well. We'd need to do that in a very privacy-protective way and in a secure way. You know, a lot of the thrust of the open-data initiatives in the Obama administration was around freeing up existing government data, while also protecting the privacy of it, so that people could innovate with it. And I think we need to make sure that we continue policy focus in that area, because it's gonna be incredibly important for competition.
So if I can jump in on that, especially because Mozilla has an open-source voice-recognition project, Common Voice, where you can go into the project and leave your voice recordings: we give you some prompts and you say them. We're trying to build a repository exactly for this purpose, to allow people to innovate with voice-recognition apps based on a training set that we are offering to the world. And one of the things we learned in running this experiment is that we actually got a lot more people with heavy accents when they speak English than maybe other businesses have, because we have a very global community. And so we actually think there's a potential for pushing for growth in the right kinds of ways. We could, if we really wanted to, probably create a better system, with at least a larger training-set repository for heavily accented English, and differentiate in some way in this fashion. So I do think it's possible to find the right way to do this. It's important to balance this against business incentives for the private sector to invest in improving their own training data and their own processes and so forth. But the idea of fostering, creating incentives for, and finding actors like Mozilla and government agencies who are willing to collect and then share data, subject to privacy and security guarantees: we can do this. We can build it, not for everything, I don't think, but for a lot more than we are today. Yeah, agreed. If I can make two points: first of all, there are contrary views out there that make the point that data is more plentiful than ever and that we're multi-homing more than ever. And so the idea that data is a barrier to entry is something that is being disputed, and it's a conversation that we should have. The second thing, though, is that when we talk about training sets and when we talk about using data to develop AI, we're talking about secondary uses of data.
We're talking about innovative uses of data, and the trend in policymaking, especially on privacy, is to look skeptically at secondary uses of data. The GDPR makes some provision for secondary uses. There's this notion of compatible uses, right? A compatible use is allowed without a separate legal basis. So if you have data via consent, you can make use of it, further process it, as they say, for a purpose that's compatible with the original purpose for which you collected it. If you're gonna use it for a different purpose, you need a second legal basis. So there are disincentives in the GDPR for secondary processing. I wanna flag a much more important proposed law that takes an even narrower view of secondary uses, and that's the e-privacy regulation, which is right around the corner from the GDPR. The e-privacy regulation, which has been debated in the Commission, the Parliament, and now the Council over the past year and a half, doesn't make provision for secondary use of data. It's an extremely restrictive proposal. And so as we're talking about the value of secondary processing, which is what we're talking about when we're talking about developing training sets of data, we've gotta keep in mind that privacy policymaking may be at odds with what we're describing here. And so I think that should be part of this conversation. Look, I think this is an important note, but it underscores the fact that we're overcompensating on privacy because we do not trust the technology and the people and the companies that have the power right now, right?
So we see the overcompensation in regulation a little bit, because we're concerned that we don't have any control over the follow-on uses. I'm sure all of us in the room don't really mind if our data's gonna be used to cure cancer, but we might be really concerned when it's being used to target us for certain kinds of very specific behavioral advertising or credit offers or job offers or other things that might really impact us in a way that isn't awesome, right? But we know right now we can't navigate any of that space, which is why we need more trusted intermediaries. We need government and policymakers and institutions to help us think about the frameworks whereby we're gonna be happy to have secondary uses for training for technology that we can trust. And I think we can put those frameworks in place: safe harbors. We know how to do some of that work, some of it already exists, but we need to get it done. So could I jump in on the e-privacy regulation? Mozilla's perspective on this is that it has some very good pieces to it and some pieces that do concern us, which is why we've been working in Brussels to try to improve it in some respects over the past several months. I think the motivation behind that intervention was that the GDPR is law now, but it doesn't go far enough. On its face, it's about improving the confidentiality of communications. But I think there's something more motivating it, which is Terrell's point: we as users, and European governments looking at these big American companies, feel like there isn't sufficient opportunity for transparency and for user control over their experience. And that's why there's language in the e-privacy regulation about privacy options.
I think until we get to an ecosystem where we have more of this transparency and more competition, powered whether by data portability or by interoperability or some other form of either private-sector change in practice or regulatory intervention, I think we're gonna see more laws after the e-privacy regulation. Because I don't even think that law gets to some of the underlying concerns that are motivating Brussels policy thinkers right now. Can I just jump in on the "data is ubiquitous and it's everywhere" argument? I think it's an important argument, and some of it is correct and some of it is totally incorrect. And I think we need a little more research and understanding about where it is incorrect and how it differs in what we're talking about. Because it's certainly the case that sometimes data is still proprietary and that it can form a barrier to entry. It can be meaningful in a competition analysis, for example. But it also is possible, and this is what Rob Seamans's work with Sam Himel is about, that in fact there may be such a data advantage at scale in some of these areas that it might be very, very hard for other companies to enter and compete and train their AI. And I think there's some legitimacy to that point, potentially. We need to really understand that dynamic, if it's true. I find your point very interesting. I kind of agree with it. The issue is that privacy regulation is the only thing people currently have; it's the hammer we have, and so every problem is a nail for that hammer. I'm trying to figure out how to provide more hammers. Well, we're trying to figure out how to introduce restraints into the system, I think collectively, and we're finding it very, very challenging. That's what I think's happening, and I think we do have an opportunity to study the impact of the GDPR. As a policy person, it's a great natural experiment. Love to see what happens next, right?
And try to understand what we think is useful about it and where we think maybe it's overcompensating a bit. So, last words from each of you. One way to close would be: if you wanted to see one specific thing happen in this area in the next year, what would you want to see, whether it's action from a government or a company or anything else? Or close in any other way you would like. Let's start at that end. Well, I really want to get the entire ecosystem together and continue this conversation, and to really focus on responsibility and accountability. And speaking for Facebook, I think we're looking forward to investing more in transparency around this exact thing, right? That is, the ways in which you can control your data, the ways in which you can take it with you, and the protections that we have around it. So I'm looking forward to continuing the conversation. I'll take your prompt literally. I'd like to see something, it could come from government, it could come from the private sector, but something that really drives home what I believe to be a relatively widely accepted norm: that interoperability should be made available via APIs with reasonable policies built on top of them. I would like to see more funding for basic research, because, given a bit of what we're getting into now, I think we owe it to ourselves to think about a different architecture for the system. And I think that's what things like trusted data repositories are: a totally different architecture than the one we have right now. But there are still so many open questions that there's a lot of work to do to make something like that practical, around differential privacy, around incentives and monetizing data. So I would like to issue a call to fund more research in this area. I would like to eliminate from the policy conversation the idea that American consumers don't care about sharing their data or what happens to it. I think that is false.
We now see it's false, and we need to have a different premise for this conversation. I'd like to see people easily be able to download their data from everywhere: portability, from the perspective of taking a copy somewhere. I'd like people to have control so that they can delete it. I'd like to see people have more choices when it comes to these sorts of things, so that you can revoke your trust in one institution if you don't have that trust. And I'd like to see that thought of from a very particular perspective: what does it take for me to take the data out of a system, go somewhere else, and use it to reconstruct what I did there? And I'm not talking about secondary uses necessarily; I'm just talking about it from a competitive perspective. Related to that, I have a laundry list, but I'll add one fairly straightforward item: I'd love to see all the major social and cloud platforms participating in the open-source development of the Data Transfer Project, and see plenty of other people do that, so that it becomes a standard feature for people to just press a button and be able to move their stuff between providers. But now I'm curious what the audience thinks, and we're gonna open it up to Q&A. So if you have a question, please raise your hand and we will bring you a mic. Please speak directly into the mic. Please ask questions with question marks at the end of them. And that doesn't mean just go "huh?"; the little lilt at the end is not gonna make it a question if it's not a question. Thank you, great discussion. The one thing that wasn't mentioned, and I think it might be a little bit peripheral to your main discussion, is the ability to delete data, and then the question is: how much of that data must be deleted? Can you keep de-identified data for your own data analytics? That's a great question. Who wants to address it? Can I just ask for clarification? What do you mean by de-identified data?
De-identified data: anonymized, effectively, or "de-identified" as the specific word you chose instead of "anonymous." Well, I personally believe that you should be able to delete your data. I will say that we have to think carefully about what that means. If I say I wanna delete my data, does that mean I want it gone right now? Well, actually, hold on a second, not necessarily. We talked about re-authenticating. We put a lot in place when you're exporting someone's data, when you're doing a Takeout, for example. That's a suitcase of your data; you pick it up and you run off. So at Google we did very specific things, as Facebook did, like enter your password again, et cetera, to make sure it's you. If someone gets into my account somehow and deletes all my data, gosh, I sure hope there's some way for me to get in there next week, next Friday, and say no, no, please undo. So I wanna keep in mind that there's a sort of grace period; let's be careful about that. And then beyond that, thinking about companies that have a lot of data: it's like finding a needle in a billion haystacks. Give companies a little bit of time to find all that data and make sure it's all deleted, including backups and copies: you have things in RAM, you have things on SSD, you have things on tape, if you wanna get really, really old school. But I think that deleting that data should be part of your control. Now, when it comes to de-identified data, and this is a tricky thing, if it's truly de-identified, if it's just like web data you crawled or something like that, if it's not attributed to a certain person, how could I delete that? Anonymized data about myself: that's a tricky question. I'd have to think more about that; I've answered a lot, I guess. I think deletion is important.
I think correction is important, especially if we're talking about data brokers; I think more of an FCRA-style access-and-correction policy would be good there, particularly. It is challenging, and I guess I'd add a consumer-protection note, which is that if a company is purporting to offer deletion, they actually need to do that. Deletion means deletion, okay? So whatever else it is, that should be explained as well. I'll just add, because I don't get to talk about this issue much, and I think it's important: there's an interesting kind of lock-in problem if your only choices are to delete individual items, when you have literally millions or hundreds of thousands of them, or to delete everything. And it seems like there's a real dearth of tools on all the platforms. If I have a decade of Gmail and I decide, no wait, I don't want that much data hanging out about me, what tools do I have? Is there some smart AI that can help me figure out what I actually need and what I can throw away? If not, I'm just sort of stuck, and I have to keep all that data. Or, and this came up, I noticed that I've been an Amazon user for 20 years. Twenty years of Amazon purchases. But I can only delete them individually or delete them all. Yeah, to me that's a design challenge, which gets into this question of: what are the defaults? What are the incentives in the market that are driving that design and that optionality for you as a user? But even just having time limits, you know. Anyway. This makes me worry, actually. I'm not sure that I agree that we should be able to delete our data, because I worry that we're using it as a proxy to solve some other problem that deleting data is not really solving, right? I don't think that I wanna go to Amazon and click a bunch of checkboxes about exactly how many purchases in my history they can keep.
There's some other, higher-level goal that I'm trying to achieve. Maybe it's something about curation. Maybe it's, I don't want things to come up. Maybe it's, I don't want companies to be able to re-identify me later, or some embarrassing purchase to find its way onto the internet. But I worry sometimes that we use deleting data as a proxy for that. We think we can just throw away an unflattering photograph, and digital information just doesn't work that way. That's an excellent point. I would like to clarify that what I'm talking about is authenticated data that you've put somewhere. I wanna be clear, I'm not addressing data about you, as in the right to be forgotten. I shouldn't be able to go and delete a newspaper article about me because I did something terrible. I agree. Sorry, I would agree with that. And we have a First Amendment in this country. It's really important. I agree. Next question. Okay, so, thank you all for joining us today. You're here talking a lot about issues with interoperability and transparency and being able to use data across networks. I'm wondering if this goes back to decisions we made in the early aughts about what we wanted the economy of the internet to look like. Facebook, for example, did for a moment float the idea of having people pay for the service. I believe the original amount was about $1, and people all said no. So do you think that people are now having buyer's remorse, as it were, and thinking instead that they are willing to pay for it? Because I'm just concerned about how the online economy will work if these platforms aren't able to monetize the data in the way that they have. I'll go first, if I could. I used to have what I at least felt was a catchy line about Silicon Valley: the culture of Silicon Valley is collect all the data, store all the data in perpetuity, so that you can monetize it later.
I still think there are many businesses that operate that way, though increasingly few. In the platform economy, I think the modern-day equivalent of that is: keep all the users on your platform, don't let them off of your platform, so that later you can figure out how to make even more money off of them. That is a consequence of where we are now. I still think this is a trade-off that many people consider good, which is why you don't see people pushing Facebook to offer a for-pay service. This is what I think competition is meant to serve. If there were a competitive ecosystem where a social networking provider were able to interoperate with Facebook, make different kinds of guarantees about what it does with the data, and charge users for it, maybe we would see a shift, and maybe we would see experimentation and innovation within the market in a way that we can't see today. Yeah, I just wanted to add to that. This is why I work on cryptocurrencies, actually. This is a big reason, because I think the development of the internet didn't take payment and incentives into account tightly enough as it was being created, and I think we have to rectify that now, because we're paying in ways that we didn't anticipate, but we are still paying. As to whether users will actually pay for such services, I think that we're gonna have to get pretty creative and come up with different kinds of models. Some of them might be based on things like speculation, but I think a big problem has been that it's hard to collect. We don't really have a way right now of collecting micropayments on the internet, and so I do think that we need to open up payments, open up means of payment, and start experimenting with them. Yeah, and I should say, first of all, Facebook doesn't make money by selling or sharing people's information, right? Facebook makes money by showing ads. That's absolutely false, sorry. It's just not true; we sell ads.
Maybe not directly, but I think ultimately part of the reason people use Facebook and want to sell ads on Facebook is due to the sharing of data. We make ads better through the data that people share on Facebook, that's correct. We don't share people's personal information with advertisers. We don't sell any information to anybody, right? We monetize through ads in the way that services have monetized through ads for a hundred-plus years at this point, the way that TV and newspapers have monetized through ads. Our ads are more relevant by virtue of the information that we have about people, absolutely. But we do not sell or share people's information in exchange for money, yeah. But so in terms of the question, though, like, I know that y'all have been asked about, and I'm sure y'all are therefore thinking about, what would a paid subscription version of Facebook look like? Is that a possibility? I mean, I've heard y'all's reasons for why y'all don't think that's a good idea, but perhaps you could articulate them. Yeah, I mean, we think that the ad-supported model, which is the free model, is the best way to accomplish Facebook's mission, which is to connect people, all right? So getting them onto the service for free is obviously important to achieving that mission. You know, have we talked about building an alternative? Yes, we have, and I think that we've shared that recently. But I think that there will always be a free version of Facebook, because ultimately the goal of Facebook is to bring people together, and the best way to do that is to do it through a free service. Could I give a brief response to that? This is gonna be a little kitschy, I guess, but you have a great talking point: we don't sell users' data. My response to that is, maybe you should. And let me tell you why. When I talk about interoperability... I knew y'all were gonna have that reaction. This is not what I expected you to say.
When I talk about interoperability via APIs, I don't mean that those APIs all have to be free. If you were to offer API access to the sort of core of News Feed, I think it'd be a great model if you offered that kind of data for interoperable services up to a certain bandwidth or volume limitation, however you wanna measure it. And then beyond that, if somebody's really able to use that core functionality of the platform and make a successful business off of it, maybe they should share that with you. Now, there are a lot of things that will have to enter into that. I'm curious for others' views on this. Can I just say, I think it's an interesting question. I don't think we actually know the answer to it yet. It would be far more interesting to see what the market developed and what the options were. Would people like to pay? Who knows? I don't really know. Do they like the free ad-supported model? We're not sure. Do we think there should be some restraint on the free ad-supported side? Maybe. I think probably we all would agree around some of those restraints; we have had that conversation in the past. So it isn't obvious to me that it has to be all one thing or all the other. Dealer's choice. Yeah, just on the point on advertising. You know, I don't think it's true that advertising has for hundreds of years been based on... Questions? Yeah, I'm asking. What do you think the difference is between advertising and behavioral advertising? And what percentage of the advertising on the internet actually depends on the collection of this minute demographic information on people? I mean, Google search ads don't work that way. I go to The Verge, there's a big ad on top. It's not behaviorally targeted. I see advertising all the time, and it existed throughout the 20th century, that did not involve the invasive collection of personal data. So why don't we just go back to that?
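The metered-API model sketched above, free interoperability access up to a volume quota, with heavy commercial use paying beyond it, can be illustrated roughly as follows. This is purely a hypothetical sketch of the panelist's suggestion; the class, quota, and fee figures are invented for illustration and do not describe any real platform's API.

```python
from dataclasses import dataclass

# Illustrative numbers only: a partner gets FREE_QUOTA calls per month at no
# charge, and each call beyond that accrues a per-call fee, standing in for
# the revenue-share idea described on the panel.
FREE_QUOTA = 100_000   # free API calls per partner per month
OVERAGE_FEE = 0.001    # dollars charged per call beyond the free quota

@dataclass
class PartnerMeter:
    """Tracks one interoperating partner's API usage for the month."""
    partner_id: str
    calls_this_month: int = 0

    def record_call(self) -> float:
        """Record one API call; return the fee charged for it (0.0 while free)."""
        self.calls_this_month += 1
        if self.calls_this_month <= FREE_QUOTA:
            return 0.0
        return OVERAGE_FEE

    def monthly_bill(self) -> float:
        """Total owed for calls beyond the free quota."""
        overage = max(0, self.calls_this_month - FREE_QUOTA)
        return overage * OVERAGE_FEE
```

Under this kind of scheme, a small interoperable client stays under the quota and pays nothing, while a business built on the platform's core functionality pays in proportion to its use, which is one way to reconcile open APIs with the platform's ability to monetize.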
So it is simply false to say that it's a choice between giving up your privacy or having to pay for everything. I mean, that's just a very annoying false dichotomy that I keep hearing. And I don't think that's the choice that's out there. I mean, was there another... What's the difference between them? Well, first of all, it's the difference between paying tenfold to reach the people that you wanna reach. It's incredibly efficient, and targeted advertising has enabled small businesses to advertise and to reach people in ways that they never could have if they had to rely on broadcast media. So one thing is, behavioral advertising is much more efficient. And Facebook has six million active advertisers. The overwhelming majority of those are small businesses who couldn't afford to reach the people who care about their stuff if they had to advertise in broadcast media, where there's the old saying: 50% of my advertising is working, the problem is I just don't know which 50%, right? I mean, the value of targeted advertising is that it allows people to reach the people who are most likely to be interested in their stuff. It's efficient, right? It's efficient. It's also efficient for people. If I'm gonna see ads online, and this is me, I wanna see ads that are relevant to the stuff that I'm interested in. I don't wanna see the belly fat ad, right? I wanna see the stuff that is gonna be most relevant to me. So there's value to people too. Are there privacy concerns with online behavioral advertising? We've heard concerns about online behavioral advertising for years, and I think that the industry has responded meaningfully in getting together and putting together self-regulatory principles that do require companies to give people choices. We give people choices beyond what's required in those self-regulatory principles.
Yes, do we need to be aware of privacy concerns when we're using people's data? Absolutely. But is there no value that comes from targeted advertising? Absolutely not. Targeted advertising is incredibly valuable for small businesses, for big businesses, and for people. Okay, so I'm just gonna say, I think there's a lot of efficiency in the behavioral advertising and the targeting and all of that; that is a true economic point. That is why we see the market share of advertising shifting to these platforms, right? So there's an explanation for that. What we don't really understand right now is, at scale, with the intensity level, with the amount of data and the velocity of the services here, what is happening to people who are experiencing that kind of targeting. And I think that's an interesting question. So if people are really being manipulated, if they really are having their emotions manipulated or experiencing some kind of impact that is far more intense than just being served ads that are relevant to them, well, that's something we maybe should be looking at. Agreed. Thank you all again. Actually, just going off of that, this may seem like a little bit of an insensitive question, but I think it's an interesting one. How much of our issue as consumers with behavioral or targeted advertising is actually just a proxy for a different concern? Putting aside the issue of incorrect information or incorrect inferences about our behavior, how much of our consumer issue with targeted advertising is about not liking what's being observed about us? You know, it's funny, and this is anecdotal and not philosophical. The complaints that I hear, and look, I work for Facebook, so I hear complaints about Facebook. I'm here today to hear your complaints. The complaint I hear often about the ads on Facebook is that people continue to get the ad after they bought the thing that's being advertised. I hear that all the time.
So it's the ad that is targeted, it's just a little off, right? You're about three days too late. And so, yeah, I mean, there are a range of concerns about targeted advertising that relate to privacy, but also just relate to the way it works. I think a lot of the concerns that we hear are not necessarily privacy; they're just confusion. Wait a second, I was on the Nike website, now I'm seeing the ad for the Nike shoes. Does that mean you told Nike I'm here, Facebook? I mean, I think this lends itself to the myth that we share personal information with advertisers. How would Nike know that I'm on Facebook? The answer is that they don't, right? But I think that targeted advertising, generally, is something that leaves people mystified. I think it leaves them dissatisfied in some cases. And there, of course, are valid privacy concerns around any technology that involves the collection and use of people's information. We have to put appropriate protections in place for those. Okay, one more question. Please make it quick. Hi, this is really interesting. So, ad tech is super complicated. Like, really, even for technical people, it's really complicated. And at the center of all this is that there's a lot of stuff happening that people just don't have informed consent about. Like, they just don't understand it. My question, so here's one thing that is relevant to sort of data export or portability: inside of Facebook, as well as data brokers and other companies, it's not just the data I'm providing, but Facebook is buying other third-party data, as well as scraping data and other things, to enrich the data I give you. So I might give you the base record as a user. I give you the elements you need, Facebook or whomever, to build a much bigger profile of value. So it's like 10% is from me and 90% they develop themselves. If I decide to pull the 10% away, I guess my question for you guys is, what should happen to the other 90%?
Particularly if it wouldn't have been possible for that provider to create the other 90% had it not been for the original ingredients I gave them with the 10%? I'd love to see that data. Panelists? You can see that data on Facebook, right? I mean, if you go to ad preferences, you can see the interests that we have inferred based on your use of Facebook, that are associated with your account, and that advertisers can use to show you ads. If you feel like we have miscategorized you, you can delete those interests. To your point about data partner data, data broker data: yes, Facebook has worked with data brokers, as have many companies in the advertising industry. Our partnerships are often the focus of people's attention because we are very open about them, and we are winding them down. We announced this a couple of months ago: we will no longer make data broker categories available to advertisers in our ad targeting tools. So I do wanna make that point. But to your larger point about whether you should be able to export that data, I mean, I'm very curious to hear what folks think. Right now, we do allow you to export a lot of ads data, not that we think that you would want to use that data on another site, but because you absolutely should have access to it and the ability to control it. But I am interested in the kind of observational data that the Article 29 Working Party addressed in its paper on data portability, and what the panel thinks about the importance of being able to port that stuff. I think it's great, but I do always wanna differentiate that from data that you've explicitly input or created; data about you is a different thing. I think it's great if you can download that, but I would say that you're getting a little higher in the hierarchy of data needs when you're talking about that. On that note, I think we're gonna have to wrap it up. It's time for folks to enjoy the bar. I really wanna thank all of our panelists for a super interesting conversation.
Hopefully the first of many on what is obviously a pretty complex topic. So thank you all. Thanks again to the congressman, and to his office. Thank you very much. Yeah, really nice meeting you. Thanks a lot. Thanks, Kevin. Sorry, thank you, Mike.