All right, well, welcome everyone to our event today. So glad to have you all with us. So today's event is gonna be about transparency reporting across the tech, health, and other sectors. Hopefully for those of you who work in different industries, you can get some good takeaways today. So to get things started, to introduce myself, I'm Tonya Riley. I work with the Washington Post Technology 202 and Cybersecurity 202 newsletters. I actually used to work here at New America, so it's nice to be back. And for our speakers today, we have Rana Hortam. Rana is a manager on the Global Women's Safety Policy Team at Uber in San Francisco. David Lieber, David is a senior privacy policy counsel for Google based in Washington, D.C. Spandana Singh is a policy analyst at New America's Open Technology Institute, our host today, where she works on platform accountability, content moderation, and artificial intelligence issues. And Allen Kachalia, an MD and JD, is senior vice president of patient safety and quality for Johns Hopkins Medicine and director of the Armstrong Institute for Patient Safety and Quality. So welcome to all our panelists.

So for those of you who are here, you might be having a question, which is: what the heck is transparency reporting? And I think we were just discussing in the green room that it means a lot of different things to different industries and different people. So to get us started, I want all the panelists to sort of tell us what it means for them. David, Google has arguably been doing this the longest of all the panelists here. So I think it would be helpful for you to share what the idea of transparency reports means to Google and how you guys got started with it.

Sure, thank you, Tonya. We published our first transparency report covering government requests for user data, I think in April of 2010. So we're coming on our 10th anniversary of that report.
And since that time, we've published 11 additional reports, and they generally fall into three categories: privacy and security, content removals, and then a catch-all, which really encompasses two reports, one on political advertising and the other on traffic disruption vis-a-vis our products and services across the globe. My expertise really lies with the report on government requests for user data. And we launched it in April 2010 with the notion of providing more insight not only into the volume of requests that we were receiving, but also the nature and type of requests that we were receiving. Which is, I don't wanna say simple, but at least under US law, we've been able to categorize how many subpoenas we're receiving, how many court orders we're receiving for certain types of records and information, how many requests for content. And we've been able to publish that on a biannual basis since 2010. And the objective obviously was providing more insights into the data, providing some granularity about the types of requests that we were receiving. But I think there's also a broader objective that runs across our reports, which is to understand more about the policies and actions of governments, and also companies as well. There's an accountability measure there. And we certainly have seen, and we'll probably talk about this a little bit more, it's been actually very important in informing the public debate around government surveillance and government access to user data. So we've been heartened to see that this data has actually been useful in public policy discussions. And we continue to work on new transparency reports and iterations of existing transparency reports. Some of them are updated every day, others are updated every six months. But I think the overarching philosophy remains the same: we never feel like we do enough.

Yeah, and I think the funniest part is that Google's been doing transparency reports for longer than Uber has existed altogether.
So we're up and coming at this. We do publish a government access to user information transparency report that we actually just updated two weeks ago, and we do that annually. But I would love to talk about how we see transparency as Uber straddles the intersection between the virtual and physical worlds. We do 45 trips per second in the US alone. So every second, as we're sitting here, there are literally 45 Uber trips happening out there in the real world. And that does not come without its challenges. The safety issues that plague every industry and community, from road safety to personal safety to sexual assault and sexual misconduct: if it happens in the world, it'll be reflected on our platform. And we feel that it's our duty as a company to let the world know the magnitude of the problem and shine a light on these tough issues. It doesn't mean that by publishing a safety transparency report, which we committed to doing last year and are on track to publish later this year, that Uber is less safe or unsafe or any different from what goes on in the real world, but rather that we are telling you exactly what is going on, and hopefully inspiring other companies that operate in the real world to be as transparent, so we can all do something about it.

And Allen, obviously you're very focused on real-world outcomes, dealing with medicine and patient data. So can you tell us a little bit about what transparency reporting means to you?

Sure, it means a few things. I think when people talk about transparency in healthcare, one of the first things they think about is, if a doctor or hospital makes a mistake, are we actually making sure the patient knows about it? And we have not done well historically in healthcare with talking about mistakes that have happened, especially as the patient safety movement has gotten a big push.
We're seeing people get better at it, but to me that's just one aspect of transparency, because when a mistake's made, patients should know; frankly, it helps them understand how they can manage their care better going forward and helps bring closure to difficult events. We've actually seen the government get involved with regard to transparency in healthcare as well, which has to do with outcomes. For those of you that are familiar with CMS, the Centers for Medicare and Medicaid Services, our big government payer, they require hospitals to submit data on certain process and outcome measures that you can actually go on the web and look up, with regard to how hospitals do on heart attack care, pneumonia care, or even patient experience. And that has really helped; it's been in parallel with what we've seen healthcare institutions doing, which is putting their outcomes on their own websites for patients to see. And the idea is, we actually know from data that patients don't necessarily look at outcomes like the CMS sites or the ones that hospitals put up, but they seem to drive the industry to do better, because those that lead these organizations want to be some of the better-performing ones. So a lot of us believe that not only is it good to put the data out there, but it can drive improvements. And the last piece around transparency that we think about, and I think this is what Google deals with more, is how is patient data being handled? I firmly believe patients should know where their information is flowing to or where it's being collected. That way patients can understand, for what they believe is their private information, who has it and who doesn't. So those are the three big areas I think about when I think about transparency in healthcare.

And Spandana, your work is really sort of researching and pushing for social media companies and these tech platforms that may not be regulated by government law to be more transparent and accountable to users.
So can you tell us sort of what that means to you?

Sure. So OTI has been a longstanding advocate for tech platforms to demonstrate greater transparency and accountability around a range of their practices. In 2016 we published our first transparency reporting toolkit, which outlined best practices for how companies should be reporting on government requests for user data, like David talked about. Last year we released the second transparency reporting toolkit, which highlights best practices around reporting on content takedown requests and the processes that companies have. And then more recently we've started pushing companies to demonstrate greater transparency and accountability around their algorithmic decision-making practices, particularly how they use algorithms to shape the online experiences of users. And for us, transparency is a very valuable way for companies to build trusting relationships with the public and with policymakers, and for them to spotlight important policy issues. But it's also an important way for the public and policymakers to hold these companies accountable for the whole gamut of their operations.

And Rana, you talked a lot about how we think of Uber as this tech platform, but it's also facilitating real-world interactions. Can you tell us a little bit more about how, with the safety report you're releasing soon, you developed a way of quantifying those real-world interactions and providing accountability to users?

So we made the commitment last year to publish our safety transparency report, which will see us voluntarily and proactively release information publicly about our safety practices and measures, policies for driver screenings, and safety products, as well as data on the most serious safety incidents that took place on our platform. And those include motor vehicle fatalities, personal safety incidents, as well as sexual assaults that took place on our platform.
And the issue is, when you're talking about the most serious incidents, no number is gonna be a good number, right? What you're talking about is someone who passed away in a car crash, or someone who got sexually assaulted, even raped, on or because of an Uber-facilitated trip. And while we recognize that no number is gonna be a good number, it's important for us to hold ourselves accountable to the trust of our users, riders, drivers, and third parties that exist in the real world that we operate in: accountable for what happened on our platform, what we're doing to address it, what we're doing to prevent it, and also the numbers of what happened on our platform. And we're hoping that the rising tide will lift all boats, that all other companies will join us in taking this admittedly bold step. We hope that people will react to the report positively and that it'll be seen for what it is: a tool to raise the bar on accountability and ensure that others talk about the tough issues and really make it a top priority to address them and talk about them openly. Companies are not currently required to publish safety reports or to release numbers, and not just tech companies: no other industry really publishes much, especially around sexual violence. And just because the numbers are not out there, it doesn't mean that the problem is not happening. Sexual assault is the most underreported crime and the least likely to result in a prison sentence in the real world, and the stats are devastating in terms of how underreported it is. Only 25% of sexual assaults are reported to law enforcement, and that's only for the very serious ones. And when you look at college campuses, one in every four undergraduate women has experienced sexual assault, often within the first year of even being on campus. So it happens everywhere. People are just not telling you about it and no one's coming forward.
So we're hoping, given our size and scale and the fact that we've become a reflection of what happens in the real world, that we shine a light on these issues and inspire others to do more.

So you just outlined a lot of obstacles there in terms of collecting this data, in terms of it not actually being required of most companies. Did you guys have to start from scratch to even develop a reporting process, or did you sort of build off what was already there?

So to be very honest, we were starting from a blank slate. We made the commitment and then we were like, all right, what do we do now? And when we looked at our data, we could not find a consistent way to classify especially incidents of sexual assault and misconduct. It simply didn't exist. And it's not our place. We're a tech company doing ride-sharing and food delivery. We're a tech platform. It's not our place or our expertise to categorize incidents of sexual violence. So we had to turn to the experts. We worked with the National Sexual Violence Resource Center. We came to them with a problem, and they developed a taxonomy for categorizing and counting incidents of sexual misconduct and assault consistently. And then we had to go back in time and apply that taxonomy to safety incidents that were reported to us. And that's why we announced it last year and we're publishing it later this year. It was a very thorough effort, but we wanted to make sure that we get the data right, so we present the problem as it is and we hold everyone to a high bar of data quality. And that taxonomy is open source and available for any other company or industry or transit provider or hotel company to apply to their business as well.

Now, Allen, I know we think of medicine as a really quantitative field, but you had mentioned that, in thinking about patient quality, there actually aren't really standardized measures across different hospitals or medical care systems.
So can you tell the audience a little bit about the challenges there in even developing these standards, and sort of how to create a consistent reporting process?

I'd actually like to build a little bit more on that, because it resonated a lot. I think one of the biggest things that we all need to remember is that as we ask for more transparency from any industry that's providing a service to us, the numbers are not always gonna look good. In healthcare, we have errors, we have deaths that result from medical errors, and if we punish everybody for putting those numbers out there, we'll never get better. The key is to get the numbers out there so that people understand what risk there is and so that we can all collectively work on it together. And I think that's the key piece that we need to continually push for with regard to transparency. And measurement is a challenge. It's a big issue in healthcare. CMS has developed a lot of measures that they want us to report on, but the challenges are twofold. Well, actually, there are three. One is actually getting data out of the system. Many of our electronic systems have been designed to be transactional rather than helping us analyze how we've been doing with regard to performance, and they were built without data abstraction in mind. So what we're learning in healthcare as we develop these measures is that getting the data out turns out to be a big, big process for us. The other big challenge, of course, is getting people to agree on what is the right thing to measure. If you ask 10 different hospitals the same question, you might get 10 different opinions, and coming to consensus on that is very hard. And then even once you come to a consensus that a measure's important, how do you define it? What elements are you gonna abstract to make this measure come to life? It remains a big challenge.
What we're seeing, and there's been a lot of talk in the federal government about this lately too, is that despite the fact that we've had a number of measures in healthcare that we've been trying to improve upon, people still feel like the quality's not improved as much as we'd like it to. So there are many of us who are pushing for better measures and better metrics, and that's just gonna take time for us to develop.

If I can build off of that, that's something that we've also seen around things like content takedown reporting: this challenge of metrics and standardization. So for example, right now both Facebook and Twitter publish transparency reports highlighting how they remove content based on their content policies. But Facebook reports on the number of pieces of content they remove for each category and the number that they've identified, and then Twitter reports on the number of accounts they've identified and removed. And so that makes it difficult to actually understand how these companies are operating as an industry and how these initiatives and efforts are actually impacting user expression.

David, do you have thoughts to share on that?

Yeah, it's an interesting question, the question around standardization. It's challenging, and the work that OTI has done is stellar in terms of providing a lodestar for companies that haven't reported before, or smaller companies, in terms of what the best practices are when they report this type of data. It becomes, I think, for the reasons you mentioned, a little bit more challenging because companies can use different metrics. But even with a specific piece of content, where we're thinking about content removal, companies will have different policies between and amongst them, but also between and amongst their services.
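As a purely illustrative aside, the metric-incomparability point Spandana raises can be made concrete with invented numbers: the same set of removals produces very different headline figures depending on whether a report counts pieces of content, as she notes Facebook does, or accounts, as she notes Twitter does. The data below is made up for illustration only.

```python
# Toy data: each tuple is (account_id, content_id) for one removed item.
# Three items from account "a", one each from "b" and "c" -- all invented.
removals = [("a", 1), ("a", 2), ("a", 3), ("b", 4), ("c", 5)]

# Metric style 1: count every removed piece of content.
pieces_removed = len(removals)

# Metric style 2: count distinct accounts actioned.
accounts_actioned = len({acct for acct, _ in removals})

print(pieces_removed)     # 5
print(accounts_actioned)  # 3
```

The same moderation activity yields a headline of 5 under one metric and 3 under the other, so comparing the two companies' raw numbers says nothing about which one removes more speech.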
There may be some services that are less restrictive and more open in terms of the types of content that they won't take down, and then others where they will take down content based on community standards. The legal obligations that companies are subject to complicate things further, because they may be more oriented towards specific types of companies or services: social networking as opposed to something like search, or vice versa. I think from the standardization perspective it's a really interesting question, and I sort of wonder aloud whether, in a world where companies are focused on trying to provide metrics for all of the right reasons, they may be less innovative, in the sense that they know that there's a checklist and that's what they're aiming toward, rather than thinking about how do we move our reporting forward and also how do we inform the broader debate? So there are those sorts of tensions that I know exist with that kind of issue.

And David, I was curious. So we're talking about reporting a lot of numbers that people have a visceral reaction to: sexual assault, patient care, hate speech. One of the things you focus on at Google is this idea of government requests, and I feel like we all think of that as, well, is that a good thing? Is that a bad thing? What do these numbers actually mean? I'm curious if you could talk a little bit about how you take those numbers and make them accessible to consumers and actually make sense of them.

Yeah, so I think when we first started reporting, in 2009, worldwide we received something in the neighborhood of 12,000 or 12,500 requests, and that number is now, in 2019, somewhere in the 63,000 range. So it's grown considerably. I don't think that's necessarily reflective of the government's appetite for user data as much as it is reflective of how much we've grown as a company and how much our user base has grown.
But those numbers writ large don't really tell a story. I mean, they're susceptible to characterizations, but we've tried to figure out, well, how do we help inform the debate? As an example, when we testified in 2013, I think before the Senate or House Judiciary Committee, on updating the Electronic Communications Privacy Act, a representative from the law enforcement community testified that companies, or at least some companies, have a policy of categorically refusing emergency requests under the Electronic Communications Privacy Act. And for those of you who don't know what that means, the Electronic Communications Privacy Act authorizes, but does not require, companies to voluntarily disclose data in emergency situations where there's a serious risk of bodily injury or death. It was not our policy to categorically refuse those requests and demand compulsory legal process, but we were sitting there next to this representative and we were certainly concerned about the implication that this was Google's practice. The equities here: obviously the privacy and security of our users is important, but in the same vein, so is public safety, when we have the ability, for example, in an emergency situation to prevent somebody from being the victim of a crime or serious bodily injury or death. And we wanted to provide more insight into that particular universe of requests as a result, I think, of what was being implied in the testimony. And so in our next transparency report, we did disclose that data. What it showed is that we got requests for this data and that we tended to provide data in the majority of those instances. The instances where we did not disclose data were in some cases because it was not truly an emergency involving a serious risk of bodily injury or death.
One of the other important pieces of context in that debate was that there was a proposal at the time to change the law so that if law enforcement came to Google and to other companies and said, we have an emergency, we would not have had a choice. We would have had to disclose, almost as if the magic word "emergency" were said and we would be reflexively disclosing data. We resisted that. We thought it was a bad idea, not only from a public policy perspective but from a public safety perspective. We think the processes work quite well. We think our numbers bear that out. And as a result of publishing that data, other companies followed suit. And I think what policymakers saw was that companies were not categorically refusing these types of requests. They took them very seriously. And I think since then, companies have talked and been more transparent about this, because these are the highest-priority requests for companies. We and others are staffed on a 24-7, 365 basis to respond to them. And we tend to get back to law enforcement agencies often within an hour, but certainly within hours, when we get them.

So I know tech companies, Facebook, Twitter, have seen sort of a record number of government requests in the past year. Spandana, I'm wondering if you've noticed similar trends in terms of how those social media companies are responding to requests, or if there are any takeaways at large.

Yeah, I mean, I think, like David mentioned, there is an increase not only because of governments recognizing that this is something that they can do, but also because of the sheer size of platforms. But I actually wanted, if I could just take that and run with it: I think it's important to recognize that tech companies are not the only kinds of companies that receive these requests. So, for example, universities receive these requests.
So schools like UC Berkeley have actually started issuing their own transparency reports on these issues, specifically for government requests for personal data, so like students and staff members and any contractors that they may work with. And at the same time, there are a number of companies that comply with these requests not because they're legally required to, but because they voluntarily choose to. So last year, Motel 6 and 7-Eleven were found to have provided user information, like guest lists, to immigration officials. But, as far as I know, they don't issue transparency reports. And so there are a number of insights that I think these kinds of companies can draw from companies that do issue these reports voluntarily, that they can use to rebuild trusting relationships with their consumers.

Does anyone else have thoughts on sort of that balance between transparency and consumer relations?

I can add a little point here. A lot of the knee-jerk reaction to hearing that safety incidents take place on Uber is, why don't you just tell the police about it? Why don't you let law enforcement know? Like, this should be the right thing to do. But then, when you think about it, especially for cases of sexual assault, we work with the experts again for advice, because that's not our place to decide. And they tell us that it's important for victim and survivor agency to let them make that decision for themselves. Sometimes somebody's reaching out to let us know that something happened to them on our platform to make sure that that person is taken off the platform and cannot do that to anybody else, but they don't want anyone else to know. They don't wanna go through the revictimization and the trauma of reporting to law enforcement if they don't want to. And sometimes they might not think that much will come of it, or they might have other reasons for not wanting to report to law enforcement.
So we do recognize the responsibility to share the numbers publicly, to let people know the magnitude of issues that exist everywhere and present themselves on our platform. But for individual cases like that, we can't make further decisions. All we can offer is collaboration and support, and in case survivors decide to report to law enforcement, we collaborate with them to provide all the information that we can.

I feel like a bigger theme is emerging here, sort of these different trends in corporate governance. We talked a lot about the tech industry. Allen, I'm wondering if there are things you're seeing in the healthcare industry that are either similar or are changing the way we talk about transparency?

I think we grapple with the same types of issues, which is that just getting that data out there is what a lot of us believe will lead to improvement. And it's getting people comfortable with understanding the numbers won't be perfect, just as we were talking about earlier. I think that is what we're seeing in healthcare as well. We've started to realize that if we really want to get better as an industry, we've got to come up with numbers that we share with our boards, share with our patients, share with our providers, so that we all understand where we're at. I actually do think we need a little bit more standardization, because you can't compare one to another unless it's being measured the same way, and that is a big challenge that we have not solved yet. Sorry about this, I thought I'd put it on silent. Clearly I didn't, my apologies. So I think that's what we're seeing in healthcare, and we continue to see a big push in that regard. Just one question I have that I think about, because I'm a user of Google and Uber: if the government comes and asks for my email information, does Google come and tell me? How do you guys grapple with that issue? When do you decide whether to tell me that my data was taken or not?
Yeah, that's a great question. I mean, our policy is to notify our users unless we're prohibited by law, when the legal process is accompanied by a non-disclosure order and we're precluded from notifying. But this is a real source of tension, and I think that continues to linger in a lot of the debates that we have as other countries, including the European Union now, are approaching their own standards for digital evidence. But in the physical world, you can probably just watch enough sort of old-school television to know that notice is contemporaneous with the issuance of a search warrant. The cops are at the door and they're coming into your house. From our perspective, at least at a policy level, the same rules that apply in the physical world should apply in the electronic arena as far as providing notice, because there are important equities on the user side. They may have information in their email that is subject to a legal privilege, or they may wish to contest the underlying legal process that's been issued. So for those reasons, I think Google and other companies as well have been very forward-leaning in providing that notice. Some folks think that that actually should be an obligation that shouldn't just fall on the companies but should apply to the government as well: direct notice to users.

I mean, David said it much better than I ever could, but we kind of think about it the same way. We believe that a lot of it lies with the government, in how regulations around user information sharing are drafted, and we require due legal process before releasing any user information. And I could speak to you off the panel about my previous life in a very different market, where I covered these issues for Uber, where you have governments that ask for data without due process, even at a very large scale, including personally identifiable information.
And it remains very challenging to get regulations to the right level and balance between privacy and safety concerns.

And I think the concept of notice is also something we've seen companies expand into with content takedowns. So if your content is removed for violating a policy, then you should be notified. Or if you flag someone's content, you should be able to be notified of the outcome. And that's something that we've also been pushing companies to provide around their use of algorithms as well.

And I asked about that simply because that's another aspect of transparency: people understanding how you're working or how the platform is working. And we're gonna have that challenge in healthcare too, as we talk about interoperability of electronic health records. Everybody believes, and I get it, that if information's available to all the doctors, if you're in hospital A and you can see the records for a patient from hospitals B and C, obviously you can do a better job of delivering better care. But we need to make sure patients understand that their information may be flowing from one hospital to the other, or flowing from the pharmacy into the hospital or the clinic or wherever they're getting their care. And we've just got to figure out: how are we gonna make sure patients know and understand this? Because that's the key to all of this.

And building on that, as we work towards publishing our safety report, we got a lot of feedback from advocates we're working with: why do you just include the numbers? You should also include what you're doing about it. What do you do to ensure safety on your platform? That people get in the right cars, that people have every possible means to report things when they do happen, that you have thorough background checks, that people are held to high standards of safety, both on the rider and driver side. So yeah, it matters just as much to also talk about what you're doing about it.
Allen, you made me think of this, but this idea of interoperability, data portability, is something we think about in tech too. In fact, there was a hearing this morning on privacy and data, and we're talking a lot about government requests, but it's not just sharing data with the government, right? It's sharing it with advertisers or other companies. And I'm wondering if anyone has thoughts on sort of transparency around those practices and sort of what themes are developing as we think more about data privacy?

Yeah, I'm pretty confident, I know that we don't share our data with marketers, which I think is a good thing, right? But I think this is a great question, because we've learned, at least I've learned from watching and reading the news, that this information is incredibly valuable to marketers and other companies, and I think the one fundamental decision we haven't made in this country is: whose data is that? And I think that's an open question. I personally believe it is the user's data, but I don't think that's where we've landed in this country. But these guys will know much more about that than I do.

Yeah, I mean, I think one of the things that we're working on at OTI right now is pushing companies to disclose to users how they share this data with third parties, like advertisers, and to also give them more controls over whether they want that data collected in the first place, shared, or used to inform algorithms that advertisers can use. And so I definitely think that there's a lot more to be done in terms of giving users more agency over their experiences, for sure.

I mean, we've done data portability since, I think, 2010 or 2011, and I know at the time it was a radical idea, a radical notion, that you would build a team of engineers whose sole purpose it was to ensure users could get their data out of Google as quickly and easily as possible.
But I think obviously those discussions have matured quite a bit, to the point now where data portability is being talked about as an important component of privacy legislation and the rights that users have. And I think when you look at the types of proposals that are put forward, putting aside for a moment the question of whether users own their data or not, the presupposition is that they can take their data and move it to another service, and that having that ability promotes competition in the marketplace. We've been spearheading a project, and other companies are part of it too, the Data Transfer Project, that we hope will encourage and spur that type of interoperability, so that users can avail themselves, if they choose, of different products and services that might provide similar functionality, without having to build up their data set from scratch. And from our end, I guess our view again intersects the real and digital worlds. We tend to only collect what we need. So we don't, for example, proactively collect any demographic information, especially on the rider side, but on the driver side we have to do background checks, for example. And that's why we work with a third party to do our background checks, which include motor vehicle records and criminal records, and we have technology that pulls that continuously, on a proactive basis, to make sure that we capture anything on an ongoing basis. So we're still new to many of these challenges, and we've gotta solve for the right balance between having just enough data to ensure that we're safe in the real world but not collecting more than we need, or find creative ways of working with third parties, just like our partnership with Checkr, to capture both things. What is Checkr, for people in the audience?
Checkr is a tech company that provides third-party solutions for background checks, and it's integrated with all the federal and state databases for background checks, including motor vehicle records and criminal records at the state and city level, like court records; pretty much everything you need to know about a person in order to let them onto a platform in the gig economy. So they also work with other ride-sharing companies and with other gig-economy platform companies that facilitate interactions with the real world. And Rana, you mentioned earlier that obviously Uber is an international company, and reporting standards vary so much, even city to city in the United States. Can you talk a little bit about the challenges of having a transparent company while dealing with these many different regulators? That's a great question. We operate in over 60 countries in the world, and you can imagine how many languages, cultures, governments, political views, and themes we grapple with everywhere we operate. So we're asked by regulators to report on so many different things, from driver earnings to content, sorry, to government access to user data, to safety data, and everything else in between. And it is very challenging, I gotta admit, for many of these. To my point earlier, Uber is not even 10 years old, and a lot of these issues are brand new to companies and governments alike, and there are no playbooks for most of it. So it has been very challenging, and we collaborate as closely as possible with governments around the world to show them what it's really like, to help draft standards, and to show best practices from different cities or states that are at a later stage in the legislative process. But then something like our safety report is voluntary and proactive. It is not something we're required to publish by law, but we felt that this would be the right bold step to take, and we went about creating the standard and made it available out there.
So there's this and that, and yeah, it is very challenging to navigate different cultures, and that's why we're starting in the U.S. with our safety report. We're still yet to figure out how that will play out in other countries and cultures and languages and nuances. Do other panelists have thoughts on that? It's a balancing act; it depends on the culture that you're in and the regulators that you get. In my experience, when we've been open with our regulators about what's gone wrong and what we're doing to fix it, there are many regulators who, as they should, embrace that concept, because again, we're all going to make mistakes. These things are gonna go wrong. It's all about how we respond to it, right? But then at the same time, I've seen other regulators take a much harsher stance when you talk about what mistakes happened, and I see the challenge there. And I think this goes back to what we were saying earlier, which is that as we push for transparency, we've gotta get everybody to understand that we're not gonna be perfect in what we do, and that it's about putting those mistakes out there so that we can get better. And that's the challenge we need to keep pushing at. So we've talked a little bit about how these transparency reports intersect with actual policy discussions or decisions. I wanted to just open it up to anyone on the panel: were there instances where a transparency report resulted in a policy change, or where you think it really made an impact on a policy discussion? Yeah, I can go. So I think over the past few years, a lot of tech platforms have received criticism for not doing enough to remove harmful content like terror propaganda or hate speech. And a lot of governments around the world have considered or have passed legislation in order to make platforms perform better, as they may perceive it.
And I think by issuing transparency reports which highlight the scope and scale of content removals of terror propaganda and hate speech, those platforms have actually been able to add more nuance to those policy debates and sort of demonstrate exactly what they're doing and, depending on the metrics, what the impact of those efforts may be. Now, I mentioned the emergency request issue. I think that's sort of the exhibit A, in my view, for how transparency can help to inform the public debate, or at the very least disabuse people of a preconceived notion that they might have about whether we were requiring compulsory legal process for emergency requests. But I also hearken back to the aftermath of the Snowden revelations, when companies like Google couldn't even acknowledge that we received national security demands. I don't know that most people know that six years ago, after the Snowden revelations, we were in this odd situation where we could talk about these issues but couldn't even acknowledge that we got national security demands, even though it was fairly obvious to everybody that we did. I spoke, I think, at a Cato surveillance conference in October of 2013 where we had to skirt around this issue and say, well, assuming that we receive them, without actually admitting that that's the case. And then even the next month, my colleague Rick Salgado testified before the Senate Judiciary Committee in support of legislation that would enable us to report this information. And even he couldn't acknowledge that we get these demands, and it led to this kind of comical exchange between him and Senator Leahy, if I'm recalling correctly. But ultimately, in the USA Freedom Act in 2015, we obtained some statutory language that we worked hard to secure that enables us to disclose information about the volume, nature, and type of national security demands that we receive.
And the Department of Justice was an important partner and stakeholder in ensuring that we reached consensus about the ability to do that. One of the issues I think that we're grappling with right now, with five years of experience and the benefit of hindsight, is whether it's necessary to constrain companies within the bands that we're currently reporting in, saying that we get zero to 499 requests, with 500 to 999 users impacted. I think, with the benefit of experience, that whether we say we get between zero and 499 requests or just throw a number out there, say 271, there isn't an adverse impact on national security, and the USA Freedom Act itself actually addressed some of these challenges for national security by ensuring that we weren't reporting requests that we received in the last 180 days, or, for a new product, platform, or service, that we weren't reporting anything until 540 days later. So I'm hopeful, because we're in the midst now of reauthorizing several provisions in the USA Freedom Act. I think there's going to be an extension until March of 2020. The corporate reporting provisions that I'm speaking to aren't going to lapse; they're permanent. But I think there are very important transparency and First Amendment implications in saying to a company, yes, you can report truthful and accurate information about an issue of public concern, but you can only do it within these certain parameters. Those types of restrictions have in the past been disfavored by the Supreme Court as prior restraints on speech, and even as content-based restrictions. And so we think there's value in being more granular. We also think there's value in being more granular with our users and the broader public about the types of requests that we're receiving, so that we can say, for this particular provision under FISA, this is how many demands we got. And so we're hopeful we're going to have a discussion about that moving forward. Yeah, I mean, that's really interesting.
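[As an aside for readers: the banded reporting described here amounts to a simple rounding rule. The sketch below is purely illustrative, not any provider's actual reporting code; the band width of 500 mirrors the "zero to 499" and "500 to 999" bands mentioned above.]

```python
def report_band(count: int, width: int = 500) -> str:
    """Map an exact request count to the disclosure band it falls in.

    Illustrative only: models the "0 to 499", "500 to 999" style bands
    that providers are currently limited to reporting in.
    """
    if count < 0:
        raise ValueError("count must be non-negative")
    low = (count // width) * width
    return f"{low}-{low + width - 1}"

# An exact figure of 271 requests may only be disclosed as the band "0-499".
print(report_band(271))  # prints "0-499"
```

The point of the debate above is that the exact figure (271) and its band (0-499) arguably carry the same national security risk, while the exact figure is far more informative to the public.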
We've talked a lot about what the government requires companies to disclose, but I'm curious if other panelists have run into instances where you actually aren't allowed to disclose data that you think would be helpful to consumers. That's an interesting one. So, given our size and scale, we have a lot of interesting insights about how people travel: trends around holidays, around changes in infrastructure. And we had a team that worked on a project called Uber Movement for two years. It's a tool that we developed for cities and governments to research trends based on information collected in aggregate from Uber trips. We have all these millions of probes; every single second there are 45 trips happening, and that's a lot of sensors gauging how crowded things are out there, whether there are any particular policy changes, like a gas price increase, or a new flyover or a new highway or a closure, or holidays, or all these seasonal trends. So back then we were a much younger company, and it took a lot of working with city and state governments to figure out what would be useful for them and what we could share at the aggregate level that would help them make infrastructure decisions and give them interesting insights, without obviously violating any of the privacy obligations that we have. So it's an interesting balance. And all these things come up when you're a tech company and you operate in the real, physical world. Alan, do you have anything? I'm trying to think of an example where we've wanted to disclose something and the government has said no, and I can't come up with anything right now; I'll keep thinking. But you know, in general, our stance is that we need to protect our patients' information, because that's what we tell them we're gonna do. In fact, the law backs it up as well. So if something comes up, I'll let you know; I can't think of anything right now.
Yeah, so one driving theme here is that transparency is really for the consumers, and getting this data out there is a way of helping them understand your companies or your institutions. I guess I'm curious if there are examples where consumer reaction or desire has really shaped a transparency report, or how you think about the reporting process. It seems like that's certainly come up in how we're thinking about sexual assault data at Uber. Yeah, and you hear about sexual assaults happening everywhere: in the workplace, on campus, in the military, on cruise ships. And actually, cruise ships have a legal, regulatory requirement to report on crimes that happen. So you hear about all these individual incidents, and there have certainly been calls from consumers, and I would even say regulators and advocacy organizations, to share the real numbers, and that's definitely inspired what we're trying to do with our safety transparency report. We believe that getting the numbers out there doesn't mean that anybody is safer, that any company is safer or better. Safety should not be proprietary. Safety is something that should be a very high bar that every industry and company is held to, and that's inspired by what consumers tell us they wanna know and what advocates tell us would be helpful to disclose. And we're trying to put something out there and hopefully inspire others to do the same. And I think one thing that's really interesting is not just how transparency reports drive public policy but also internal policy. Thinking about Facebook, this content oversight board, this sort of Supreme Court, really seems like it stemmed from this idea that we need transparency reports and we need a new mechanism of regulating content. Spandi, do you have any thoughts on that?
Yeah, I think the Facebook content oversight board, which is gonna be up and running soon, offers a really interesting new method for thinking about content moderation and providing an appeals process that is sort of independent from Facebook's own processes. But I do think we have to recognize that the number of cases that board is gonna review is gonna be quite small. And so I'm not entirely sure that it is some magical solution to the content moderation problems that they are facing, but I do think it'll be an interesting experiment to look at and see how it plays out. We do have some experience with this in the aftermath of the Costeja decision in the European Court of Justice, and now the right to be forgotten under the GDPR, where we receive requests to delist content from users in Europe. And I think we're now in the area of 870,000, or something close to 870,000, requests covering maybe three and a half million URLs that we've been asked to delist from our search engine. And we gathered a lot of input from consumers, experts, and third-party organizations in the aftermath of the decision because we wanted to get this right. We wanted to achieve, I hate using the term balance, but we wanted to protect both the privacy interests of Europeans and, at the same time, the values of free expression that have been part of our ethos since the company's inception. So we certainly sought out a lot of input from other stakeholders, and that is now reflected in our transparency report, even if it's as simple as providing examples of instances where we chose not to delist content versus when we did, as a result of our interpretation of European data protection law. But I will concede that two companies could take a look at the very same facts and come to different conclusions about whether content should be delisted or not.
And I think that's why there's an important role for the data protection authorities to play, too, as the sort of overseers of this right that's been created. I'm curious, when we're talking about overseers, if anyone on the panel thinks that there should be more government regulation in terms of data that is reported, or if that's something that your industry actually needs or something you're thinking about. I'll just answer this one. I don't think there should be more government regulation, but what I think the government can help us do is with standardization around metrics, or helping us come to agreement on what the metrics are and how to measure them, because again, that can help people, consumers, compare across whatever groups we're looking at. And I think that's where the government can help, because at the end of the day, this is all for the public. I think that there's a role for governments to play, too. We talk a lot, I think, about the role that companies have to play in producing this type of data, but we're one part of a broader spectrum, and the government has the bigger picture. Think about government requests for user data: Facebook and Google and Microsoft and Twitter and other companies can report this data, but we won't have the same type of comprehensive data that the US Department of Justice, or even a smaller unit of government, would have to really provide more useful insights. It's entirely possible that our data is not reflective of broader practices, either from other companies or from the government, and so it's useful, I think, to have that more holistic picture. And I know that there are efforts under way, probably in different parts of the world but also here, to ensure that there's more specificity, that there's more data being provided. You think about the debate we're having around encryption.
I think there have always been calls for more data that can help to inform the debate and understand the true scope of the problem. I have a good analogy here. So college campuses in the US are required to release safety reports, including crimes that happen on campus, and that includes sexual assaults reported to either law enforcement or to the Title IX coordinator on campus, under the Clery Act. So this is a requirement by law, and if you read some of these reports for some of the most reputable colleges in the US, you would sometimes find zero sexual assaults reported. And then when you read surveys, there's the Association of American Universities survey that is released every four years, it tells you that one in every four undergraduate college women has been raped, not just experienced lower-level sexual misconduct, and the difference is pretty staggering. It's kind of like all colleges saying, oh, that stuff happens, but not on my campus; we are a sexual-assault-free campus. Zero is not a good number. Zero means they're not telling you the real story. So sometimes regulation is not the answer in isolation, because it just becomes a checkbox. In the absence of standardization, companies and colleges and entities and civil society holding everyone accountable to a very high standard of data reporting and a very stringent bar for transparency, to release the actual image of what's really going on, and making it easy to report, is really important. As I mentioned, most sexual assaults that happen in Ubers and in the world are not even reported to the police, and that's because there are so many barriers to reporting serious incidents that happen in the real world. So making it easy to report and holding everyone accountable to a high bar of transparency is what's really gonna drive us forward, more than just regulation applied in isolation.
And it's also important to note that these report numbers will go up. When we publish our safety report and others do the same, more survivors will come forward and more companies will come forward. And that's because many issues are just underreported; they're happening, but they're just not captured anywhere. And that's something that we all, as companies and industries, have to be comfortable with. And it doesn't mean that anyone is getting worse at it; it just means that more people are coming forward, and that's good for everyone. Spandi, do we see similar tensions in the social media industry? I'm thinking YouTube's CEO did an interview on 60 Minutes this Sunday, and one of her takeaways was, hate speech isn't illegal in a lot of ways in the United States, so what do you expect us to do about it? So I'm curious if there are similar questions there of, maybe this thing needs to be regulated by the government first so that tech companies have more authority to step in. Well, I think in the US you have the First Amendment, and the First Amendment limits the extent to which the government can tell platforms how to moderate content. And so that means that platforms have both the flexibility but also the responsibility to create their own content policies and ensure that they can remove harmful content but then also safeguard free expression. So because of the First Amendment, I don't think the government really has that much of a role there. But I would disagree with what she said: it is therefore the platform's responsibility to do more and to ensure that the platform is safe for its users, and that they are addressing instances of harmful content as much as possible and where appropriate. And so, I guess, we've talked a lot about a key idea here, getting the numbers out to people, but what we haven't talked about so much is how you actually do that.
When I think of a transparency report, I think of Facebook releasing a PDF and every tech journalist on Twitter trying to suss it out. Are there best practices in terms of actually getting this information from your company or institution to the people that use your product, or the people that are your patients? I think it's a really challenging question to answer, in part because we have 12 transparency reports. Some of them are updated twice a year; some of them are updated every day. And there may be useful tidbits from the reports that we're updating on an ongoing basis, and then there are questions about to what extent, if at all, you highlight the release of any particular report. There are some that just tend to get a little bit more attention, whether it's content removals or even government requests for user data, but that's challenging too, because there's now more data. Think about our government requests for user data report: there's more data in that report, and what we think might be insightful might not be to someone else, so it's hard to know whether we wanna direct people to certain pieces of information. We used to talk a little bit more about the increases that we saw, but for the reasons I mentioned before, I think we concluded that it wasn't particularly insightful to say we got X percentage of increase over a six-month period, because there are lots of factors that could shape it. You know, I do think we make an effort to provide some high-level information where we can, but it is challenging to understand what it is that our users and the broader public want to see and what will be truly informative. I think this is a hard one. For me, it has to be driven by who the audience is, so for us, I'll talk about patients.
We can put outcome data out there, but we gotta make sure that they understand what it means, so that they can actually appreciate it with regard to the choices that they're making. But the other big real-time piece that I think a lot of organizations are working on is that if a mistake is made in their care, we should tell them right at that time that a mistake has been made and what we're gonna do to fix it. And to me, that's kind of the core piece of transparency right there: we made a mistake, we're telling you right now, it's not gonna be in some report that you're gonna read about later, and we're gonna tell you how we're gonna fix it. It's a lot easier said than done, but I think that's where we've got to get to. One is telling people in real time how that information impacts them, and two, doing it in a meaningful way, so they actually understand what that data means for them. Yeah, I think that's pretty similar. In the content takedown scenario, the user should have notice and the user should have appeals, and that's their immediate access to transparency. But the report as a whole, and the numbers in it, I feel are primarily used by researchers and advocates. I don't think those reports are necessarily packaged with the intention of a user sifting through those data points and trying to figure out what it means for how they use the service. But I would say that it is important that platforms, because these reports are used by researchers and advocates, present this data in a manner that is machine-readable and easy to access and easy to sort through, which I think has been one of the biggest criticisms around transparency reporting by platforms in general. In our world, people actually come to us with reports, unlike Alan's world.
People come to us with reports when things go wrong, and we care a lot about making sure that whatever happened, even if it's a wrong fare, or there was a quality issue with a car, or the driver was playing with their phone and not being attentive, from very low-level incidents to very serious incidents, we focus a lot on making sure we get it right. So if it warrants a deactivation, that person is taken off the platform, and whether it's the rider or the driver, we provide them with access to everything they need. If it's a serious incident, we provide a lot of resources to get help, and we let them know what comes of it, too. So it's kind of the other side of the fence from that. You know, in thinking about this, we've talked about how to present the data, and about getting the data out there. Are there other big lessons that you would point to for people in the audience, or people live streaming, that are thinking about creating their own transparency reports or grappling with this process for their company or industry? It's not a push of a button. It's not a spreadsheet that is generated immediately. It's not easy to classify, categorize, and count correctly, and to decide what to count, what to include, what to exclude. Whether you operate entirely in the digital world or in the physical world as well, it comes with its challenges. There are so many nuances and small interactions and things that you have to decide whether to capture or not. How to classify things, how you capture reports into the system, even if your systems are properly designed for that, it is not instant. Sometimes regulators, especially regulators in law enforcement, feel like, why don't you just share this information anyway? Why don't you give me a cut of the data, or why don't you release it every month? But it's not a push of a button. It's not a spreadsheet.
There's a lot of work, and for companies and others to get it right, we all gotta hold each other, I sound like a broken record, but it's really important to hold each other to a data quality standard that makes sense and adds value for everyone and solves for the issues that we're hoping to address with this. Yeah, I would echo that sentiment. There are a lot of engineering and human resources that go into the production of a transparency report, and I think what I would say to any company that's putting forth a transparency report is that it's a long-term commitment. If you're gonna put out a report on government requests for user data, there's gonna be an expectation that you're going to continue to publish that data going forward, and so you need the engineering and tooling and human resources to continue to produce these reports and to grapple with sometimes challenging issues, like: if an order overlaps from one time period to another, do you count it as a separate order, even though it's the same order but it just happens to overlap time periods? So those are the issues that crop up that you have to grapple with. It's not just a point in time where you reach the decision, I'm gonna publish this report; it's also thinking, six years later, am I going to have the resources to continue to produce this report on the timeframe that I've committed to producing it, without too significant of a delay?
Yeah, you know, there's two things I'd add. One is what we've seen with regard to transparency, for people thinking about it: it is a lot of work, the measurement is incredibly difficult, and it takes courage to put your data out there. But what we've seen in healthcare is that when the numbers are out there, we have seen improvement. We're still far from perfect, but we see improvement. And then the other aspect, when it comes to what we call disclosure, apology, and offer programs, or communication and resolution programs: these are programs where, when a hospital makes a mistake, they will tell the patient how the mistake was made and what they're going to do to fix it. They actually sit down with the patient and work out a legal settlement if necessary in these cases. And when hospitals started implementing these programs, people feared that liability would go through the roof, because patients and families would not understand how a mistake could be made and would just take them to court. But hospitals and health systems that have implemented programs like this have actually seen the number of their claims and their total liability payouts drop, because what they found is that patients and families actually appreciate the level of openness and communication that comes with owning your mistake. So we're seeing a lot more people try to implement programs like this, and when you survey patients and families, in fact, you find that they appreciate these types of programs. So that's what I always remind people about when we talk about transparency: yes, airing your dirty laundry, as people see it, can sometimes be hard, but trust the public to appreciate that they know we're not perfect, and trust them to know that what they'll appreciate the most is that we're trying to get better by owning it. And I would say that, just like creating a transparency report, working on transparency efforts in general is a long-term commitment. In order to provide
meaningful transparency, you should also be engaging with as many advocates and allies and experts as possible to identify what exactly can have an impact in that regard. You know, that's very reassuring to hear as we plan to publish our safety report; it's good to hear that people actually appreciate it. And that's also not to scare others from doing it; it's just not an immediate process. You don't wave a magic wand and a report comes out. It's labor-intensive, it needs a lot of consultation with advocates, and it needs proper investments and systems and everything, but it's really important. The idea of bringing stakeholders into this process is really interesting to me. Alan and David, I'm wondering if either of you have thoughts on how you've engaged stakeholders to make these transparency reports actually useful to them, or taken in that feedback? So in healthcare, a lot of hospitals are building what we call patient family advisory councils, and anytime there's an internal debate about the best way to put something out there for a patient, we will take it to this council and ask them: what is the perspective that the patient or family would want? We'll take information to them also even if there isn't an internal debate, but we've realized it's about getting their perspective to help us get it right, because we may think we know what they want to hear, and we're often wrong. So the key is to ask. Just as you said, we've got to ask them what they want, because that helps us build the information and the reports the right way.
Yeah, we do have those discussions, formally and informally, with third-party stakeholders, and one of the things I alluded to before is reporting national security demands by type. Right now, what we're allowed to do is report content versus non-content, and that tells part of the story, but not the whole story. And we've gotten some input over time that this would be truly useful. I think, based on the existing statutory parameters that we report under, I'm not sure that we can do that, but it strikes me as very valuable for the conversation that we're having. For example, last year, when Congress was considering whether to reauthorize what's referred to as the Section 702 program, it would have been useful to know how many of the content requests that we received fall under that category of 702 requests, versus other mechanisms under the Foreign Intelligence Surveillance Act where they can get content. So that's something I've taken feedback on, and it's one of the things that I think we agree would provide a little bit more insight, particularly as we see now that Europe is very skeptical of the US government, even if they engage in the same practices; they are skeptical of the US government's authority in this way, in part maybe because the US government has jurisdiction over so many US service providers in ways that other countries don't. And Spandi, it strikes me this is a big part of the process for social media companies as well. We've heard a lot about, or at least from what I've read and reported, civil rights advocates have driven a lot of these processes and really pushed for a lot of these disclosures. Is that something you've noticed in looking at how the transparency policies at Twitter and Facebook have evolved?
Yeah, definitely. I think companies, as time has gone on, have really begun soliciting the feedback of these groups a lot more, which is very important considering that their services are used by people all around the world from different communities and different cultures. So that's definitely a trend that we're seeing, and I think that these advocates are really important in identifying where these companies should go next in terms of transparency, whether it's for content takedowns or algorithms or ads. So I would definitely encourage companies to continue doing that. To build off of what Spandi just said, reports on government requests for access to user data did not exist before Google and Facebook and Twitter did, before there was user data being collected by tech companies. Sustainability reports on emissions did not exist before there were cars. And likewise, we believe our safety report is hopefully gonna inspire a new wave of transparency reports that did not exist before technology was enabling things to happen in the real world, from getting a ride to finding a place to stay to getting someone to hang a painting at your house to pushing a button and getting work, to everything else that is facilitated in the real world. So as more business models and companies emerge, these will continue to evolve as well. And Rana, I imagine this is sort of what's coming next for Uber. But Alan, you had mentioned that with this data out there, there have been tangible improvements to care, so I'm wondering if any of you have thoughts on how transparency reporting has actually resulted in improvements that consumers are seeing.
Yeah, I mean, in June 2014 we published a transparency report on email encryption and the use of transport layer security across providers. And I guess the important thing to know is that if you have one provider, let's just say Gmail on the one hand and Yahoo on the other, and only one of them is using transport layer security, it's the functional equivalent of a postcard with the content in the clear being sent, versus a sealed envelope. So we started publishing this data in 2014, and I think at the time what we saw was that 40 to 50% of inbound emails coming into Google and outbound emails were encrypted via TLS. We've seen that number now rise to 90%. So tangible improvement, tangible outcomes as a result of doing this. I think we came under some criticism at the time because we were publishing domains where the email was not encrypted, and the perception was that this was naming and shaming. I think our broader goal was to ensure that people could have confidence that when they're emailing pretty sensitive material, the providers they entrusted their data to are following best practices, and in 2019 I think people would be very skeptical about sending emails through a provider that isn't using TLS. So we've seen a salutary effect in that regard. I would say the same is true about the data we published on the use of HTTPS, encrypted traffic across the web: we've seen increases over time as a result of publishing data about the usage of encryption among the top 100 sites, for example. So we've seen some outcomes, I think, that have been driven in part by transparency, and that's useful. Again, that's reassuring to hear. I mean, we'll find out when we release our report later this year, but what we know is that reporting encourages more reporting, and every time there is a high-profile story, especially around sexual harassment and sexual violence, reports go up dramatically.
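[Editor's aside: the "postcard versus sealed envelope" distinction described above comes down to whether a receiving mail server advertises the STARTTLS extension, which lets the sending server upgrade the connection to TLS before the message body is transmitted. A minimal sketch of that capability check, using only Python's standard library; the `supports_starttls` helper and the example capability lists are illustrative, not something discussed on the panel.]

```python
import smtplib


def supports_starttls(ehlo_extensions: list[str]) -> bool:
    """Return True if an SMTP server's EHLO response advertises STARTTLS,
    i.e. it can upgrade the session to TLS (the 'sealed envelope')."""
    return any(
        ext.split()[0].upper() == "STARTTLS"
        for ext in ehlo_extensions
        if ext.strip()
    )


def check_live_server(host: str, port: int = 25) -> bool:
    """Connect to a real mail server and ask whether it offers STARTTLS.
    Requires network access; any hostname passed in is just an example."""
    with smtplib.SMTP(host, port, timeout=10) as server:
        server.ehlo()
        return server.has_extn("starttls")


# Example EHLO capability lists (illustrative values):
print(supports_starttls(["SIZE 35882577", "STARTTLS", "8BITMIME"]))  # True
print(supports_starttls(["SIZE 35882577", "8BITMIME"]))              # False
```

Note that STARTTLS is opportunistic: if either side doesn't offer it, mail typically still flows, just unencrypted, which is why publishing per-domain encryption rates created the pressure David describes.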
We work very closely with a lot of advocate organizations, including NSVRC and also RAINN, the Rape, Abuse & Incest National Network. This job is all about getting the acronyms right. They operate the National Sexual Assault Hotline, so anyone anywhere in the US who experiences sexual assault can reach out to RAINN for help and local resources. And when Dr. Christine Blasey Ford testified before Congress, they saw a 400% increase in calls to their hotline, for things that were already happening and that would not have been reported had the issue not received increased attention. This is something we know will happen, and we hope to inspire more improvements in actual safety measures. Acknowledging problems is just one step of addressing and preventing them, and that's what we're hoping to do here. I think people in the audience probably have some good questions, if we want to turn it over to them. Just a reminder before we get started: questions should take the form of a question. I'm sure you have great comments, but that's what our speakers are here for, so please ask them about their expertise. And in a moment we will have a microphone. I think this gentleman right here had a question. I think it's on. I'm Robert Shredder with International Investor. Third parties is what my question is about. How responsible are you, when it comes to sharing your data with third parties, to be concerned with their security? It's no help at all, no matter how secure your database is, if you send it to a pharmacy that's breached. So at what level do you insist that security protocols are in place for advertising agencies or whoever you might share this with, and how responsible would you be if they are breached?
So I think the advertising context is different, in the sense that we're not disclosing user data per se to third parties in that context. But your point more broadly is taken: when you are sharing information, even for purposes that a user has explicitly authorized, whether it's processing a credit card transaction or otherwise, you want to contractually oblige that third party to follow the data security protocols either that you are following or that you believe they should be following. That may be in the form of asking them to certify that they are following certain protocols or certifications, and that's a fairly common practice. I think companies take that responsibility seriously, because they know that in the event of a security breach or a security incident they can be held accountable if they aren't monitoring their third parties, if they're not doing the types of things that would constitute reasonable security. This is a real issue in healthcare as we continue to grow, and you're right, most hospitals in general have focused on getting security right within their own organization. But interoperability is taking off. I think one company has 50% of the patients in this country on one electronic medical record, and so the interoperability capabilities are actually quite good because it's the same operating platform. And I think this is something people continue to grapple with and we need to sort out: what level of permission do you require for someone to be able to get information? Let's say we have a famous patient in our hospital. If there's another hospital that wants to access this information, what are the protocols that we require?
There are some basic ones put in place. For example, that patient needs to actually be registered in your hospital at that time before you go requesting information, because otherwise you shouldn't need it. What level of permission do you require? These challenges are always gonna be here, but I think these are things that are being worked out as we continue to work on interoperability, and I think it's a big issue, yes. And I was just gonna add, it gets interesting when you think about the distinction between liability and things you would count. So for example, we have car crashes where a third-party drunk driver killed a driver and rider. That is not something that the Uber driver or rider, the Uber user, was liable for, but it's a safety incident that we would count and capture. I guess I'm curious, just jumping off of that: Alan, you talked earlier about this fear that if you gave patients more data, they might become more litigious. Is that something other panelists have run into, this idea that if we put more data out there, that's gonna expose us to greater liability, or sort of internal concerns within the company? I actually don't know of other industries that have put their data out in this regard, because we're not talking about just putting global reports out. This is actually telling a patient you made a mistake, so you're actually admitting it; you're making it pretty easy for a lawsuit to happen. And I don't know of other, I mean, I guess that's what happens with the airline industry when planes crash. They tend to pretty quickly own it and settle, but I don't know about other industries where it routinely happens when you're in the wrong. Well, I mean, I was mentioning before user notice. You think about providing that notice that the government is looking to obtain your data.
That can reduce the likelihood of liability in the future, because you're enabling that user to avail themselves of rights that they might not otherwise have or be able to exercise in a meaningful way. If you're not providing that kind of notice, it may be only further down the line that they discover they're the subject of a criminal investigation; they may be charged with a crime, and yet they haven't had the opportunity to protect their rights. That is just a broad example. The rules are what they are and the laws are what they are around user notice, but it's an example, I think, of where actually empowering people with information can, maybe in a counterintuitive way, reduce the likelihood of future problems. And what I would just add to that is, in some ways the liability piece should be irrelevant, because it's about doing the right thing. If you've made a mistake and you need to make someone whole, you should be doing that regardless of what the laws say. Knowing what happens with liability is of course important, but we should remember that if we're guided by our values here, it's about just owning the mistake and then making it right. Hi, Sharon Bradford Franklin with the Open Technology Institute, and thank you all for a terrific panel. You all have talked quite a bit about the value and importance of transparency reporting for accountability to the public, and even causing positive change, which I applaud, and I'm glad to hear you talking about it being the right thing to do and want to encourage you to continue talking about that. But my question, particularly I guess for Rana and David and Alan, is: as you look to try and encourage others in your field or other fields to take on this step where they're not doing so, to engage in transparency reporting, I would think there's also a case to be made that it's good for our company.
That it also makes good business sense for us. And one example that comes readily to mind, and David alluded to it, is that after the Snowden revelations, a lot of the tech companies said, we wanna be able to show the extent of government requests, because although the numbers may be fairly large, as a percentage matter it's a really tiny percentage of our customers, and we wanna be able to put that out there in context to reassure people. So that's one way in which it made good business sense for Google to do the right thing. So I'm wondering if each of you could talk about how you might encourage others in your field to take on this step, because it really makes good business sense for your organization. I can, and I talk about that all the time. A very good example here is the airline industry. Safety standards are very standardized across most airlines, at least within the US. You don't choose to fly a particular airline because you think it's safer than another. Safety is not something that companies should compete on. Safety is a bar that all companies should aspire to meet and outdo each other on, and hold everyone else accountable to. So there's definitely a case to be made about putting information out there to raise the bar on yourself as a company and on everyone else that operates in your industry. Yeah, I think opacity actually can lead people to make conclusions that probably are untrue, and it's in a company's interest to be transparent, if for no other reason than that they don't want people who are skeptical of what they're doing to draw conclusions that are untrue. And if you're a company that's handling user data from a large number of people, it's in your interest to be transparent, and in some cases to either let the numbers speak for themselves or, you know, enable a conversation that you couldn't have otherwise, because you may offer vague assurances but you don't have the data to back them up. I think it's about leading by example.
I just joined Hopkins a year ago, and I feel lucky to be part of it, but Rana and David are part of leading companies here, and I think if the companies that are seen as leaders are doing something, other members of the industry will always look to see how it's working. And if we all truly believe in transparency as building trust with our patients or our customers, I truly believe it will drive more business and better word of mouth. And again, that's how I think we can lead, and that's what we need to do. Any other questions? All right. It's almost a yes or no; this could be easy for you. Google's undergoing a big change in management. Do you have any reason to believe? No. Any reason to believe that it could affect any of these policies? No. Thank you. Quite a significant change; I don't know what else to say. I mean, I guess this is not so much about specific changes in management, but I'm curious if there are any takeaways. We're talking about you guys as leaders in transparency reporting and pushing other companies to do it. Have you noticed, in starting this process at Uber, that maybe Lyft is thinking about this more, or that Google's process is pushing these other companies? Are there new trends in corporate governance, or I guess more of a trend towards these sorts of practices? That's interesting. I would actually be curious what the response would be from maybe a smaller company in the internet space, whether they feel that these pressures exist, either because larger companies are doing these types of transparency reports or because they're hearing directly from folks that they want to know. I mean, TikTok is kind of a different animal here in the space, but you think about a company that can grow very quickly and have a large number of users and be handling a lot of user data.
It would be an interesting question, and I don't mean to punt this, but to pose it back to them: what types of pressures do they face now that there's an ecosystem and, in some sense, an expectation that's been built around companies that hold this type of data publishing these reports? Yeah, I mean, TikTok's such an interesting example, because they are so opaque in their content moderation practices, and it's created more suspicion about the company. Like, is it the Chinese government telling them what to do? And without that transparency there is, I think, as you mentioned, this idea of jumping to conclusions about what's going on at the company. We actually did a report a few years back on getting internet companies to do the right thing, and one of the aspects that we looked at was transparency reporting. We found that it often takes a leading company to start that practice, but then it will also take some sort of crisis for a particular company to decide that it's worth the risk and worth investing resources into creating that sort of transparency effort. So I think some smaller companies may not have the resources to take on those kinds of efforts, but some of them may also just have never really fallen under that scrutinizing eye, and so it may not be as top-level a priority as well. Any other questions? Oh, we have one. Since you brought up funders influencing content, what about the Sanders claims that he's being marginalized again like he was four years ago? Are your advertisers and funders influencing election coverage? Sorry, could you repeat the question? What about TikTok being influenced by the Chinese government; but what about the funders influencing election coverage? Like, I guess Facebook and Google have paid ads, but just generally marginalizing Sanders again like he was four years ago. I'm not sure if anyone on this panel works on content moderation directly.
Spandi can offer a few thoughts on this, but it might be a little outside of this panel's expertise. The question's about influencing election coverage and, like, skewing content. Content generally, because you guys brought up TikTok being influenced; what about the US companies being influenced by their funders? I mean, I think that does raise interesting questions. One thing we didn't get to was this idea of transparency around political advertising, and that's very new for Facebook and Google and Twitter, and something that I think social media platforms are grappling with, and it's not something that's mandated by the government. There are a few pieces of legislation that are trying to mandate that sort of transparency, but for the most part it's been a voluntary process, and the extent to which some of these companies are successful at it is debatable. But David, I don't know if you have thoughts, as Google takes on this process around transparency with advertising for political ads. Yeah, I mean, I think we're relatively new to this space, as are other companies. I think it's the 12th of the 12 transparency reports that we're now publishing, and we're, you know, trying to update the database of political advertisements that are housed there fairly often. Some folks have pointed at things that we can do to improve, and I'm sure we're gonna take all of those suggestions into consideration. There are really big challenges in the space, bigger probably than the panel right now can address. Just even, you know, focusing: is it right just to focus on ads that mention candidates, as opposed to focusing more broadly on issues? But then where do you draw the line on whether an ad is really focused on an issue or not? So there are certainly large challenges in the space.
We're trying to rise to the moment to address them, but I think this is gonna be one of those reports and one of those issues where everybody's gonna evolve, as a result not only of feedback that we're getting but also as we see, you know, different efforts not only to advertise in the political space but even to circumvent the rules of the platforms. Any other questions? Spandi, did you have thoughts on political advertising as someone who looks at social media? Yeah, I mean, I would echo what David said. It's very difficult to, you know, draw lines around what is a political ad and what is not, but I'd agree with what you said, that a number of the transparency efforts that are out there right now are a good start, but there's definitely a lot more to be done in terms of holding platforms accountable for how they use ads and how their ads are targeted and delivered. And I think, you know, that being, I guess, the 12th Google report points to the fact that transparency reports are evolving, and as these companies grow, there are new metrics to look at. And, you know, maybe before 2016, or 2012, no one really cared about political ads that much, or as much as we do now. So I guess I'm curious, for the panelists, if you have any thoughts in terms of, you know, how do you look for that next transparency report, or that next data point that you should be reporting to consumers? Like, is it an ongoing discussion with your teams, or, you know, does someone at the company bring the idea to you? How does that work? I think, like I mentioned before, a lot of it will be driven by the public and what they wanna know, and by, like, the existence of an ongoing crisis and a need for greater transparency as a result of that. Yeah, I would like to echo the need for greater transparency.
I can close by saying that, again, zero is not a good number, and reporting on serious and sticky issues, such as content moderation and how you influence the online experience, but also the things that happen in the real world and what we see, is gonna be uncomfortable, and companies and entities must push through that discomfort because it leads to the greater good. Well, if we don't have any other audience questions, do the other panelists wanna offer some closing thoughts? I would say that it's great that we're seeing a number of companies engage more in transparency reporting and transparency efforts these days, and I would continue to encourage them to engage with civil society and advocates to ensure that these efforts are more meaningful and more robust going forward. Yeah, I could just add that the general principle here is telling people what you do and what the outcomes were, and who can argue with that? I do wanna point out, it can be super uncomfortable to do, but it is the right thing to do regardless of the other ramifications, and we should just remember that on a principled basis. Completely agree. I'll also thank OTI for bringing transparency to the issue of transparency reports, because I think it's useful to have an opportunity to dialogue around these issues, to take stock of where we are, and also to invite folks to contribute their ideas about ways we can improve transparency reports, and even whether there are new ones that we ought to be producing; that's always a good input to receive. And I know that as a part of this panel, Spandi will be taking the takeaways and coming up with a white paper down the line, so that's something everyone should look for from the Open Technology Institute. I just wanna thank all our panelists for being here today, and thank you guys for taking time out of your busy days to come see us. That's it. Thank you so much. Thank you.