Hello and welcome. Thank you all so much for being here today. I'm Jessica Dheere, the director of Ranking Digital Rights, and all of us at Ranking Digital Rights and New America are so glad you could join us for Charting the Future of Big Tech Accountability. This has been another dizzying year for those of us watching digital platforms, with very little certainty about what's next. Since our last ranking, we have seen the fallout from the events of January 6 at the US Capitol, in terms of prosecutions and also what platforms can do to better prevent and address the abuse of their platforms. Vaccine disinformation became the next chapter in the global COVID pandemic, costing more lives and adding to the growing distrust of our institutions and platforms. New legislation in China, Europe, Russia, India, and elsewhere will have wide-reaching impact, as disparate visions of how to achieve digital transformation and preserve national sovereignty threaten a global, interoperable internet. Whistleblower leaks affirmed what so many of us in civil society have suspected for years regarding just how much Facebook knows about the harms it contributes to, whether by amplifying feelings of inadequacy among teen girls, facilitating illegal human and drug trafficking through ads, or not allocating enough resources to effectively moderate content in most of the world. Apple and Google announced big changes to the way they will use our data to target ads, perhaps better protecting privacy, but also solidifying their monopolistic positions. And Facebook, now Meta, has announced its plans to move us all into its metaverse. And since February of this year, tech companies, and a wide range of other companies too, have used their economic and information power to aid states in imposing sanctions on Russia's government and oligarchs in Putin's ongoing illegal and inhumane invasion and occupation of Ukraine.
And if that's not enough, just last week the world's richest man made a bid to purchase Twitter, throwing the company's future into doubt and reminding us all once again that these companies are not democracies or public squares, but kingdoms under the control of a very few wealthy individuals, whose accountability to their users, their shareholders, and to society often appears in the guise of slick PR campaigns, exorbitant lobbying spends, or as an act of benevolent, can we call it, techno-dictatorship. Today our esteemed group of panelists have kindly agreed to take the time to pause with us, after the launch of our 2022 Big Tech Scorecard last week, to reflect on the results of our ranking and the implications of corporate power in the tech sector for our collective future, and the future of big tech accountability. Every year at RDR we try to make sense of an increasingly complex array of company policies and practices and their potential impact on our fundamental rights to privacy and free expression, against the backdrop of world events that not only implicate but are also influenced by corporate power, and particularly tech power. Or as our company and investor engagement manager Jan Rydzak puts it, we link companies' transparency, or lack thereof, to real-life harms. We do this by evaluating the publicly disclosed policies and practices of the world's most powerful platforms and telecommunications companies according to a set of 58 standards that are grounded in the Universal Declaration of Human Rights and the UN Guiding Principles on Business and Human Rights. We then use those evaluations to calculate scores and produce our annual rankings. Next slide please. This year we have split what was formerly known as the RDR Corporate Accountability Index into two parts and renamed our rankings the Big Tech and Telco Giants Scorecards.
We did this to dive deeper into each type of platform we evaluate, to create more time throughout the year for us, as an applied research outfit, to work with partners to put our methods and data into action, and to give us not one but two moments in the year to hold companies to account publicly. We have also made several enhancements to our rankings this year. We've linked to our rankings in the bottom right-hand corner of the event page for this event, or you can go to rankingdigitalrights.org. We've expanded the highlights section of our individual company scorecards to include more context on each company's performance and added new metadata such as share structure. We've also added new ways to view our data in the Data Explorer section of our website. We have a new way of looking at our data, called lenses, that aims to offer insight into specific areas of interest such as algorithmic transparency and targeted advertising. You might also notice that RDR has a new look: we have a new brand, a new visual identity, and a new website. Next slide please. Before we turn to our incredible panel, we thought it would be a good idea to review some of the highlights from our recent scorecard. But first I'd also like to express our immense gratitude to our funders, without whom none of this would be possible. They're mentioned here on the screen and also on our website, so thank you very much for your support. I'd also like to remind you that RDR takes no funding from any of the companies we rank, that we may rank in the future, or their competitors. And with that in mind, if you find our work valuable and you're able, we kindly ask you to help make it more sustainable by making a donation; there's a donation button on our website. Next slide please. Let's get into it. Today we'll share a few of our favorite findings, but our data set gathers information on more than 300 aspects of policy and practice for both companies and their services.
You will no doubt find many others when you peruse this year's scorecard on your own. Again, you can find it at rankingdigitalrights.org. Next slide please. So here are the results, the top-level findings. The headline, unfortunately, is that big tech keeps failing us. For the sixth year, no digital platform that we've ranked has earned a passing grade. While we see incremental progress, this is no time for business as usual, and companies must do better to protect their users and the public interest. You'll see that US companies still dominate the top half of our ranking, with only Korea's Kakao breaking into the top seven and tying Apple for sixth place. You'll also see that the Chinese and Russian companies that we rank, along with Samsung and Amazon, round out the bottom half of the scorecard, though some of them do post significant score improvements. Next slide please. From top to bottom: Twitter took the top spot for the second year in a row, for its detailed content policies and public data about moderation of user-generated content, but it still has a lot of room to improve. Amazon ranked last again, but did post a notable score increase. The Chinese behemoth Tencent earned our lowest score among all social media platforms we rank on our standard asking companies to explain their processes for enforcing their own content rules, which is an area where many platforms don't do well. And Google, for the second year, saw its overall score decline, due to outdated policies on notifying search service users of content restrictions and on encryption for Gmail and Google Drive. Next slide please. We like to look at our data year over year. We're one of the only initiatives, or perhaps the only one, that collects this data longitudinally, and we compare from year to year to see who has improved, of course.
And this year we're pleased to see that 13 of the 14 digital platforms that we evaluated have made some progress since our last index, which was released in February 2021. The most improved was Yandex, Russia's search giant, on its overall score, although some of those improvements may be in jeopardy given recent events. Amazon, as I mentioned, also improved its internal governance mechanisms to protect privacy, giving it a respectable score increase. The least improved were Google, whose score declined, and Samsung, which basically maintained the status quo. Next slide please. It's also interesting to look at our year-on-year data to identify where most of the improvements are coming from, and with some exceptions, what we can say is that many digital platforms that are headquartered outside the United States have led year-over-year changes, both this year and in the previous year. The Chinese companies Baidu and Tencent both gained nearly three points this year, and as we mentioned, the Russian search giant Yandex gained 7.6 points thanks to policy improvements in all three of our categories, which are governance, freedom of expression, and privacy. We're disappointed, however, to see that human rights due diligence at these companies is still falling short. Where the arrow is pointing is a combination of the scores on one of our indicators, which measures disclosures around human rights due diligence. Unfortunately, while some companies do well in evaluating government regulations for the human rights impacts they may present, they don't do very well in evaluating the other three areas that we look at: enforcement of platforms' own rules, their algorithmic development policies, and their targeted advertising practices. Next slide please.
We also like to call out data breach protocols: we still aren't seeing a consistent response on these, and unfortunately, or interestingly, both Amazon, our bottom company, and Twitter, our top company, disclose absolutely nothing about data breach protocols. And all we ask is: does the company clearly disclose that it will notify relevant authorities when a data breach occurs? Does it disclose its process for notifying the people who are affected? And does it disclose what kinds of steps it will take to address the impact of a data breach on users? We think that this should be an easy place for companies to improve. Next slide please. Companies are also stonewalling on algorithms. On the left-hand side of the slide is our lens on algorithmic transparency, one of our new features this year, which combines several indicators from our governance, freedom of expression, and privacy categories to give us a snapshot of how companies are doing on a specific issue area. And here we can see that no company scores above 22%. In addition, on the right-hand side, you can see a graph, which you can also look up in the indicators section of our website, for a specific indicator on access to algorithmic system development policies. I think the outcome is pretty clear: only Microsoft earns any credit on this indicator, and it's very little. Next slide please. Similarly, we see little progress on targeted ads. In last year's index, the 2020 index, we debuted new indicators on both algorithms and targeted advertising, so this scorecard is the first opportunity we've had to compare against the baseline we established last year. And we're not seeing very much progress, especially given the amount of news and headlines that have been made about how algorithms and targeted advertising are both creating harms.
So for the second year in a row, none of the 14 companies we rank earned more than 50% of the possible points on targeted advertising indicators, and not a single company has announced a comprehensive human rights impact assessment of the mechanisms it uses to tailor ads to its users. We also see very few companies publish any data in their transparency reports about enforcement of their advertising content and advertising targeting policies, as you can see in the graph on the right, and this is something that we really believe needs to change. Next slide please. Finally, before we go into our panel, I wanted to talk a little bit about some of the ways in which we're taking the insights from our data and turning them into action. Along with our scorecard this year, as we've done in the past, we've published three companion essays that take our data and frame it from the perspective of actions that we think we ought to be taking to advance the movement for big tech accountability. The first is to ban dual-class shares: our company and investor engagement manager, Jan Rydzak, has written a piece about dual-class shares and the disadvantages these share structures create for shareholders, and it also outlines a lot of the work that RDR and others have been doing with investor coalitions to raise issues of human rights under environmental, social, and governance frameworks. In the middle, we've identified a focus on ad tech, drawing on and building on our 2020 It's the Business Model reports, where we also talked about ads and the need for databases. Nathalie Maréchal, our policy director, has written an essay about how governing ads better would actually help us better govern the internet. It's an excellent read and frames a lot of the work that we will be doing at RDR.
And then finally, our third companion essay, written by Jie Zhang, is about company engagement, or the lack of it, from the Chinese companies. We rank three Chinese companies, none of which has engaged with us during our feedback process. Our research methodology incorporates a significant amount of company engagement, sharing preliminary results and asking for feedback and sources from the companies we rank, but to date we haven't heard anything from the Chinese companies, and she gives us some very good reasons why. I'll also just note that the only other company that we haven't heard from this year is Google. These are some of the ways that RDR sees the future of big tech accountability, but I'll stop here so that our panel can dive into this more. And now I'll turn it over to Dr. Nathalie Maréchal, RDR's policy director, who will introduce and moderate our panel. Nathalie is an internationally recognized expert on digital rights, corporate governance, and corporate accountability. She has built our policy team over the last year from the ground up, and she was the lead author of RDR's It's the Business Model report series from 2020. She has testified before the US House of Representatives and the US International Trade Commission, holds a PhD in communication from the Annenberg School at the University of Southern California, and lives in DC. I'm really pleased to introduce Nathalie and to let her take it away.

Thank you, Jessica, and thank you everyone for being here. Let's jump right into it. We have our panelists' full bios on the registration page, but I will tell you just a little bit about them. Sarah Couturier-Tanoh is an expert in corporate research and shareholder engagement. She leads dialogues with Canadian and international companies to advance ESG issues, including human rights, decent work, and corporate lobbying. She has published several issue briefs on current shareholder and policy topics, drawing insight from her background in non-financial auditing.
Jesse is a co-founder of Accountable Tech. He has a decade of experience in political communications and issue advocacy, including serving as the foreign policy spokesman for the Clinton 2016 presidential campaign, where he was part of the team managing the response to Russia's information warfare operation. Chris is president and CEO of Public Knowledge. Before becoming president and CEO, Chris was the vice president of PK from 2012 to 2019, leading the organization's day-to-day advocacy and political strategy on Capitol Hill and with the administration. Katarzyna Szymielewicz is an expert in human rights and technology, a lawyer, and an activist. She's a co-founder and the president of the Panoptykon Foundation, a Polish NGO defending human rights in the surveillance society, and one of the leaders in corporate accountability in the EU. Last but certainly not least, Sophie Zhang became a whistleblower after spending two years and eight months at Facebook. During that time, she tried, but was not successful in, efforts to fix the company from within. She personally caught two national governments using Facebook to manipulate their own citizenry, while also revealing concerning decisions made by Facebook regarding inauthenticity in Indian and US politics. As you can see, we have a really illustrious panel here that has been deep in the trenches of corporate accountability from a variety of angles, and I'm really excited to chart the future of our movement together with you all. Jesse, let's start with you. You're the co-founder of Accountable Tech, a campaigning organization working to bring about long-term structural reform to tackle the existential threat that social media companies pose to our information ecosystem and democracy. So what led you and your co-founder Nicole Gill to focus on this issue, and what do you think this movement has accomplished so far?
Yeah, thanks so much for having me today. I think that you hit the nail on the head in your question when you say "an existential threat," because that's really how I've come to view disinformation and the current information ecosystem that we live in, in which there is no shared consensus reality, no shared baseline of facts. Social media platforms certainly didn't invent disinformation, or polarization, or racism, or extremism, or echo chambers, but they serve as a unique accelerant on each of those fronts. And as the fabric continues to fray, and we lose the ability to have cool-headed conversations, to have policy-focused conversations, to have fact-based conversations, I think democracy is, day by day, at more and more risk. And so we felt that this was an issue area where people are starting to recognize that on every issue where they want to see progress, disinformation and the information ecosystem serve to thwart it, because it is this intersectional issue. It's very hard to win arguments, or have a functional democracy, or have productive conversations if you cannot even communicate facts, if you can't reach people, if everything is being filtered and warped through the lens of a few dominant platforms which are built to optimize engagement, which often means amplifying the most toxic things on the platform, micro-targeted to each person to play on their personal biases. So you have this dynamic that is simultaneously global and ubiquitous, but also unprecedented in how precise and personalized everything is. And so we have done everything in our power, since we thought about this and stood this organization up, to try to fight on all fronts, because there is no silver bullet for this.
These are interrelated problems, but I do think we've pushed for direct corporate accountability, trying to call out and educate the broader public on some of the fundamental flaws that we're worried about with the dominant social media platforms. We've pushed for legislation and education in the US. We've worked with our friends in Europe, and I'm sure Kasia will get more in depth on the DSA and DMA that are just making their way through Brussels, but it's really exciting to see how comprehensive those proposals are in addressing some of the fundamental harms here. And we're seeing progress at the state level as well: just yesterday, the age-appropriate design code in California advanced. So I think, even in the level of fluency with which members of Congress are talking about these issues compared to where they were a few years ago, the progress has been really significant. There's certainly an enormous amount of work to do, but I do think we're making progress, and I'm very grateful for everyone on this panel, yourself included, for helping to drive that.

Thank you, Jesse. So, Kasia, before we get into the details of the DMA and DSA: how does what Jesse said compare with your experience in Poland and Europe more broadly? Can you reflect a little bit on where our movement has taken us so far?

Yes, I will do my best in the short time we have. Truly, I feel we live in an interesting time for regulating big tech. Over the last five years in Europe, we have witnessed increasing political support for deep reform. If you go back to what we heard from European Commission leaders like Thierry Breton and Ursula von der Leyen at the beginning of their term, they clearly attacked the very business model behind big tech: the engagement-based business model and advertising technology were clearly set as targets for regulation.
Being big itself has been seen as a risk, and something that you should not only react to with a number of pro-competition, pro-consumer cases, but even preempt with proactive regulation. That movement was supported by whistleblowing and by cases like Cambridge Analytica. We still remember that, right? We have had new cases since then, but the Cambridge Analytica affair has been, I think, pretty influential here in Europe in informing the movement. So on one hand, we have seen an incredible movement of policymakers towards a critical agenda. On the other hand, if you look at the goals set for the reform we are witnessing today, the DMA and DSA together as a package, well, there are obviously two legs. One is people's empowerment through new tools and safeguards, and I will come back to how much that is worth. On the other hand, there is always the economic, liberal narrative present. And no surprise that the deeper we go into the reform, and the longer it takes (it took two years to work out the details), the bigger the impact of the market logic. It's sad, but also realistic, to say that after two years in the making, the regulation has to a great extent been influenced by big tech's lobbying, and the most revolutionary aspects of it, the biggest promises, have not been implemented in the end.

So the EU is obviously much further ahead than the US in regulating big tech, having recently finalized both the Digital Markets Act and the Digital Services Act, though we won't see the final text of the DSA for a bit longer, and maybe you can help us understand why that is, because I know that for a lot of our audience, and for me, policymaking in Europe is a bit of a mystery, so maybe you can help us unpack that a little bit. Let's talk about these two pieces of legislation and how they change the fight for big tech accountability, and maybe give us a short preview of what's coming next in Brussels.
Yeah, truly it is complicated; honestly, we as civil society lobbyists understood what was really going on there only after being inside the process. Very long story short, there are three key bodies involved in the process: the European Commission, responsible for proposing the reform; the European Parliament, which is usually seen as the most progressive body, at least because of its broad representation of various societal concepts of how to regulate, so we always have the left and the middle and the right, with the middle being the strongest voice, with the Christian Democrats still more or less dictating the mainstream; and the Council, which is the representation of governments, and again here a whole variety of opinions and positions, with the politics of the moment being incredibly influential. Needless to say, the conflict in Ukraine has opened certain gateways that seemed closed, and closed other issues that were important a year ago. So this is all pretty dynamic. We have to observe it, and we cannot always do so, because part of this process is extremely non-transparent. We have to watch the Council, and then the trilogue: the trilogue is the moment when the three come together to negotiate the final shape of the legislation. These meetings are mainly technical meetings where only experts sit, and they are not expected to leak the decisions, which usually happens anyway. So we can predict what will be in the final legislation, but officially we have to wait a month longer, maybe three weeks longer, after the end of official negotiations to see the final text, after the technical people sit down and basically put on paper what has been discussed behind closed doors. So whatever we say these days is based on leaks, and on assurances that we have received from various stakeholders who are more or less public about the process.
So basically, the meeting where they negotiated lasted until 2 a.m., well past midnight, but around midnight Commissioner Breton had already published a whole thread on Twitter explaining what had been won. So the lack of transparency does not prevent the PR from happening as usual. This is how it goes: people comment on this reform without really having seen the text yet.

Okay, so maybe it's best to hold off on a deep analysis until we actually see the text, then. So what about the US? Chris, what's the state of play here, and what can civil society do to pressure policymakers on this front? Might we actually achieve some degree of tech accountability through legislation or regulation in the US this year? What do you think?

It's challenging. And congratulations, Nathalie and Jessica, on the latest report; it's fantastic work. You know, I'm optimistic in the long run and pessimistic in the short run. As you noted, we're further behind Europe in really understanding where we want to go with accountability regulation in the US, and so I think we need to pick up the pace. Unfortunately, what we've seen in the United States and in Washington so far is a lot of focus on small, one-off fixes to specific things that legislators have seen in the news. Some of these we at Public Knowledge support, and I know others here also support them, but what we aren't seeing is a real framework approach like we're seeing in Europe, and in the long run that's where I think we need to be. And so there is hope in some of the one-off proposals that we're seeing around privacy, around competition policy and antitrust, and around algorithmic oversight.
Hopefully those will form the basis of how people understand what accountability should look like, and we can move towards more of the framework approach that we're seeing in Europe. That's my hope, but there are some real challenges that we face. Unfortunately, I think some of the biggest challenges we face in the United States are really political and ideological, given the atmosphere in Washington these days. The ideological divide means that it's very difficult for folks to agree on things, and that's seeping into a lot of the debate. So, for example, in the United States, we know that in tech accountability we will face the challenge of making accountability work with the First Amendment and First Amendment protections. But unfortunately, over the last few years we've also seen a real breakdown of the consensus in the United States on what the First Amendment means, what free speech and free expression protections are, and what they could be. A lot of that comes out of broader political fights that are really not related to tech policy per se, but unfortunately, it's impacting how people view harms online and what solutions they'd like to see. We also run into the challenge that, because so many of the companies that we want accountability from are based in the United States, there's national pride involved. And so, often when proposals are put forward, you'll hear folks say, oh, we can't do that because it will hurt the US, or it will hurt US companies and our competitiveness broadly. I think that's very short-sighted. And hopefully we can, as civil society, help build back our consensus on what the First Amendment and free expression are and should be, and also build consensus on a basic understanding of what accountability looks like. That's difficult.
But it's really the challenge in front of us: to bridge some of the ideological divides that we're seeing in our country right now and build a conventional wisdom around some of those ideas. If we can do that, then I think we can get to more of a framework approach. In the meantime, I think we're going to see a few smaller bills go forward: things that promote competition, bills around self-preferencing and non-discrimination, hopefully bills around platforms like the app store, or broader interoperability. There's going to be a push this summer around privacy, but whether the parties can come together and agree on what those enforcement structures look like is unclear yet. So we have a lot of work to do, and I think as civil society we have a lot of work to help folks who may differ on various issues realize that they have the same thing at stake here: the right to freely express themselves and have safe communications online, even when they disagree with each other. That's the real challenge in front of us as civil society.

Yes, and I'm glad to hear you say that you're optimistic, at least in the long run. You know, I have a lot of conversations with people who are very pessimistic over all runs of time, and I personally think that as activists we choose to be optimistic, because if you don't have that optimism, that belief that you can win, you lose the will to fight. And I think the most dangerous thing for any social movement is to give up the idea that you can win before you've even tried. Thank you. Shifting gears a little bit away from civil society advocates to different kinds of change makers.
Sophie, I'm particularly glad that you could join us today for this conversation, because you're one of the very few people out there who has worked inside a big tech company, has left, and can speak openly, because you're not bound by a non-disparagement agreement, since you turned down a pretty hefty severance package from Facebook. Tell us why you decided to speak out about your former employer.

Thank you, Nathalie. So just to be clear, I am bound by one non-disparagement agreement, which I signed when I joined Facebook. I refused the one offered when I left, so I would be breaking one rather than two. Ultimately, Facebook hasn't sued me because it would look terrible for them and would also be an admission that everything I'm saying is true; I would attribute it to that rather than to me turning down the money. So anyway, I worked at Facebook for two and a half years. In my time there, I caught two national governments red-handed breaking Facebook's policies on vast scales, setting up fake personas purporting to be their own citizens to mislead, harass, and otherwise repress their own people. These were very clear-cut cases in which there was absolutely no moral nuance; no one was defending these cases on the merits. In other cases you can say there are real questions at stake, about what the right decision is, that are hard to know for certain, but none of that was the case here. And Facebook still took almost a year to act in the case of Honduras, and more than a year to act in the case of Azerbaijan. Ultimately, I was doing this only in my spare time. This wasn't my actual job; this was no one's actual job. I had no special training in this area. I'm certainly not a super genius.
And the reason that I, some random person out of grad school at her second job, was able to catch two national governments red-handed with no training, no expertise, and without being a genius, is simply that they were the low-hanging fruit. No one had bothered to look at them before, so they could be lazy. Ultimately, you can't fix a problem until you know it exists in the first place. And right now, on many issues, only Facebook knows precisely what is going on within Facebook the platform. I don't think it will be a surprise to anyone to say that Facebook is a company out to make money. And at the end of the day, we don't expect Philip Morris to have a division that tries to make cigarettes less addictive, or a division that reimburses Medicare every time someone gets lung cancer; the very idea is a bit ludicrous. But imagine a world in which Philip Morris knows that cigarettes give people cancer, Philip Morris is the only one who knows, and Philip Morris is the only group that has any chance of finding out. In that situation, I think it would be very important for someone from within the company to come forward. And that's precisely what I did, and what I'm still doing today. Well, thank you for your whistleblowing and for your activism, Sophie. One thing you said to me when we talked last month that I thought was really interesting, and that I'd love to hear you talk about some more today, is that when you brought these concerns to your managers, they used the rhetoric of users' rights to resist taking action against these people, including government officials, who were using the platform to hurt other people in various ways.
It's true that in the early years of the digital rights movement, we were really focused on protecting free expression and privacy for platform users, and perhaps not thinking enough beyond that, though I think at this point the conversation has caught up. What kind of messages would you like to see from civil society groups? What should we be sending to companies and to policymakers, and what should we be asking for? So, just to first provide context: when I brought these cases up to leadership at Facebook, often there were concerns about taking precipitous action without warning people first, because, fundamentally, users' rights have been about protecting users from the platform. But that can become a problem when users themselves are the threat. Both are valuable initiatives, in the same way that, for instance, pro-police advocates and police-reform advocates are both valuable initiatives that are naturally at odds. Giving suspects more warning before arresting them, such as the Miranda rights, has in some cases reduced false confessions and helped protect people from the police, but it has also made it harder for the police to catch people, and that's the analogy I'm going to use very broadly here. An additional facet is that at Facebook, the people who judge cases, the policy staffers, are the same people charged with lobbying governments and political officials and essentially getting them on Facebook's good side, which is a very different paradigm from how enforcement usually works.
In the United States, if a judge ruled on a trial and it turned out that they went for weekly lunches with the defendant, they would be required to recuse themselves; at Facebook, it would only be a problem if they didn't know the defendant. I'm being a bit flippant, but I think that gets my point across. And so Facebook had incentives to protect the important and influential from its own systems, and it wrapped that in the rhetoric of not taking precipitous action, of giving people fair warning, et cetera. And, like you said, that goes back to the initial viewpoint on accountability for tech platforms: that it was about accountability for the platform and protecting users from it. I've read the criteria that RDR uses, and my understanding is that most of it is focused on the platform's own transparency about what actions it takes against users and what protections users have, in terms of privacy, in terms of enforcement, et cetera. So right now it doesn't do much coverage of the other facet, which is protecting users from other users, protecting users from violations of platform policy that are not being enforced against, which I believe is equally important; and right now there is not much transparency or visibility into this. So frankly, something that I believe would be a good idea for RDR and other similar transparency groups would be to essentially do red-team-style penetration tests. These would have to be done carefully, because if you go about it wrong, I'm sure Facebook will find an excuse to ban you. But in principle, setting those sorts of issues aside.
If you want to know, for instance, how good a company is at taking down fake accounts, the best way to do it is, in controlled test circumstances, to set up your own fake accounts and see how many of them are actually taken down by each company. Suppose we set up 100 networks of fake accounts each on Facebook, Twitter, Reddit, TikTok, et cetera, and Facebook took down 10 out of 100, TikTok took down 1 out of 100, and Twitter took down 2 out of 100: they're all terrible, but Facebook is the least terrible. I'm making up these numbers, obviously. The same approach could be used if you're concerned about hate speech: in controlled circumstances, post hate speech and see what percentage of it is taken down. You could test responses to user reports by creating violating posts, having people report them, and seeing how many are taken down. If you're concerned instead about social media overreach, about taking down posts that aren't violating, you could take the exact same approach: make posts that aren't violating, maybe a bit boring and unclear, and report them; perhaps report similar posts on different sides of the political spectrum, if you're worried about political bias, and see how many are taken down incorrectly. People have done experiments, and there's anecdotal discussion of these sorts of issues, but I don't think there has been any systematic approach to it, and I think that would be extraordinarily valuable, because right now a lot of people are talking past each other based on anecdotal evidence, and when you have two billion users on a platform there will be anecdotal evidence for anything. Those are some really great suggestions for going beyond RDR's current research methods and approach. Obviously, this kind of indicator-based research on publicly available documents is far from the only research method out there.
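As a rough illustration of the audit arithmetic Sophie describes, here is a minimal sketch that tabulates takedown rates per platform and ranks them. It uses only the explicitly invented numbers from her example; the function names and counts are hypothetical, not real measurements or a real methodology.

```python
def takedown_rate(taken_down, seeded):
    """Fraction of seeded test networks the platform removed."""
    if seeded == 0:
        raise ValueError("must seed at least one test network")
    return taken_down / seeded

# Illustrative counts from the talk (explicitly made up):
# (networks taken down, networks seeded) per platform.
results = {
    "Facebook": (10, 100),
    "TikTok": (1, 100),
    "Twitter": (2, 100),
}

# Compute each platform's takedown rate.
rates = {platform: takedown_rate(down, total)
         for platform, (down, total) in results.items()}

# Rank platforms from most to least effective at removal.
ranking = sorted(rates, key=rates.get, reverse=True)
```

With these made-up inputs, Facebook ranks first at a 10% takedown rate, which is exactly the "all terrible, but least terrible" comparison the systematic version of such a test would make possible.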
And our team is very much thinking about how we can expand our current arsenal of research tools, and I hope we can continue talking about this in the weeks and months to come, Sophie. Now Sarah, you come from a different perspective than Sophie, working as an investor and an advocate, and of course one of the themes we highlighted in the scorecard is the growing role that investors are playing in tech accountability. Sarah, my question for you is: what's the business case for investors? Why do investors care about human rights in the tech sector, what strategies can they use to hold companies accountable, and what strategies have you specifically used to this end? Thank you, Natalie. It's true that when you put these two terms, investors and human rights, in the same sentence, the general public often raises a brow, because most people do not really see investors as allies in the fight for human rights or for democracy in general. But investors, because they are the owners of the companies in which they invest, are in a unique position to push companies in certain directions. They can leverage their ownership and power, such as their voting rights, to do that, and that's exactly what we do at SHARE: we help investors steward their assets in ways that contribute to positive social and environmental outcomes. And while it's true that the idea that profit should be the only outcome investors look for when making investment decisions is still very prevalent, there is a significant portion of investors, especially institutional investors, who agree on the materiality of other types of externalities, including social and environmental outcomes. And while some investors base that assessment on moral values and ethics,
most investors believe that social or environmental impacts can present risks for the companies, and sometimes for economies and societies, and therefore these risks should be managed. So let's take the example of human rights in the tech sector. I must say that it is a fairly new area for most investors, and we see these risks as emerging. There is a growing understanding that we need to pay attention to the way some companies, because they have a very high influence on society, may impact human rights and democracies, like the big platforms for instance; and that is also the case for companies that rely on the collection and exploitation of personal data, including facial recognition, like Google for example. So I can take two examples to illustrate what investors can do to support the fight for human rights in the tech sector. The first example is about Meta Platforms. It is clear that this company has a human rights problem, and there are existing human rights risks and probably new risks to come with the development of the metaverse, for instance. That's the reason why we co-filed a shareholder proposal with other investors, including Arjuna Capital, asking the company to conduct a human rights impact assessment on the metaverse. This proposal will be voted on at the next AGM, and basically the rule is that if a majority of investors vote in favor of a proposal, good practice is that the company should implement it. Now, Meta Platforms is a bit strange, because as investors we believe these human rights risks are amplified by the company's structure, which concentrates most of the power in Mark Zuckerberg's hands, since he has the double function of CEO and chairman. This means that there are no real checks and balances within the company, and those are essential in every company to ensure that the management takes appropriate decisions and that the board serves the best interests of shareholders.
In Meta Platforms' case, shareholders' voice is not heard. The management and the board have failed on many, many occasions to address shareholders' concerns, especially on human rights and governance matters, including when shareholders have voted in majority for some shareholder proposals. So, approximately two months ago, we convened a group of 15 investors, including SHARE, that collectively represent 2.7 trillion dollars in assets under management, and we wrote to the company and asked that they implement certain governance reforms that would strengthen shareholders' rights, that they not renominate Peggy Alford and Marc Andreessen as board members, and that they nominate two truly independent directors instead. The company ignored our calls, so the next logical step for us was to recommend that all shareholders vote against those two directors, to send a clear signal to the board and the management that change needs to happen, and needs to happen now. Meta Platforms' AGM will be at the end of the month, so we'll see the result of this vote. Usually we consider this kind of vote a success when more than 10 to 15% of shareholders vote against directors. I have another example, with Google, if I have time. Do I have time? Yes, I do. Okay, so I'd like to use the example of Alphabet. With the support of the Ranking Digital Rights team, we designed and filed a shareholder proposal asking the company to conduct a human rights impact assessment to identify and address the potential human rights risks that would be created by Alphabet's new advertising system, called FLoC. The company cancelled the implementation of FLoC and decided to implement another advertising system, called the Topics API, instead.
We had a call with members of the leadership team of Alphabet, and they said that they cancelled FLoC because of negative feedback they received from civil society actors, experts, and also investors, thanks to this proposal. So these kinds of proposals and communications between shareholders and companies really help to amplify the voice of civil society actors. We agreed to withdraw the proposal, and in exchange the company agreed to meet with us twice between now and October, and to include members of the Ranking Digital Rights team in those conversations. And we hope that with the presence of these experts, we will be able to move the needle. So I think that what we're doing with Meta Platforms and with Alphabet illustrates well some of the tools we have as investors to move the needle and to support civil society organizations pushing for better human rights in the tech sector. Our impact is meaningful but modest; in these circumstances, though, I believe all hands on deck is the necessary approach, and investors should play their part. Thank you, Sarah, and I want to remind the audience that we welcome your questions; you can submit them using Slido, which is the box located to the right of the video. I'm looking forward to your questions. And I am going to start with an audience question for Sophie. Sophie, why do you think America focused more on the whistleblowing from Frances Haugen and what she found, versus what you identified? Given that you blew the whistle first, why do you think her whistleblowing had more take-up in the public discourse in the US? I'm not a public relations expert, so this is just personal speculation, of course, not expertise. My guess is that it's a combination of factors.
First, Frances spoke to issues that were more broadly interesting and intriguing to Americans, such as the teen mental health crisis, which I frankly think is more relatable to most Americans than the abuse of Facebook by dictators in Honduras or Azerbaijan. Even when I came forward about decisions made in the United States, that was mostly a sideshow which did not get much pickup. The second aspect I'd point to is that, frankly, I was probably pretty naive when I came forward. I thought I would just go out there and talk to everyone, and they could decide on their own whether to listen to me or not. Frances took the more proactive approach of getting PR support, which frankly was a lot more effective than what I did, which is why PR people get paid in the first place, I suppose. I mean, right now it's a bit too late for me, and being essentially unscripted and doing everything myself is essentially my brand now, so I'm running with it. I do find it a bit funny how some people criticize Frances for being too prepared and poised and scripted, and then they turn around and look at me and say, you can't trust her, she stutters, she has an accent, she's not prepared enough. Ultimately, some people will criticize the messenger when what they dislike is the message itself. Yeah, I think that's absolutely on point.
So, in her presentation Jessica talked about a lot of the big dramatic events of the year that implicate big tech accountability, and we probably don't have time to cover all of them today, but one thing I really do want to talk about is Russia's invasion of Ukraine. Obviously big tech is not responsible for Putin's regime or the long history of Russian imperialism; that's not something we're going to pin on big tech platforms. But they are nevertheless implicated in how this conflict, this invasion and brutal occupation, is playing out. And Kasia, I know you're quite close to the situation, being in Poland and active in your Eastern European activist networks. What can we learn about how big tech and tech companies operate today from their recent actions in Ukraine, but also in Belarus, Russia, and the broader region, and how should that influence our collective advocacy agenda? I would say that in terms of our agenda, the civil society agenda, including certainly what Ranking Digital Rights has been saying for ages, we do not need to correct anything. We have been saying this from the very beginning of the conversation: the business model is the problem, and the business model needs to change. The problem is policymakers. Even when they say they are ready to regulate, as they have said in the EU, where they have declared war on big tech's abuses, they're still not exactly ready to attack the core of the business model, which is based on people's engagement, on exploiting users' attention, on making money from behavioral observation.
If we don't attack that, we will not change the machine behind disinformation, or the information war that has escalated nowadays in my part of Europe. So it wasn't good news for our movement when we saw, in the first weeks of the war, that everybody, including governments, basically targeted big tech as the solution, asking them to clean certain disinformation agents from the internet, to block certain accounts, to block certain people or Russian agencies from speaking publicly, as if that were a way to solve the problem, while we all know that the solution lies much deeper, in the engine of these platforms. As we go into that problem, I can quickly indicate what we are hoping for in the DSA that might prove to some extent useful in solving it, though not radical enough. First, and this is also very interesting in the context of what has been said today, we have a much more robust risk assessment mechanism in the DSA, meaning that platforms themselves will be expected by the regulator to self-assess the risks caused by their business model, including the way they target us, their recommender systems, and the impact of these algorithms, their moderation practices, and their targeting mechanisms on democracy, public health, cybersecurity, everything that matters. If they do these risk assessments right, we will no longer need whistleblowing. Obviously, that's just a joke: I know they will not do them well enough, because they have no interest in doing them well enough. But at the same time, we have a European Commission invested with enforcement measures, able to force better risk assessments, and, more interestingly for us here, we have new rights for civil society and other independent experts, including, I hope, researchers, to demand access to data about all these mechanisms that sit right inside the large platforms.
So hopefully we will be able to question risk assessments when they are not done properly, and demand real data about how, for example, social media recommender systems or targeting algorithms operate, what types of data they take into account, what types of organizations target big tech's users, and so on. Hopefully this is a foot in the door for us in Europe, and hopefully globally as well, to demand more accountability. Finally, again not radical enough but an interesting measure: there will be limitations on how big tech can target people. In Europe, we wanted to essentially prohibit the use of observed data about humans, because we believe that people would hardly ever authorize behavioral observations to be used against them, to manipulate them with sponsored content in general. Unfortunately, that proved too radical in the debate we had in Brussels, but what we got is a partial ban on the use of sensitive data, including observed sensitive data, and of any data about children. So, again, not radical, far from what we wanted, but a foot in the door of changing the most toxic aspects of that business model. So you're referring to a ban on surveillance advertising, which is something that I know Accountable Tech and Ranking Digital Rights both support. Now, here's a question from the audience, and I think either Chris or Jesse could take it: would you have a sense of how American lawmakers are viewing the deeper European reforms that Kasia was just talking about? And what would it take to get US lawmakers to move in that direction here? A question for either of you, and if the other one wants to build on what the first one says, please go for it. I'm happy to jump in first, and then you can say something more eloquent after me, Chris.
But I think we're trying to do some education around the DSA and DMA right now, because frankly, a lot of lawmakers' reaction in the US to the DSA and DMA is not really knowing what they are. And I think Chris alluded earlier to the sort of gut, reflexive opposition that we still have here in the US, especially when the Europeans are regulating our great American companies; there's an antiquated sentiment in Washington that it's their role to jump to the defense of big tech's bottom line. But we put together a memo, and I'm happy to circulate it to the community afterwards, that really runs through this. What I find most interesting is that the DSA and DMA read to me like an omnibus package of some of the best pieces of legislation that are before Congress. So today the Senate is marking up the Platform Accountability and Transparency Act, which would enshrine a lot of transparency mechanisms similar to those included in the Digital Services Act; that's a bipartisan bill that Senator Portman is supporting along with Senator Coons and Senator Klobuchar. Risk assessments and independent auditing are central to the bipartisan Kids Online Safety Act that Senators Blumenthal and Blackburn have introduced. And, I won't run through the DMA here, but the DMA shares a lot of qualities with the antitrust bill that Chris alluded to earlier, which takes direct aim at self-preferencing and other anti-competitive abuses in digital markets.
I wish that we were further along, and that we had more of a framework, as Chris said, a sweeping, all-of-the-above approach that takes a comprehensive look at digital markets and how we need to rewrite the rules. But the other point that I make to folks on the Hill is that if we don't make the rules, the rules are going to get rewritten without us. So I hope that this is, if nothing else, a major impetus for Congress to get their act together and finally push some legislation across the finish line after years of talking about it. That was well put, Jesse. I'll just add that, for better or for worse, we may need to advocate for our policymakers in Washington to build on the studies that have been done in Europe. We've had an investigation in the House of Representatives here in Washington, and it was excellent, but it looked purely at competition harms around tech accountability. There's much more work we need to do to build on that, to look beyond competition harms to actual consumer harms and other threats. So, as I said before, I'm encouraged in the long run. The hope is that as policymakers learn the details of what's happening in Europe, they will see that many of the harms they're concerned about in the tech sector are being looked at, and they'll hopefully find interest in American-style approaches to addressing those challenges. We're already seeing the Federal Trade Commission, for example, starting a proceeding to look at surveillance advertising and at whether there could be a ban, should be a ban, or something short of a ban.
These sorts of analyses and studies are important. This is why we've called for years for an expert digital regulator for tech platforms, because we're just not seeing Congress keep up with the pace of technological change and the changes in the marketplace. So while there is increasing interest, I would hope that the work at the FTC, or the empowerment of an expert regulator, could go a long way toward creating the sort of analysis of the marketplace that our policymakers in the US will trust, rather than their feeling that the European analysis is somehow a threat to American companies. When we hear American legislators talk about digital harms, they're often the same ones they're seeing in Europe, but then somehow this protectionism comes about, and we just have to find our way around that. Natalie, I think you're muted. I have a question, just a follow-up to what Chris just said. It would be extremely helpful for the debate we have in Europe to gather more evidence from the industry about how alternative, more ethical business models play out in practice. Here in Europe there is this Stockholm syndrome we observe, especially with electronic media, who for ages have been critical of what the big tech business model demands of them, driving the quality of journalism down and making electronic media more and more economically dependent on clickbait, on the sensational, emotional content, everything we rightfully criticize, especially in times of information war. But at the same time, nobody seems to believe that the economic alternative is viable, that we could move to contextual ads, or to profiling people based on their consent, as if there were no economic evidence to back these claims. It's very difficult for us in civil society to come to industry and say, hey guys, we know better, we will now tell you how to do your business.
So it's more likely that we just say what the red lines are on the civil society side, what safeguards and prohibitions we want business to observe. It is ethical and correct when we say so, but it is not extremely efficient if you want to convince policymakers to execute a ban. So any reliable evidence coming from the US backing the case against surveillance advertising would be extremely useful. Thank you for that. Another really serious development, and all these news developments are very serious, otherwise we wouldn't be so concerned about them, is the news, based on a Supreme Court leak a few days ago, that the US Supreme Court appears poised to overturn Roe versus Wade, with really severe consequences not only for the right to abortion but for reproductive rights and a whole host of individual rights and liberties that the court has recognized on the basis of the same right to privacy that underpins Roe versus Wade. And, as with all questions of rights, there are implications for big tech and big tech accountability, and unfortunately this is another area where we can look to Europe for lessons learned and experience. Kasia, I know that you and your organization have done a great deal of work around reproductive rights and the right to information and privacy online in that context. What advice do you have, or what lessons learned can you share with American civil society groups and individuals, as we contemplate the possibility of Roe being overturned? Well, I guess it all starts with informing society of what is really at stake and preventing the debate from landing in extremes.
The worst possible result, which we unfortunately observe in Poland, is that both sides of the debate use more and more radical arguments, less and less evidence-based and simply more emotional. We observe the same problem in elections, and in the context of conflicts like the war in Ukraine: the same, so to say, lack of any rational place to meet and solve real problems. This is particularly troubling. Being very liberal myself when it comes to reproductive rights, I have to admit that there are usually societal problems hidden behind the other side's arguments; that argumentation wouldn't exist in society if there were no problem. So it's not just spin that we have to face from the other side. There are usually real problems we need to understand, yet there is so little space in the debate, starting with social media, for the two sides to meet and have honest conversations. I think this is the problem that needs to be tackled by civil society, because we are the only ones who can create a forum for a more rational, less emotional debate about very complex societal challenges. Thank you, Kasia. So, another hot topic of the past couple weeks is, of course, Elon Musk's planned acquisition of Twitter. From an investor perspective, what's your reaction to that? What does keeping a privately held Twitter accountable look like? Well, first of all, the situation with Twitter and Elon Musk is very concerning from a human rights standpoint. Musk has clearly stated his intention to limit content moderation as much as possible in the name of free speech, and this is very dangerous.
We know what happens when people can say whatever they want without safeguards: this interpretation of free speech can lead to an increase in hate speech and disinformation, and this would have a direct impact on public opinion and on democracy in general, especially in the current circumstances we live in, with the rise of extremism and division. Now, the offer has been made and Twitter accepted it, so we should expect several things. The first is regulators' review of the transaction, but that is usually limited to competition and antitrust issues, which are unlikely to be at stake in this case. The second is shareholder approval of the transaction, which would take the form of a vote. We thought we would have this vote at the upcoming AGM on May 25, but it doesn't seem like it. So shareholders have the power to influence this transaction to some extent. In their evaluation they will, of course, take into account important financial considerations, but also other non-financial considerations, as Musk's takeover will likely have an important impact on the future of the company. So it is crucial for shareholders to pay attention to Elon Musk's plans for the company and how they would impact human rights, and if there are no sufficient safeguards, it is very important for shareholders to oppose the takeover through their vote. Another thing to consider is that Elon Musk is considering taking the company private for three years to implement changes without shareholder scrutiny. Some would argue that this would make the changes more efficient, because there would be no shareholders to analyze, challenge, and approve or disapprove the company's plans. But we could also strongly argue that shareholders' ability to take an active part in Twitter's transformation would help the company not lose sight of human rights risks.
So what we see here is an attempt to sideline shareholders' rights as much as possible, so that Elon Musk can do whatever he wants with the company and then present us with a fait accompli, when it would be too late.

Thank you. Before we move to concluding remarks, I just want to see if any of you have clarifying questions for each other. Okay, so I wanted to give everybody a chance to sum up their takeaways from this conversation, and I'd also like everybody to share what you need from your allies, because this is a movement where we all have different roles to play, different strengths, different positionalities. One thing that I'm hearing is that groups like Ranking Digital Rights, which straddle the line between research and advocacy, need to do a better job of surfacing academic research, and other types of civil society research, into the public conversation. Someone in the audience highlighted that it's not entirely true that only anecdotal evidence of platform harms exists; that's just what hits the news. There are academic publications and free software for collecting evidence on various types of harms in a systematic manner, but it doesn't make it into the news, and it doesn't make it into the policy conversation. I think that's something that RDR, as well as others, can do a better job of: being the pipeline that gets that knowledge from the academy to the public policy conversation. So I'll go in reverse alphabetical order for once, starting with Sophie. What are your takeaways from this conversation, or what do you need from your allies in the movement to play your own role better?
What would be helpful is just increasing general understanding of the situation and the different dynamics at play today, because there are a lot of different subjects that get lumped together under the umbrella of digital rights, or whatever we call it. It includes everything from user rights and transparency on terms of service, to privacy protections, to issues like hate speech and misinformation, to issues like inauthentic accounts, which is what I personally worked on, and many others. And oftentimes, when people invite me to panels or presentations or talks, they have completely the wrong idea of what I work on. They give me a prompt like "based on your expertise working on artificial intelligence," and I'm like, no, I did not work on artificial intelligence at all, or on misinformation, or hate speech, et cetera. You have to understand a problem before you can solve it in the first place, and there are a lot of different problems currently lumped together under the same umbrella that are actually in many ways very different problems with different solutions. Many people have suggested breaking up Facebook or other social media companies; that is a solution that solves exactly one problem, which is that social media companies are too powerful. It doesn't do anything to address the others. So better understanding is my conclusion; that's what's needed.

Right. And to clarify, I think what you meant was that breaking up Meta is not a silver bullet: it would solve the problem of too much power, but we would still have many other problems that we would need different solutions to address. Great. Katarzyna, what's your takeaway, or your ask?

Thank you for a super interesting debate.
I would say two things. In terms of pursuing our mission and having more evidence to say what we want to say to policymakers: there is never enough evidence of social harms, even more than individual harms. Individual harm is super difficult to document and also not very convincing in times when people get killed, and we have a huge storm coming up here in Europe; maybe social harms are the only ones that can speak to policymakers, so we need more documentation of that. We've been preparing one project with Global Witness documenting how the news feed on Facebook, the way it is moderated, pushes disinformation up the feed. A simple thing, but again, we need to keep documenting it, so the more sources and the more evidence proving these issues connected to how social media and their engines work, the more useful it will be. Another terrain where we need more evidence is proving that an alternative internet, alternative business models, are possible. So everything that can prove the concept that something else, something more healthy, more sustainable, more privacy-preserving, is possible, exists somewhere, and is also economically viable would be incredible. Sticking with breaking up Facebook: we have also been against that claim for a long time. We tried to push instead for a more modular separation, separating the layers of something like Facebook to enable competition within each and every layer, including algorithms and interfaces. I still believe it's an excellent idea, but people simply don't understand it. So whatever we can do, especially coming from the business side, to prove or explain these concepts in practice would be incredibly helpful to push that debate beyond just complaining.

Chris? There's so much work to do. Just to pick up where Kasia left off, moving beyond complaining, I think, is really important with the public.
And so I agree with your point about making studies available, helping the public understand that there are solutions out there. I feel like a lot of the public feels powerless, in a world where they don't trust government right now, and the real options for who to empower are limited: government, companies, or the public. Empowering one while leaving another completely unempowered, I think, leaves us with a power imbalance that exacerbates problems like disinformation. So we have a lot of work to do to help the public understand that there's a role for them; there's a role for government, hopefully democratic governance that can also empower them; and there's a role for setting expectations that platforms use their power in a way that meets public expectations. So we have a lot of work to do to get folks talking together. And those conversations, in the US context, are also hopefully going to help bridge some of the ideological divides that we have, because we simply have folks who are living in different information bubbles. To break through that is a challenge that civil society has to take on.

Definitely. Jesse?

Just to continue building on what Chris was saying, I think we all have work to do in terms of continuing these dialogues outside of our own echo chambers, not just on social media; we have a tendency in the advocacy world to talk to ourselves.
And it might feel good, or it might be a fun way to spend time, sitting around debating these things with people who agree with us, but it's not a good way to make progress. In particular, being a straight white man, I've been in a lot of rooms, especially on tech issues (I know this is pervasive across society), where the whole room looks like me, and we're sitting around talking about how to protect people from online voter suppression. Until we do the outreach, the education, the coalition building that is really necessary, we won't make progress. And not only that: at the end of the day, the people bearing the brunt of all the harms that we're talking about are not me. They're communities of color; they're the people of Honduras and Azerbaijan, where the companies don't invest any resources; they're LGBTQ communities. So we need to do a better job of getting outside of that tech policy bubble and figuring out how to both bring people to the table, to help them have a voice as we make those decisions, and to communicate to the broader public, as Chris was saying, because it's going to take all of us to make progress and to make sure that progress is equitable and advances the things we care about deeply, and not just more of the status quo.

I couldn't agree more. When I first started working in this field, about 10 years ago, there was very clearly a sense that there's human rights, and then there's digital rights online.
There were tech issues and all other issues, and that line was always kind of flimsy and not entirely grounded. Now it's completely gone, and yet there are still people who think it's there. I think we need to really educate people that it's not about rights online; it's about rights. The problem with harmful speech is not that it exists on the internet, and yet a lot of people, including in Congress, act like the real problem is that there are images of child abuse on the internet. No, that's a manifestation of the real problem, which is that children are being abused, and you can extrapolate that to any issue we're concerned about here. So I think, as you're saying, it's really important to break down these barriers, to communicate and work hand in hand with the reproductive rights movement, the environmental movement, the voting rights movement, immigrants' rights, LGBTQ rights; I'm not going to try to list all the groups we're concerned about here, because we'd be here all day. But I couldn't agree more.

Sarah, as an investor, what can civil society groups, or whistleblowers, or other types of actors in our movement do to help you and others like you hold companies accountable, using your power as investors?

Yeah, sure. So, as I said, I think investors have a lot of power, because they can directly influence companies' behavior and decisions, but we wouldn't be able to do that appropriately without the help and support of civil society and academics. We're not experts; we cannot be experts in everything. We're here to listen, to understand, and to facilitate that conversation between civil society actors and economic actors.
And the Ranking Digital Rights team has been instrumental in the filing of several shareholder proposals this year on these issues, and thanks to that sort of dynamic, we were able to bring companies to implement and take on board some human rights issues that I think would otherwise have taken them more time to address. So I guess this is just my way to say thank you, and let's keep that conversation going.

Well, you're most welcome, Sarah. You and all the other investors have been really tremendous partners for us, especially over the past year. I'm conscious of time and want to be respectful of everyone's time, so I want to end by thanking all of our amazing panelists. It's been a joy every time I've spoken to each of you individually, and having you all together as a group has been a real treat. Thank you again to our funders for making our work possible. And the biggest thanks of all to the Ranking Digital Rights team, who've been doing all the hard work for the past year and even before that: figuring out the methodology, doing the data collection, figuring out the insights, keeping our tech working, all of the work that goes into producing the research that Jessica shared today. This is just the beginning of the conversation, including with those of you in the audience, and I look forward to continuing it in the weeks and months to come. Thanks.