Hi everyone. Thank you so much for joining. My name is Lauren Sarkesian, and I'm a Senior Policy Counsel at OTI. Over the past year, OTI has published a series of reports which look at how internet platforms use algorithmic decision-making for a range of purposes, including content moderation, the ranking of content in newsfeeds and search results, ad targeting and delivery, and making recommendations to users. Today, I'm very excited to be joined by a great lineup of panelists who will discuss the subject of our second and fourth reports: how we can promote greater fairness, accountability, and transparency around the use of algorithmic decision-making in ranking and recommendation systems. We'll have some time at the end for our panelists to answer questions from the audience, so if you have any questions, as our events team just announced, please use the Q&A function on Zoom, and we'll do our best to get to them towards the end.

So now I'd like to take a moment to briefly introduce our panelists. First, Daphne Keller is the Director of Platform Regulation at Stanford's Cyber Policy Center. Her work focuses on legal protections for users' free expression rights, particularly through platform content moderation policies and the use of algorithmic ranking and recommendation systems. I'll note now that Daphne will unfortunately have to leave us a little bit early, but we will do our best to get questions her way before she takes off. Heather West is the Head of Americas Policy at Mozilla, where she works with stakeholders and policymakers in D.C., as well as global product and policy teams. Lisa Hayes is the Director of Tech Policy and a Senior Counsel at TikTok, where she focuses on ensuring the company's policies respect the rights of the user and maximize joy on the platform. Finally, Spandana Singh is a Policy Analyst here at New America's Open Technology Institute, where she leads OTI's work related to algorithmic fairness, accountability, transparency, and content moderation.

Okay, so let's dive right in with some questions. Daphne, algorithmic ranking and recommendation systems are designed to respond to and optimize for user behavior. Could you set the stage for us a little bit and discuss what the perceived benefits of using these systems are, and what their drawbacks are?

Sure, and maybe I'll preface that just by mentioning another job I used to have: I was the Associate General Counsel for Web Search at Google, so I'm somebody who has spent a lot of hours in meetings about ranking systems. As to what ranking systems are trying to do, it's a little bit hard to generalize, because the goals of, say, an Amazon might be different than the goals of a TikTok or a Reddit. But very broadly, the goal is to show you what you want to see. And it's easiest to say what that means if you're talking about search results, whether on web search or within a particular platform, because there, relevance is the value proposition: ranking exists to give users what they were looking for. It's a little harder to say what that means if what you're talking about is a newsfeed on a product like Twitter or Facebook. There's this idea that if only posts were in chronological order, then they would be authentic or natural, and that intervening to rank the posts is inauthentic.
I think that's a little bit of a myth, just because the state of nature on the internet is that there are always other users trying to game the system for spam or for scammy promotions or phishing. So any platform that people actually want to use is already intervening with humans or algorithms, and what you're seeing isn't some authentic state of nature.

In terms of what the goal is, I'll talk about three goals and then I'll stop: technical goals, economic goals, and societal goals. On the technical goals, the engineers who build ranking algorithms, you'll hear them talk about things like quality or authoritativeness in content or in results. Those are both really important and legitimate goals, but those words also mark the moment when some human values turn into math. It's very hard to get away from the fact that there are human values underlying any concept of what is quality, what is authoritative, and so on.

On the economic goals, there's a simplified story that's popular right now about how the goal is to maximize ad revenue, and so the algorithm will optimize directly to your lizard brain or your id, to give you the thing that's going to keep you mindlessly clicking. That's not wrong, and if you are at a point in understanding these things where you're just learning about ads, it's a useful shorthand. But it's not a this-explains-everything framing, and it kind of gets sold as that a lot these days. It's a piece of the puzzle, but descriptively it can't describe everything; otherwise, there's a lot of data out there that wouldn't make sense. And normatively, it's a little bit like saying platforms should stop giving users what their behavior indicates they want, because users want the wrong thing; platforms should be making them eat their veggies or their kale. Which is fine, but then you need a trusted source or a process to define what the veggies are, and who decides, and what the values are that animate that.

And that gets us to the last thing I'll talk about as we think about search ranking, and not just search ranking, any algorithmic ranking: what are the societal goals? If we were talking about regulating, which in the EU they are (the P2B regulation, a regulation of transparency around algorithmic ranking among other things, comes into effect this month), if we are talking about societal goals or normative goals or legislation, what are those goals? What you'll see is that this question is a proxy for every other fight about values in society. There are very important policy sources, like the EU's Code of Practice on Disinformation, which say the goal is for content to be authoritative and not fake news. But you'll see equally important arguments that the goal is to maximize diversity of opinion and sources, and you can't serve both of those goals at once. Then you add that maybe the goal is actually to avoid racial discrimination, which is an important goal too, or that the goal is to be pro-competitive. Until we're on the same page about which of these goals we're trying to serve, it's hard to get down into the nuts and bolts of what a good algorithm looks like. I'll stop there.

Thanks for laying out a lot of the issues for us at the outset, including our lizard brains and how we take these things in. Next over to you, Heather.
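To make Daphne's framing concrete before moving on: here is a minimal, purely illustrative sketch of the trade-off between a chronological feed and an engagement-weighted one. The Post fields, the blend weight, and the quality signal are all hypothetical assumptions; no platform's real ranker is this simple.

```python
# Toy contrast between the "state of nature" ordering and an engagement-
# weighted ranking. All names and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    age_hours: float   # how long ago it was posted
    p_click: float     # model's predicted probability of a click (0-1)
    quality: float     # hand-defined "quality/authoritativeness" signal (0-1)

def chronological(posts):
    """Newest-first ordering: no ranking intervention at all."""
    return sorted(posts, key=lambda p: p.age_hours)

def ranked(posts, engagement_weight=0.7):
    """Blend predicted engagement with a quality signal.
    The weight is exactly where 'human values turn into math'."""
    def score(p):
        return engagement_weight * p.p_click + (1 - engagement_weight) * p.quality
    return sorted(posts, key=score, reverse=True)

posts = [
    Post("outrage_bait", age_hours=1.0, p_click=0.9, quality=0.2),
    Post("local_news",   age_hours=3.0, p_click=0.4, quality=0.9),
    Post("spam_promo",   age_hours=0.5, p_click=0.6, quality=0.1),
]

print([p.post_id for p in chronological(posts)])
print([p.post_id for p in ranked(posts, engagement_weight=0.9)])  # id-pleasing
print([p.post_id for p in ranked(posts, engagement_weight=0.3)])  # "eat your veggies"
```

Moving the single weight from 0.9 to 0.3 flips the feed from outrage-first to news-first, which is the sense in which choosing "what counts as quality" is a values decision, not a purely technical one.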
Can you talk about how Mozilla thinks about providing fairness, accountability, and transparency around the use of algorithmic ranking and recommendation systems?

Sure. I think most of you will know Mozilla for Firefox, the browser that we make, but we're also incredibly invested in how to move the internet forward towards the world we want to see, and we've become very focused on what we call trustworthy AI. A big piece of that is recommendation engines and ranking systems, because they show up in so many pieces of our lives, often kind of invisibly. We don't think about the fact that when I talk to my smart speaker, it's actually a recommendation system: it's making a guess as to what I'm saying and then making a guess as to how to respond. Or a search engine, or some of the governmental functions that have been in the news around bail, or maybe it's your insurance determination. So there are all of these contexts in which recommendation engines have become incredibly important, and they are incredibly useful. There's a reason that developers came up with this idea and said, aha, I can do something with this. And in fact, a lot of the time it works really, really well.

But the thing that's really coming into focus, and I think Daphne said it well, is that we're starting to think about what metrics we're using to optimize those algorithms. And that's where I think, as an industry, we have realized we did it a little bit wrong, because figuring out how to balance those signals and balance those values is really important. At the end of the day, I don't think anyone is still saying, I just want to surface the content that's going to get a click, because a lot of that isn't great content; it's not what you want your platform to be known for.

We're also looking at some of the bias and discrimination questions. What does it mean to have a fair system when it really is something that we as humans just drummed up and built? How do we talk about all of the different elements of that system in such a way that we can thoughtfully design a bail system that actually helps people make good decisions instead of one that reflects existing bias? How do we decide what to optimize on a search engine, when it's a bunch of links, or when it's just that one verbal response from my smart speaker? There's a huge amount in here that is very thought-provoking, so obviously this is timely. I'm not sure that we're going to solve it anytime soon, but I'm really glad that this conversation is happening.

When we're thinking about fairness, accountability, and transparency, it shows up, as I said, in all parts of the development cycle. What are you trying to build? How are you training your system, and is the data you're using good data, or is it biased data? Have you really explored that? Is the output what you expected? You could use the most perfect data set in the world and have something unexpected happen with it, and you could have the most perfect algorithm in the world and have something unexpected happen with it. So I think it's incredibly important that we're also increasingly recognizing that algorithms aren't perfect. They need a critical eye, for lack of a better term, so that we can sit down and understand what we're doing, how it's impacting people, and how we can make it better, rather than sitting down and saying, well, it's just data, data can't be wrong.

Thanks. I think those are some of the questions we'll be digging into a little bit more later.
So Lisa, I'm turning to you. TikTok is a fairly new tech company in the scheme of things, and most recently, I know you all published information about how the platform's recommendation systems work and what the associated challenges are. This is really valuable information, and information that only a handful of platforms publish right now. What was the thinking behind that move, and how does it reflect TikTok's thinking about fairness, accountability, and transparency?

Thanks, Lauren, and thank you also to OTI generally for convening what I think is a really important conversation in this ongoing dialogue you're having. For those of you who are over the age of 30, let me start by saying that TikTok is a short-form video app. The content ranges from Will Smith doing the Wipe It Down challenge, to Iowa farmers showing off their fields and harvest, to me posting random videos of my cats to my two followers. As a fast-growing startup, we made a commitment last year to be more transparent about our policies and our practices, because we want people to understand TikTok, have trust in the platform, and know how we operate. In this effort, we announced a number of ways that we are further strengthening our teams, our moderation policy, and our overall transparency efforts.

By way of quick background, the first thing we announced was the creation of our Content Advisory Council, which we launched earlier this year and which is made up of leaders with expertise in child safety, bullying, racial bias in algorithms, hate speech, manipulated media, and the list goes on. We also announced how we would further increase transparency around our content moderation policies and the practices we employ to protect our community, which often skews younger. So we have started releasing transparency reports and simultaneously engaging in deeper conversations with our outside stakeholders to learn how we can continue to improve the platform and improve those public transparency reports.

So, back to your original question: our decision to share more detailed thinking behind our For You feed is part of this larger commitment to trust and transparency. People generally access TikTok content in one of two ways: either you can choose to affirmatively follow certain creators, or you can visit the For You feed, which is where we recommend videos that you might be interested in. We published a blog about how the For You feed works in order to help users understand the algorithm. We wanted to counter some of the issues that all recommendation services are grappling with, as touched on by Heather and Daphne, but we also wanted to share tips for how people can really personalize their experience on the platform, whether as a user or as a creator. There's so much speculation about how these algorithms work, and we wanted to make it really clear how each and every interaction you might have with TikTok will impact the algorithm and your experience. We think that both better empowers creators to deliver interesting content and is infinitely better for users trying to find content that's interesting to them. So I encourage everybody to take a look at that blog, and if there are additional things we should be doing to improve that experience, we certainly want to hear them. We're committed to being an industry leader in transparency, and these public disclosures were just the first step of that journey.

Thanks, Lisa.
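Lisa notes that every interaction on TikTok shapes the For You feed. As a rough sketch of what interaction-weighted scoring can look like in general (the signal names and weights below are invented for illustration and are not TikTok's actual model), consider:

```python
# Hypothetical interaction-weighted interest scoring. Each past interaction
# with a topic nudges that topic's score up or down; the feed can then favor
# high-scoring topics. Signals and weights are assumptions for exposition.
SIGNAL_WEIGHTS = {
    "watched_to_end": 3.0,    # strong positive signal
    "liked": 2.0,
    "followed_creator": 4.0,
    "shared": 2.5,
    "not_interested": -5.0,   # explicit negative feedback
}

def interest_score(user_history, candidate_topic):
    """Sum the weighted interactions the user has had with this topic."""
    return sum(SIGNAL_WEIGHTS[signal]
               for topic, signal in user_history
               if topic == candidate_topic)

history = [("cats", "watched_to_end"), ("cats", "liked"),
           ("mom-rants", "followed_creator"), ("crypto", "not_interested")]

for topic in ["cats", "mom-rants", "crypto"]:
    print(topic, interest_score(history, topic))
# cats 5.0, mom-rants 4.0, crypto -5.0 -> crypto gets downweighted in the feed
```

The point of publishing an explanation like TikTok's blog is that users can see which of their own actions feed a model like this, and act on it deliberately.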
Thank you. Spandana, next over to you. You published the two reports we're discussing today, and actually all four reports that birthed this series, on how a range of platforms use ranking and recommendation systems, or algorithms I mean to say, to shape the content that users see. What are some areas where you think platforms are currently doing well on this front, and where do you think they could do a bit better?

So, as Daphne and Heather and Lisa mentioned, right now companies have designed their platforms so that users do have some sort of an understanding of why they're seeing a particular post in their newsfeed or in search results and recommendations, and some platforms have also given their users a limited set of controls to personalize their experiences and determine what kind of content they want to see and how their data is used in that personalization process. But as we highlight in the reports, companies can definitely do more. They're pretty lengthy reports, so I'm going to try to distill the recommendations about what they can do better into three categories.

First, I think that platforms really need to provide more information around the policies which guide how these algorithms are used. For example, if your company is using an algorithm to downrank content in a newsfeed or to delist a search result, then you definitely need to let users know what the policies around that are and when those practices will be applied, because right now that usually happens in a very opaque environment. Second, platforms need to publish aggregate data which outlines the scope and scale of how these algorithms are used. Again, downranking is a good example: platforms are increasingly using it as a way to moderate content, and so it's really important for us to be able to understand how often these tools are used and how these processes impact user speech. And the third category: as platforms begin using these curation processes more and more, I think we're also going to have to see a space in which they start thinking through what remedy and redress look like. If you have your page delisted from search, or your post downranked, or your post omitted from a recommendation system, and that starts to happen more and more, what kind of accountability does the platform demonstrate, and how can users appeal these decisions and advocate for their own rights?

Thanks, Spandana. So I think that kicks us off really well to start asking some questions of the entire panel. There are a number of different stakeholders involved in the push for platforms to provide this greater fairness, accountability, and transparency around algorithmic decision-making in ranking and recommendation systems. I'd like to take a deeper look at exactly what this means to each of you. In particular, are there specific data points or pieces of information that come to mind? Are there specific groups who these efforts should really be aimed towards? Daphne, I guess we'll start with you if you have thoughts.

Sure. So I have like ten answers, but I'll hone in on just one or two of them. One thing that I think is very important in this space is to not only look at the aggregate data that platforms are sometimes willing to provide, or sometimes need to be hassled into providing. The aggregate data is very important.
We should know how many pieces of content Facebook thinks were disinformation or thinks were hate speech, how many it took down or demoted, and how often that got challenged and redressed. That's interesting data. But until researchers can see the actual content and the actual users who were affected, we can only take Facebook's word for it. And the same goes for Google or YouTube, or any platform: unless researchers have a way of seeing the real content affected, we can't second-guess the platforms. We can't say we think this decision was made wrongly. We can't say there's a pattern across this enforcement where certain users are on the wrong end of disparate impact from the enforcement of the policies. All of these really key questions depend on looking at the actual content, and not just data about the content.

And we have a real-world illustration of this in the Lumen database, which is where a small handful of platforms send takedown notices, mostly about copyright. It's hosted at Harvard, and all of the really robust empirical research that we have about how notice and takedown operations work, or fail to work, what's good and bad about them, comes from there, because that's the kind of data that we need. Very high on my wish list, if I could get this kind of better transparency about one thing, would be the contents of the GIFCT database. That's the Global Internet Forum to Counter Terrorism database, which has hashes representing, I think, now over 100,000 images and videos which have been identified as violent extremism and are being automatically suppressed from more than 13 platforms. But nobody knows what they are, and nobody knows what level of accuracy or error or bias is in there, and we won't until we can see the actual content. There are real legal hurdles to be cleared and needles to be threaded to even make that possible, but I think that should be very high on everyone's wish list.

Does anyone else have thoughts on specific information or data that comes to mind in this regard? Lisa, it looks like you're jumping in.

Yeah, I'll just jump in with one side note. I am so grateful for all of Daphne's comments. The one thing I would note is that not all platforms do the same thing, and this question sort of asks us to presume a standard that should apply universally. A platform that my child is using to access news and to do searches, for example, as a 10-year-old, may be showing very different results than if I as an adult were to go on Google and search for similar information. And it's quite possible that's appropriate, given the audience that platform is for. On some platforms, you can search for everything; some platforms are very, very narrowly tailored. So we need to look at audience, we need to look at the intent and scope of the platform, and then take steps, sort of industry by industry, to make sure that there is fairness and accountability and transparency, but in a way that is appropriate for the person and the platform that you're talking to. I wish there were a one-size-fits-all, but there just isn't in this space.

Okay. I agree completely that any transparency or accountability or fairness that we're talking about has to be contextual. Every system is different, and the fact that Netflix is recommending The Great British Baking Show is a very different context from whether my job application makes it in front of a recruiter.
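Returning for a moment to the GIFCT hash database Daphne described: the mechanics of hash-sharing are simple to sketch, and the sketch also shows why outside auditing is hard. This toy version uses an exact SHA-256 fingerprint as a stand-in; real systems are generally understood to use perceptual hashes (such as PhotoDNA) that tolerate re-encoding, and nothing here reflects GIFCT's actual implementation.

```python
# Minimal sketch of cross-platform hash matching. Only fingerprints circulate
# between platforms, never the underlying content, which is exactly why
# outsiders cannot audit the database for accuracy or bias.
import hashlib

shared_hash_db = set()  # hashes contributed by member platforms

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def add_to_db(content: bytes):
    shared_hash_db.add(fingerprint(content))

def should_suppress(upload: bytes) -> bool:
    """Each member platform checks uploads against the shared hash set."""
    return fingerprint(upload) in shared_hash_db

add_to_db(b"example flagged video bytes")
print(should_suppress(b"example flagged video bytes"))  # True
print(should_suppress(b"unrelated video bytes"))        # False
```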
And those two contexts necessarily need very different kinds of examination. We've been thinking a lot about social media in particular and how to be transparent, especially when there isn't a lot of information out there from companies. So it's fantastic that TikTok is putting some information out there and figuring out how to make it useful for people, because you can barrage people with information, and that's not helpful for anybody, with very few exceptions. We've been thinking about how you look at targeting algorithms and recommendation algorithms, whether that's for an ad, or for promoted content, or for what is considered organic content, and about what kind of information we need to get to journalists and researchers and engineers who are trying to learn from the mistakes of those who came before them, to really help them make a better system. And it's going to necessarily include more transparency than we've seen before. But we have to figure out how to do that in a way that is actually understandable. If you sit me down, and I have a technical background, though it's been forever since I coded, if you sit me down in front of the algorithm for any of these systems, that's not going to be useful for me; that's not going to be useful for anybody. It's a very complicated system, and you have to be talking about inputs, process, and outputs, and then what your overarching process for thinking about all that is, so that we have some trust in the system as we figure out how to fix the system.

Yeah, and I would just build off of what Heather said there. I think, Lauren, like you said, transparency means a lot of different things to different people. We continuously push platforms to be more transparent, but sometimes that doesn't land: for example, Reddit publishes the open source code for its newsfeed ranking system online, and that is a great method of transparency, but like Heather said, if you're not a technical person, it really doesn't mean much to you. So I think that as platforms think through how to provide meaningful transparency and accountability around the use of these systems, it would probably be useful for them to look into tiered models, where they consider what kind of transparency and what level of transparency they're providing for different groups, such as researchers, users, policymakers, and so on, and how they can best frame this information so that it's digestible for the intended audience.

Thanks, everyone. As many of you discussed a little bit in your answers, the use of algorithms for ranking and recommendation purposes can often perpetuate harmful outcomes, biases, and discrimination, often because the systems are designed to emphasize engagement and prioritize optimization. How can these systems be made more fair and equitable, and should they be prioritizing different values and principles? Daphne, it looks like you're ready.

Well, I'm the one who unfortunately will have to leave early, so I will seize the moment to talk. I think that the concerns about algorithmic outputs that seriously perpetuate discrimination in society stand alone as, in a way, a more tractable problem.
If the problem is that job listings are being shown to people in different ways because of their race or their gender, that access to housing or to credit or to employment is being curtailed based on considerations like that, and that algorithms are perpetuating that, intentionally or unintentionally on the part of their designers, that's something that should not be too hard to figure out. It should be possible for researchers to submit bulk queries or scrape the results on websites in order to identify when things like that are happening. They shouldn't be facing a threat of violating the CFAA; there shouldn't be scary criminal and civil laws that prevent researchers from gaining this information. We should be asking platforms to open up APIs that let researchers submit bulk queries and see if the results from a platform that thinks you're male are different from the same platform thinking you're female. Those seem to me like very low-hanging fruit. But we don't have them yet, right? So let's keep talking about them a lot.

Beyond that, I think we're at this political moment where there's going to be an opportunity to ask for better transparency really soon. We see, for example, in the PACT Act from Senators Schatz and Thune, a big transparency reporting obligation, and we're seeing it in European legislation. And I think it's really important for civil society and academics to put our heads together and figure out what it is we want to ask for in this moment, because we can't ask for everything. It can't be: all platforms shall disclose all things at all times. That's neither possible nor politically feasible. So we need to have a conversation about values and priorities, about what we want to ask for, about what data is most useful versus what would make a lot of work for platforms without actually being useful. There's a really concrete cost-benefit analysis conversation to have, and I think OTI has done a great job teeing this up, and hopefully we will keep having it.

Thanks. Anybody else want to jump in? Lisa.

I guess, once again, talking a little bit about the fair and equitable piece: one of the things that we're really worried about at TikTok, with our more limited platform, is filter bubbles. Candidly, over the last several months, I have been using TikTok primarily as a platform to make me laugh. As a result, TikTok knows I like cat videos. I like dog videos. Don't judge me until you have watched them; they are surprisingly funny. I also like the mom rants about life in the time of COVID. But when I go to TikTok and open up my For You feed, it's not just cat and dog videos, and it's not just a bunch of 40-something moms ranting the way you can find me doing in my kitchen, not infrequently. Instead, TikTok intentionally intersperses my feed with a wide variety of topics and creators who I otherwise would never stumble across. I watched a doctor demonstrate how to make a procedural mask fit more tightly. I watched a comedian in France who had me laughing out loud at his stand-up routine. And as a result, I've started intentionally following dozens of accounts that I never would have discovered on my own, had TikTok just continued to send me my dog and cat videos.
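Lisa's description of interspersing unfamiliar topics into the For You feed is one common way recommender systems soften filter bubbles, sometimes called diversity re-ranking. Here is a toy sketch under assumed inputs; the injection cadence and category labels are invented, and this is not TikTok's actual method.

```python
# Hypothetical diversity re-ranking: take the engagement-ranked list, then
# periodically slot in an item from a category the user hasn't engaged with.
def diversify(ranked_items, user_categories, every_n=3):
    """ranked_items: list of (item_id, category), best first."""
    familiar = [i for i in ranked_items if i[1] in user_categories]
    novel = [i for i in ranked_items if i[1] not in user_categories]
    feed = []
    for idx, item in enumerate(familiar):
        feed.append(item)
        if (idx + 1) % every_n == 0 and novel:
            feed.append(novel.pop(0))  # inject something outside the bubble
    feed.extend(novel)                 # any leftovers go at the end
    return feed

ranked = [("cat1", "cats"), ("dog1", "dogs"), ("mom1", "mom-rants"),
          ("med1", "medicine"), ("com1", "comedy"), ("cat2", "cats")]
print(diversify(ranked, user_categories={"cats", "dogs", "mom-rants"}))
# -> familiar items with 'med1' (medicine) injected mid-feed, 'com1' at the end
```

The design choice here is the trade-off Lisa describes: pure engagement ranking would never have surfaced the French comedian, while too aggressive an injection rate would make the feed feel irrelevant.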
To that end, and to talk a little bit about what Daphne touched on in terms of the steady draw to our reptilian brain, if you will: we are trying to always have more material for people who are interested in consuming more material. And if you are unhappy during this time of COVID, you should be able to do whatever you want online for as many hours as you want, as an adult. But as the mother of a tween, I'm particularly concerned about the emphasis on engagement that you asked about, Lauren. I love TikTok, but I don't want my 10-year-old using the platform for five hours a day. And so as a company, we have responded to those concerns by introducing Family Pairing, which lets parents remotely manage the types of content and the amount of time that kids can spend on the platform. But again, that's putting the tools into the hands of the users, and it requires an engaged set of users who are being thoughtful about how they engage with the product. So I think the next step will be some media literacy, to try to help ensure that all Americans, and certainly younger Americans, and content creators and consumers, understand exactly what their options are and how to engage with all of these platforms in a media-literate way. I'm not sure we can put all of the onus on the companies. I think companies need to make choices and decisions available, and then partner with our users to try to have the best, most positive experience for everyone.

Just to build on that a little bit: I agree, and I think that piece about literacy is really, really important, as is the fact that companies are starting to talk about that balance. Do I want my social media feed to be a safe space, or do I want a diversity of views, and to what extent is the platform making that decision for me? I will admit there have been moments where I have locked down my Facebook feed in particular, because I just couldn't deal with it. And so it is a filter bubble. I made it a filter bubble, and in that moment that was absolutely the right thing for me. But that does change the way I interact with the platform. And I think the answer is probably having a diversity of platforms, where some of those are your happy little filter bubble full of your cousin's kids, which is kind of Facebook, and then I will often go to Twitter for a completely different experience: I want to see more diversity of views and more discussion, as long as it's intelligent discussion, though of course how do you weed that out? I do think it's great that companies are thinking about that balance in a different way and thinking about what they want on their platform. And if TikTok can figure out that I like dog videos and not cat videos, that'd be fine.

And Heather, we actually do give you the option to press and hold on a video, so if you really don't like it, you can notify the algorithm that you really don't want to see that kind of content.

I have not played around with that.

I actually have, and I'm older than you, my friend. But we want to give you the information that you want to see, while also ensuring that you're not totally in a filter bubble.

And just to build on that: I think when we talk about fairness and equity and reducing harmful outcomes and biases, we've been thinking through what some of the methods and approaches to this are, and some of the ones that we've been discussing include impact assessments.
Impact assessments, especially human rights impact assessments, are really valuable ways for companies to proactively understand whether their systems are working as intended. There are also algorithmic audits. In the reports, we particularly recommend that companies have independent entities conduct algorithmic audits in order to identify potentially harmful outcomes and also make recommendations on how to mitigate discrimination and bias. And we think it's particularly important for companies to solicit or conduct these audits proactively, but also in response to concerns that are surfaced by civil society organizations and community partners. We've talked about the concerns for children: I have a little cousin who one time was on YouTube watching a video of Elsa and Anna, and then suddenly I was like, oh my god, what are you watching, that doesn't look like something a seven-year-old should be watching. So I think users should be able to surface those concerns as well, but it definitely needs to be a two-way conversation.

I think I would be remiss if I didn't give a quick plug for a project that the Mozilla Foundation is doing called YouTube Regrets, which is surfacing exactly that kind of story: how did the user get from point A to point B, when they were really excited to watch the video they started with, and then the recommendations automatically surfaced something that led somewhere else, whether that's radicalization or something gory, or going from kind of an affirming video to a very negative video. And I think that finding those stories is a very important step, because if we move beyond the one or two stories that hit the news and start saying, okay, there are a bunch of different ways that this manifests, then we can ask: how can we make it better? How can we make it more fair and more accountable, and build the systems we really want to be using and want to enjoy?

Thank you. So I have one more hopefully quick question, and then we'll turn to some questions from the audience. And thank you, Daphne; it looks like you have to sign off now.

Thank you so much for having me. This has been great, and I'm so sorry to have to go early.

Thank you. So, to our audience: if you have any questions, please drop them in the Q&A function on Zoom; it's still open. Okay, so a theme that I've heard a number of you touch on is that it's hard to create standards across platforms, because every platform is so different and unique. Obviously, around the world, policymakers are grappling with that question right now and starting to think through how they can regulate the use of algorithmic systems. Do you think there's room for policymaker action at all when it comes to greater transparency and accountability around these systems? Obviously Daphne already spoke a little bit to some of the language in Senator Schatz's bill, but I know there are other efforts around that, so I welcome your thoughts. Spandana, do you maybe want to kick it off?

Sure. So I think in the U.S. context, it's important to recognize that the First Amendment limits the extent to which the government can direct companies in what content to put on their platforms. But I think when it comes to transparency and accountability, policymakers can enact rules which encourage platforms and direct them to provide this transparency, therefore pushing them to be more accountable around how they use these systems.
I also think that when we talk about algorithmic systems, we would be remiss not to also talk about the need for comprehensive federal privacy legislation, as an important and necessary step towards ensuring these systems are used fairly and responsibly, and towards ensuring that they are also privacy-protective.

I mean, from TikTok's perspective, we certainly recognize the role for regulation, and we are happy to work with policymakers as they develop new laws in this area. We are also supportive of a national privacy law. But with all these things, the devil's in the details, so to speak. What does the accountability that you mentioned look like, Lauren? The internet is such a vast ecosystem, as Heather so wisely touched on, with so many different types of platforms out there. If the New York Times shows me something in my daily feed that is very different from what it's showing my husband in his, what does accountability for that mean, what are the actual harms, and how do you quantify them? I mean, if this were easy, I think we would have already done it. As a company, we're certainly eager and excited to be part of the conversation, thinking through ways that we can make sure that we are being transparent and that we are avoiding discrimination or any types of harmful content on the platform. But so far I haven't seen a proposal that hits the nail on the head for everyone.

I think that actually is an important point. There's so much that we're still figuring out and that we don't know, and we're all evolving in our own understanding of what fairness, accountability, and transparency look like, that a huge part of what I want from the government is really expanding their tech expertise: understanding where the problems are, understanding what the barriers are for folks inside companies who really do want to do the right thing, or whether it's just that they're not tasked with it. And then sitting down and talking about the importance of access to data from these platforms for researchers and for journalists. And maybe we can bite off an easier-to-chew piece of it than trying to tackle the whole ecosystem. We've been spending a lot of time thinking about transparency for political ads from platforms who run those kinds of ads: what kind of data we need to really understand the system, and what kind of data journalists need to understand what's going on within that context. I think we can start doing that examination for each of these potentially very different algorithmic systems and up-level from there. And there, I think, policymakers can really take that framework, start working with it, and hopefully come up with something helpful.

Yeah, I will plus-one Heather's comment on the need for increased tech savviness in the government, both in Congress and in the agencies. Technology is just an increasing part of the everyday life of all Americans, and we need the people writing regulations and laws to truly understand how these systems work, so they can put realistic regulations that are actually achievable into place.

Yeah, I know we at OTI very much agree with that and are constantly working on it. Turning to some audience questions now, then: we've got a couple that have come in here. One person asks, and Heather, I'll give you a heads up, this is probably a question for you.
So this person says: I'm curious about the technological measures that might support reforming these companies and the algorithms they use. Basically, if users were in charge of their profiles across all platforms, it would restore the kind of balance we need. There would still be plenty of opportunities to make money, as there is for internet access and video services, for example, while protecting privacy, security, and inclusion. My question is: is anyone pursuing technological solutions, this one or any other, as a means of establishing an important foothold in the policy world?

I'll start. I think, absolutely. It would be silly to think that these amazing engineering companies, which are full of top-tier analytics tools, aren't aiming them inward and sitting down and saying, okay, where are the places where we can actually provide options that people will understand? Where are the pieces that we really can't surface because they're proprietary? Can we find that compromise and build the tools? I do think that there are often fewer internal resources placed on developing those tools; you'll notice that a lot of user control tools start out a little shaky. They're a little hard to understand, you're not sure what they do, there's one toggle button but it says it does three things. But that's all part of this long evolution, and it has to be a long evolution. I don't think people will be okay with one little tool where nothing happens. I certainly think that companies, including search companies, are looking very carefully at some of the user controls. And I will say that a lot of the companies who have spent time on this do have some pretty in-depth tools and controls that you can use. So there's your Google search profile, for example, or Facebook has some similar tools to say, ooh, I don't want that in my history. Amazon is a really useful one, because I do not need a recommendation based on that vampire romance that I bought a friend, because it's just not my thing. She loved it. And figuring out a win-win tool like that, one that helps me the user but also helps the company give me what I want, I think that's a big piece of it.

I also think that there's a significant amount of research in the big picture about how we apply existing techniques, like disparate impact analysis, to these fuzzier, AI-driven, automated decisions that we don't have as much context for, and that we don't have as much information about, as we do for, say, a credit card eligibility decision: yes, no, here's why I didn't give it to you. Saying I advertised this event to you and not that one is a harder thing to provide transparency into, but I do think there's some energy inside companies to see how to make that happen. And there's certainly energy outside the companies to try to open the door a little bit, get some of that data, and do that analysis externally too. And I do think that that analysis leads to positive change.

I would just build off of Heather's point about tools. When I was writing these reports and digging into these platforms, honestly, I discovered so many different types of user controls that, as a regular user, I had no idea existed. And I think that as companies continue to iterate on what these tools are and what controls users have,
they also need to make sure that they're accessible: you shouldn't have to go through six different drop-down menus to be able to change what data is used, or even to understand what data is used. And I definitely think that's an area where a lot of companies can particularly grow.

Okay, thank you all. Moving on to another audience question here. Kyle in our audience asks: when writing these AI scripts for censoring content, especially for children, how can we avoid biases when defining where that censorship begins and ends? For TikTok, I know a lot of LGBTQ content is censored; does that play a role as well?

I'm going to guess that one's targeted to me, so at least I'll start with it. I'm not sure I agree fully with the word censor, but in terms of content moderation, it's true TikTok does have community standards and terms of service that we do enforce across the platform. We have chosen to keep the platform a largely positive one that encourages creativity and joy. If we find content that is inconsistent with that, we take it down. For example, if we came across videos of 16-year-olds drinking alcohol, we would take them down; or engaging in self-harm, we would take those down. There is the right to appeal those takedowns, and appealed takedowns go to human review for resolution. I'm not sure about the part of Kyle's question about LGBTQ content being censored, because that has not been my experience on the platform, so I would love to learn more about that. We have partnered very closely with the LGBTQ community on Pride Month and have worked with several creators to try to make sure that the content being displayed on the platform both meets the terms of service and is being enjoyed across the platform. But I think a lot of platforms are going to have some sort of bias, as laid out in their terms of service and content moderation standards, because the platforms are attempting to do different things and to be different places. You will find a very different set of terms of service on Wikipedia than you will on Reddit or in the New York Times comment section. And as long as those policies are not discriminatory, are written down, and are being enforced equitably, I think that's a huge step in the right direction from where we were 10 years ago. But there's always room to continue improving, and I would love to hear more about Kyle's question, maybe offline, to see if we can address it.

Thank you. Okay: what should multistakeholder collaborations around promoting fairness, accountability, and transparency in ranking and recommendation algorithms look like? How can industry, civil society, government, and other entities better engage, and are we currently missing out on key voices?

I'm going to jump in there. I definitely think that we need to include civil rights groups in this conversation more, especially as we start talking about how these systems can generate harmful and biased results. We've seen how ad systems can generate biases in terms of employment and housing, and so I think including those voices is particularly central.
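Heather mentioned applying existing techniques like disparate impact analysis to these automated decisions, and Spandana just pointed to ad systems biasing employment and housing exposure. Here is a minimal sketch of the classic four-fifths rule check applied to a yes/no outcome like "was this user shown the job ad"; the data, group labels, and the 0.8 threshold convention are illustrative assumptions, not a complete audit methodology.

```python
# Toy disparate impact check on any binary automated outcome.
def selection_rate(outcomes):
    """outcomes: list of booleans, True = favorable outcome (e.g., ad shown)."""
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(group_a, group_b):
    """Lower selection rate divided by the higher; below 0.8 is the
    traditional red flag under the four-fifths rule from employment law."""
    ra, rb = selection_rate(group_a), selection_rate(group_b)
    return min(ra, rb) / max(ra, rb)

shown_group_a = [True, True, True, False, True]    # 80% of group A saw the ad
shown_group_b = [True, False, False, False, True]  # 40% of group B saw it
ratio = disparate_impact_ratio(shown_group_a, shown_group_b)
print(f"ratio={ratio:.2f}", "-> flag for review" if ratio < 0.8 else "-> ok")
```

A check this simple is only a starting point; the harder problem the panel keeps returning to is getting researchers lawful access to the outcome data in the first place, for example through the bulk-query APIs Daphne proposed.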
And then, as companies think through tiered transparency mechanisms, I think greater collaboration between platforms, civil society, and researchers is also especially important, to ensure that these systems are established in a way that provides meaningful transparency and accountability.

I'll echo that. I think the most important beginning of that process is really bringing everyone into the room, and that builds trust in and of itself: to say, okay, yeah, we want to hear what you say, and we're going to have a good conversation about it and then take that back. But then the folks on the outside have to see something happen with that. So there's almost the possibility, if everyone comes into the room in good faith, of a virtuous cycle where we all start working towards the same goal. And I think that is the key: if we can figure out what shared goal we're all working towards within this broader context, whether it's for one company or for one piece of the fairness, accountability, and transparency discussion, and see if we can make progress, I think that would go a long way. I know from our perspective, we want to see data from companies about how they make these decisions.

Thank you. Okay, I think maybe we'll move on to the next one; we have some more audience questions coming in. Sila in our audience asks: do you think some of the remedies to these problems can come just from market ventures? What is the public's stance towards these issues? So I know we talked about some user controls, but I'm not sure if anybody has thoughts on other sorts of market forces that might have an effect.

We can be very confident that market pressures can affect companies' actions, and I think we've seen that with some companies in this space in recent weeks, who have perhaps not sufficiently addressed some of the questions of diversity and equity that we were just touching on, and who have seen repercussions from their advertisers. So I think market pressures can certainly come to bear. In terms of knowing good tools for the average individual to use, I'm afraid I'm not familiar with any, but perhaps Spandana or Heather has some suggestions.

I would agree with Lisa that advertisers are a good example. I think a number of our colleagues have also started thinking about the role investors can play in this, and whether they can elicit some sort of change in ensuring that companies are using their tools responsibly and demonstrating accountability.

Yes, I definitely think there's room for that, and it's probably going to ramp up as this moment also ramps up and as we push companies to be more accountable. I think investors are a key piece of that, but I think we can talk about company boards too. There are so many different tiers of power, all interrelated, within any of these platform companies that there's rarely just one person calling all the shots. And figuring out how to reach that person, whether it's their venture capitalist reaching out and saying, hey, I'm really worried about my investment, or a member of the board saying, hey, I talked to this person and I'm worried, or senior leadership coming to the CEO and saying, I don't think this is right, there are a lot of avenues there, though not necessarily for an individual user. But I think the movements are building around these companies.
I think that Facebook is the obvious example right now, with advertisers applying increased pressure, and with individual users speaking out about the things that make them uncomfortable. And companies pay attention. They're not necessarily going to say anything publicly, but it does make an impact, I think, and one that over time is potentially a really strong impact. But that particular strategy is not for the impatient.

Thank you. Okay, moving to another audience question. Gene asks if any of you have recommended resources for monitoring and identifying racial, ethnic, gender, age, and other forms of bias, I'm assuming in these systems.

OTI has put out some terrific reports that are written in plain English and are easy to understand and follow along with, with recommendations both for companies and platforms and also for users, to help them identify issues with a platform or things they might want to keep a watchful eye on. So I would turn them to OTI's resources.

Thank you for that, and for taking care of the shameless plug so I don't have to shamelessly plug.

Yeah, and I would also say that through our research, we've found a number of investigative journalists and researchers who put out some really great work; a number of the case studies we cite, for example, are based on ProPublica research. So I think it's just about finding people who are really invested in these sorts of cases and case studies and following the work that they do.

I think it's also worth digging into some of the academic research out there, because there's some really interesting stuff very specifically focused on fairness, accountability, and transparency. In fact, the first one I thought of was a group called FAT/ML, which is just Fairness, Accountability, and Transparency in Machine Learning. It's not the most ideal acronym, but it works pretty well, and they have some great papers; in particular, you can look and see how people dug into a question and what their methodology was, and just use that as a resource when you're looking at these platforms and thinking them through yourself.

Okay, well, thank you everyone. I think that might be a good note to end on, and thank you for pointing back to Spandana's amazing resources, her reports, which are available on OTI's website. I'd like to thank all of our panelists here for their very thoughtful insights and for participating today, and I'd also like to thank our audience for joining in, and New America's events team, who helped facilitate this all. So thanks everyone for tuning in, and have a great afternoon.