Hello everyone. My name is Parama Dutta. I'm a product manager at Reddit and a product enthusiast, and I'm here to talk about what we know about social media and how to make social media platforms safe for their users. This topic is complex and controversial. We've seen a lot of media coverage on the negative impact of social media on its users through different movies and series. I'm sure you've heard of The Social Dilemma, which talks about people getting hooked on platforms like Twitter, Reddit, and Facebook, engaging with other users, and how that engagement can sometimes turn detrimental and negative. More recently I watched a show called 15 Minutes of Shame, on HBO, which shows how one small incident can spiral into a very real threat to a user's life: a simple comment or a simple post can escalate into an imminent threat to someone's real life. If you haven't heard of these two references, I'd encourage you to watch them; they are entertaining and eye-opening.

Moving on: what are some of the issues we see on social media? There is a web of them, with countless names and aliases for the same underlying problems, but let me highlight some of the big ones that are common across platforms. Hate speech is one: conversations that are racist, that promote violence, things like that. This chart highlights some of the biggest, most pressing problems we see when users interact with each other on a social media platform, problems where a person's safety and personal life can come under imminent threat. Sharing personal information, or doxxing, can leave you vulnerable to someone literally knocking at your door. People going through depression or other personal problems can face self-inflicted issues like self-harm or suicidal thoughts. We have seen many examples of these problems become reality, and they have been covered in the news over time. Misinformation and disinformation are the newest of the problems we are facing.

There is no single smart way to tackle all of these problems, because they are all different, and the variety of information that contributes to each problem is extensively different as well. So how do we build a platform that protects against imminent threats like these, and how do we make sure our users feel safe on it? We've seen a lot of coverage of TikTok as a product where children are exposed to pornography or sexualization, and that has been a big issue. I don't think we want a future generation that has to deal with problems like these. It is our responsibility to make products, to make technology, that is safe for users to use. With that, I want to talk about how, as a product manager, you can build products that tackle this variety of issues in this harmful space. So, moving on to how to build a safe social media platform: in a nutshell, there are many ways to go about identifying these problems, tackling them, and building safe products, but I have grouped them into four main categories.
The first is that we need to listen to our users. As people engage with each other, we need to give them the ability to report something bad they see on the platform, whether that's a piece of content or a bad actor. We need a way for users to tell us what they like and what they don't. So if we build a product, we need to make sure there is a way for users to reach out to us with feedback, and we need to acknowledge that we hear them and are listening. Building reporting is one way to build a product that listens to its users.

The next category is sharing more control with users. We know that the more control and power we give our users, the better their experience. Instead of us playing God and trying to curb every problem centrally, people often know how to protect themselves best, which means we should build features and controls that users have at their disposal to tailor the experience to the way they like it. Things you can do here include the ability to block someone, or to mute someone in a conversation if that person is being racist or spreading propaganda of some sort. Giving more control back to the user helps them shape an experience they control and enjoy.

The third category is what we can do on our end to supplement both of these: making smart decisions, or making a smart product. That means tapping into the latest and greatest machine learning technology to build predictive models that proactively scan the platform and identify toxic content. If we have high confidence that a piece of content or a user is toxic, we can proactively remove them from the platform. This acts as a housekeeping element: we are constantly looking for bad content and bad actors, and we rely on machines to make those decisions instead of relying completely on humans. What this buys us is that it prevents exposure to bad content, content that would be detrimental to the person looking at or reviewing it. Instead of asking human moderators to sift through everything reported by users of the platform, we let machines interact with the bad content first. That avoids a lot of mental overload and spares the humans that exposure. So becoming smarter, leaning into technology to make the platform smart, is super important.

The last category is being transparent. Transparency is huge, because it's important to acknowledge that you're listening to your users. That means closing the feedback loop: letting the reporter know their concern has been heard, and letting the violator know that they did something wrong and what it was. If someone is a repeat bad actor, removing them from the platform is probably the right way to go. Making those decisions, and then communicating them transparently to users, is super important to building brand reputation. Your product will be known for how well it serves its users, and it will be known among other platforms as well.
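To make these four categories a bit more concrete, here is a minimal Python sketch of how a single user report might flow through such a system, combining listening (the report), smart decisions (model-score thresholds), and transparency (closing the loop with both parties). Everything here is a hypothetical assumption for illustration: the function names, the thresholds, and the stubbed classifier are not Reddit's actual implementation.

```python
from dataclasses import dataclass, field

# Hypothetical thresholds; real values would be tuned per policy area.
AUTO_ACTION_THRESHOLD = 0.95   # high-confidence violations are actioned by the machine
HUMAN_REVIEW_THRESHOLD = 0.50  # ambiguous cases go to human moderators

@dataclass
class Report:
    reporter_id: str
    author_id: str
    content_id: str
    reason: str  # e.g. "hate_speech", "doxxing", "self_harm"

@dataclass
class ModerationQueues:
    human_review: list = field(default_factory=list)

def score_toxicity(content_id: str) -> float:
    """Stand-in for a trained classifier returning P(content violates policy)."""
    return 0.7  # dummy value so the sketch runs end to end

def notify(user_id: str, message: str) -> None:
    """Stand-in for the messaging system that closes the feedback loop."""
    print(f"to {user_id}: {message}")

def handle_report(report: Report, queues: ModerationQueues) -> None:
    score = score_toxicity(report.content_id)
    if score >= AUTO_ACTION_THRESHOLD:
        # Machine is confident: remove content without exposing a human to it.
        notify(report.reporter_id, "Thanks for your report; the content was removed.")
        notify(report.author_id,
               f"Your content was removed for violating our {report.reason} policy.")
    elif score >= HUMAN_REVIEW_THRESHOLD:
        # Nuanced case: route to a human moderator for judgment.
        queues.human_review.append(report)
        notify(report.reporter_id, "Thanks for your report; our team is reviewing it.")
    else:
        # Likely benign, but still acknowledge the reporter.
        notify(report.reporter_id, "Thanks for your report; we reviewed it and took no action.")

queues = ModerationQueues()
handle_report(Report("u1", "u2", "c42", "hate_speech"), queues)
```

One design choice worth noting in this sketch: the reporter hears back in every branch, even when no action is taken, which is what keeps the feedback loop closed.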
So these are the four main attributes we need to consider when we approach building a safe social media platform. Now, speaking about safety in general: setting up a safe platform is definitely not easy. The work we need to do is sometimes perceived as working against growth, and that can be confusing. Our main drivers when growing a social media product are its daily active and monthly active users, and safety constraints will remove some users, the bad ones, which might show up as an overall decrease in daily or monthly actives. So when we think about our growth factors and growth variables, we need to make sure we build in safety constraints: user growth should be attributed to safe users, and content growth to safe content.

In the next slide I want to talk about how to set up your house, meaning the components needed to do this work on a daily basis. As a product manager, you would want to focus on four main areas. The first, maybe the biggest, is moderation and enforcement tools. You want very effective moderation tools, where everything reported by users of the platform can be effectively triaged and routed, but not everything should reach a person: we have to be smart about what we show to human reviewers versus what should be handled by automation. Things worthy of human review are the ones that go into the moderation and enforcement tool, and "worthy" means topics that are nuanced and hard to make a binary judgment on directly, cases that need a human perspective for evaluation. The main requirement these tools must satisfy is the ability to make a decision very quickly with the least possible contact with the bad information, whether it's text or media such as video or images. We know media can have high shock value, so we want to build features that reduce or hide that shock from moderators, for example by blurring media until a reviewer opts in. Text can be overloading and mentally heavy, so smart features like highlighting the relevant passages make these tools far more effective for a reviewer. So the first thing you need in your house is moderation and enforcement tools, something you use on a daily basis.

The second important tool set is investigative tools. Sometimes problems are larger than they seem, and they can be very hard to identify unless we have a way of investigating users or content. Investigation tools give you oversight of information, users, or content that is hard to detect and operates at large scale. A good example is when somebody is trying to set up a propaganda campaign, which could be a coordinated effort by many users. If a problem grows beyond one user and becomes something many users are contributing to, we want the ability to investigate it and surface that content, those users, and those problems at a higher level. The investigation tool allows you to do that, and that's the second thing we want in our wheelhouse.
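As one illustration of what an investigative tool might do under the hood, here is a toy Python sketch that groups near-identical posts and flags any cluster spanning many accounts, a weak but useful signal of a coordinated campaign. The normalization step and the five-account threshold are assumptions made up for this example; real systems combine far richer signals (timing, account networks, content embeddings).

```python
from collections import defaultdict

def normalize(text: str) -> str:
    """Crude normalization so trivially edited copies hash together."""
    return "".join(ch for ch in text.lower() if ch.isalnum())

def find_coordinated_clusters(posts, min_accounts=5):
    """posts: iterable of (user_id, text) pairs.

    Flags any normalized message posted by at least `min_accounts`
    distinct accounts and returns {fingerprint: set_of_users}.
    """
    by_fingerprint = defaultdict(set)
    for user_id, text in posts:
        by_fingerprint[normalize(text)].add(user_id)
    return {fp: users for fp, users in by_fingerprint.items()
            if len(users) >= min_accounts}

posts = [("u1", "Vote NO on measure X!"), ("u2", "vote no on Measure X"),
         ("u3", "VOTE NO ON MEASURE X!!"), ("u4", "vote no on measure x"),
         ("u5", "Vote no, on measure X."), ("u6", "lovely weather today")]
print(find_coordinated_clusters(posts, min_accounts=5))
```

An investigator would then pivot from a flagged cluster into account-level details, which is exactly the "surface problems at a higher level" ability described above.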
The third area is self-serve portals, mainly for law enforcement bodies and trusted third parties, giving them the ability to engage with us and get the information they need about threats or about copyright and intellectual property matters. We know we need to work with law enforcement agencies to address imminent, real-world threats to users, and to give them the information they need to make sure everyone stays safe.

The fourth area is machine learning models, which I touched on in an earlier slide. Machine learning models are, again, very important for letting machines do the dirty work: building smart predictive models that estimate how toxic a piece of content or a user is, and getting to that toxic content or user even before it is created, posted, or visible on the platform. Being as proactive as possible, having our machines block bad content and users outright or at least reduce their degree of visibility, is a very smart way to go about it. And on the other side, the more coverage our automated efforts have, the less burden remains on human reviewers, which protects our other, innocent users as well.
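To illustrate the proactive side, here is a minimal sketch of a tiered decision at post-creation time: block outright when the model is very confident, otherwise reduce visibility or just monitor. The thresholds and action names are illustrative assumptions, not any platform's real policy.

```python
def visibility_decision(toxicity: float) -> str:
    """Map a model score to an action before content becomes visible.

    Thresholds are made up for this sketch; in practice they are tuned
    per policy area and per surface, and paired with an appeals path.
    """
    if toxicity >= 0.97:
        return "block"      # never becomes visible; queued for audit
    if toxicity >= 0.80:
        return "downrank"   # visible, but demoted by the recommender
    if toxicity >= 0.50:
        return "monitor"    # published, flagged for sampling and review
    return "publish"

for score in (0.99, 0.85, 0.60, 0.10):
    print(score, "->", visibility_decision(score))
```

The key idea is the graded response: the less confident the model, the less intrusive the action, which keeps false positives from silently punishing innocent users.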
The third, and the main function, which is what I'm here for, is product management. The list here is pretty long, so it was very hard to pick only a few points, but overall the product management function is to work with all the other functions and make sure there is a vision and a strategy for how the space will grow, how we get smart at tackling all of these problems, and what the tools' feature set needs to look like to stay ahead of the problem. Tactically, some of the important pieces are requirement gathering, prioritization, building the roadmap with milestones and timelines, and, the biggest one, cross-functional engagement. The product management function is where you think smart about how to set up safety constraints, how to work with other feature teams to make sure any new feature is vetted against those constraints, and how to make sure growth is attributed to safe growth. At the end of the day, building a safe product ties into a healthy product that will last.

The fourth function is the systems function, I would say: engineering. This team is important for bringing things to reality, figuring out what the implementation of a tool set will look like if we're trying to tackle a problem, whether we need to build a machine learning model or a tool set to handle it, and the overall execution as well.

The fifth, and I think one of the most important, is data science. Data science is a very, very important complementary function because it helps you understand the problem in a quantified way. Opportunity sizing is one of the main areas addressed: how big is the problem we're tackling, how big is misinformation or disinformation as seen on the platform, and what is its impact? Then, when we go about tackling those problems, what is the success of what we built, and how do we experiment with different solution sets? A very good example: let's say we want to be smart about what we show on the feed. You might run an experiment where one group sees a feed in which we downrank bad content, and compare it with a control group where we don't change anything about what the recommendation engine shows. (There's a small sketch of how you might compare the two groups at the end of this talk.) This kind of experimentation can show that safety constraints actually contribute to higher growth: if a user's feed has less toxic content, they are more likely to engage with the content they see. So experimentation is big, and it helps us understand what to do and how to do it better.

With that, I would like to conclude this presentation, and I hope you got a little peek into the world of safety and how social media products can be made safe for their users. Again, if you want to reach out to me, my name is Parama Dutta. Thank you.
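Here is the promised sketch of the experiment comparison: a simple two-proportion z-test on engagement rates between the control feed and the feed with toxic content downranked. The numbers are made up and the helper is a toy; a real analysis would handle multiple metrics, novelty effects, and the unit of randomization.

```python
from statistics import NormalDist

def two_proportion_z(engaged_a, n_a, engaged_b, n_b):
    """Two-sided z-test for a difference in engagement rates.

    Group A: control feed; group B: feed with toxic content downranked.
    """
    p_a, p_b = engaged_a / n_a, engaged_b / n_b
    p_pool = (engaged_a + engaged_b) / (n_a + n_b)
    se = (p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return p_a, p_b, z, p_value

# Made-up numbers: engaged users out of users exposed, per group.
p_a, p_b, z, p = two_proportion_z(4_100, 10_000, 4_450, 10_000)
print(f"control={p_a:.1%} treatment={p_b:.1%} z={z:.2f} p={p:.4f}")
```

A significant lift in the treatment group would be the quantified version of the talk's claim: less toxic content in the feed, more engagement.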