Okay. Well, hello everyone. Thanks for coming to the sixth installment of our fall RSM speaker series. We have two left in the semester. Next week, we'll be talking about the political effects of social media with Josh Tucker and Talia Stroud of the US 2020 Election Study, and on the 13th we'll bring in Ethan Zuckerman to talk about the future of social media research in light of all the API changes and the trouble of accessing platform data from the outside. But today we're here to talk about platform power and intimate privacy with Jon Penney and Alexis Shore. Just a quick word of introduction. Alexis Shore is a PhD candidate in the Department of Emerging Media Studies at Boston University. Her research draws on frameworks from law, media, psychology, and communication to explore the respective roles of platform design, policy, and individual preference differences in decision making related to privacy and trust. And Jon is a legal scholar and social scientist based at Osgoode Hall Law School, York University, in Toronto. He's a faculty associate at BKC and also a former RSM visiting scholar. So we're really lucky to have them both here today. Please help me give them a warm welcome.

Thanks so much, everyone. So I'm Jon, and this is my colleague Alexis. It's great to be back at Harvard's Berkman Klein Center. It's a place that I really consider a home, a community, a family away from home, so it's great to be back. And we're really excited about this opportunity to talk about the work we've been doing on intimate privacy.

I want to start with a few quick acknowledgments. First, we want to acknowledge our colleague Danielle Citron, the project lead on this research. Most of you in this room will know her as a leading, world-renowned scholar on privacy and online abuse, and without her vision and innovative scholarship on intimate privacy this work would not have been possible. The research was also made possible by a grant from the Knight Foundation, for which we're very thankful, and by the Institute for Rebooting Social Media, where I was a visiting scholar last year. That community of scholars, and the broader community at the Berkman Klein Center, was really important to this project, both for the studies we had done, for thinking through the results we're going to talk about today, and for the study we designed and have carried out since. None of it would have been the same without them.

So today we're going to be talking about this concept of intimate privacy. Many of you are probably wondering: what is intimate privacy, and how is it different from broader notions of privacy? It's really Danielle's work on this concept that gives us a clear working definition. Essentially, intimate privacy concerns the social norms, behaviors, and expectations that manage the boundaries around our intimate lives. It also concerns the degree to which others have access to information about our bodies, our intimate thoughts and desires, sensitive information about our health and sexuality, and our closest personal relationships. That includes sensitive and intimate information, but also, in the context of this talk, intimate imagery and media as well. So when we think about intimate privacy, we're thinking about the most sensitive, most intimate information about you.
That is what intimate privacy is concerned with and what it aims to protect. While Danielle's book and recent scholarship have defined the concept, it has actually been with us for a long time. Go back to Samuel Warren and Louis Brandeis's famous 1890 article, "The Right to Privacy," which essentially created the modern field of privacy law. Legal scholars and privacy scholars often tell the background story behind that transformative piece: the motivation is usually tied to the wife of Samuel Warren, one of the co-authors, Mabel Bayard Warren. She was a member of high society in the Northeast, and the press, at a time when mass media was just emerging, was very interested in covering her life. So the story often told is that Samuel Warren was concerned with a right to privacy for her life and his own, and that's why he enlisted his friend Louis Brandeis to co-author the piece. But the reality is that one of the more important motivations was Samuel Warren's brother, Ned Warren, who at the time was embracing his own homosexuality, at a time when it was stigmatized and even criminalized in certain jurisdictions. Samuel Warren was aware of his brother's sexual orientation and was very concerned with protecting him and this intimate information about his life. That's actually one of the key motivations of the piece, and if you go back and reread the article through that lens, you can see it's a key aim and purpose behind it.

So why does intimate privacy matter? Some of the research we'll show you speaks to this, but generally it's tied to key ideas at the center of privacy: autonomy, the development of self-identity, and mutual respect. It's impossible for us to experiment with the self, to develop our identities and personas, without protection for this intimate information, without being able to share it with close personal others and significant partners knowing it won't be disclosed without our consent. If it's not protected, if we don't have that intimate space, then we won't have the freedom to develop our identities and autonomy and to choose our lives as we see fit. It's also tied to our closest relationships: relationships develop through mutual self-disclosure, intimate privacy is core to that process, and trust is at the center of it. So in this sense, it's a foundational privacy interest.

Today, however, intimate privacy is under threat unlike any other time in human history. Ubiquitous computing now pries into every aspect of our lives, intimate and otherwise. Spyware and stalkerware are proliferating. A recent survey by Malwarebytes of a thousand people in the US and Canada, just this October, found that 62% of people in the US monitor their partner's activities online. And that surveillance of intimate partners continues offline as well: half of participants admitted to tracking their partners offline too. So intimate privacy is a concern online as well as offline.
In our homes, smart appliances and the Internet of Things mean that even the places where we could engage in our most intimate activities and have our most intimate conversations with our significant others, in the privacy of our homes, are no longer protected from the prying ears and eyes of third parties. And on top of all of this, other, more flagrant antisocial behaviors are also on the rise, like the disclosure of intimate imagery and information without consent, so-called revenge porn. Here is some data from the UK Revenge Porn Helpline from 2023: incidents are up 31% compared to last year, reports are up 13% overall, and total reports from January to September this year topped 10,000. That's just the UK, not the States or elsewhere. And lastly, a point before I turn things over to my great colleague Alexis: the reality of intimate privacy violations is that they disproportionately impact women and other minorities. Sexual minorities, visible minorities, all of these groups are disproportionately victimized by intimate privacy violations.

Thank you, Jon. So, building on what Jon just described: intimate privacy has always been important, but today, especially after the overturning of Roe v. Wade in Dobbs, there's a new reality for intimate privacy, with a newfound and growing fear about data being accessed by unknown actors. Acknowledging this importance of intimate privacy and this growing need to protect it, we noticed a gap in the literature: there is not much, if any, empirical research on measures that could protect intimate privacy and encourage people to express themselves intimately online without the fear of negative consequences.

So we're going to talk about two projects today. The first is about how intimate privacy measures, whether imposed by policymakers or by platforms, impact intimate sharing, intimate expression, and trust. Just to parse sharing and expression quickly before we dive in: sharing follows the broader definition of intimate privacy that Danielle Citron proposes, including things like thoughts, desires, and fantasies shared with a trusted partner, not just the stereotypical image of intimate imagery; that's how we operationalized intimate sharing. Intimate expression focuses more concretely on socially available, public-facing intimate expression, the kind you may see people engage in on social media all the time. So there are two separate studies within this first project that capture these two variables and let us see the differences between them.

Grounding these studies is expressive law theory, which essentially states that law or policy creates a form of social proof that people follow: it provides guidance about what behaviors are socially acceptable, what should be condemned, and what sorts of behaviors are riskier than others. So there's evidence that law can have an expressive or salutary effect on speech, or in this case on intimate sharing, in that if law signals that violations of intimate privacy will be condemned, people may feel more comfortable and be more likely to express themselves online.
My colleagues Jon and Danielle have already found empirical evidence of this: cyber harassment law, for example, can shape social norms and encourage victims of cyber harassment to speak and engage online. So we wanted to understand how both law and platform policy would create this salutary effect on speech. In addition, this study wanted to understand how people appraise the risk of intimate privacy violations and how they would cope with them. This was grounded in protection motivation theory, which evaluates people's threat appraisal and coping appraisal in a particular situation and how they respond. The threat appraisal variables are perceived severity, how severe the risk of intimate engagement online is, and perceived vulnerability, how vulnerable it makes me. The coping appraisal variables are response efficacy, if someone were to invade my intimate privacy, would a third party protect me, and self-efficacy, would I have the tools to protect myself. So we wanted to understand how individuals perceive the risk of intimate privacy violations and their ability to cope with them, both before and after learning that there might be a protective intimate privacy measure.

Finally, we wanted to incorporate partner trust as a critical variable that may also influence intimate expression online. Trust has been studied in combination with protection motivation theory before, so it's a natural theoretical fit here, and as Danielle has described in her book and other work, trust is a critical component that needs to be established before people will express themselves intimately online.

The first study, as I described, was a longitudinal study of the impact of legal versus platform-based measures. Whether a measure was proposed by the government or by the platform, we wanted to see if there would be a difference in how people responded in terms of their intimate expression. We wanted to see which intimate privacy measures create a salutary effect on intimate sharing, and how perceptions of risk about this sort of sharing, and partner trust, change after knowledge of intimate privacy measures. And, as Jon mentioned, there is this disproportionate impact on minority groups, so we wanted to understand how these questions apply specifically to victims of online abuse and to women victims of online abuse. To describe the interventions further: in the legal intervention, we told participants there would be legal consequences for things like recording or sharing intimate imagery without consent, and that social media companies would be held accountable under the law. The platform intervention focused on what the platform would do, and we basically described existing privacy measures that platforms have taken. So everything was grounded in real policy or real platform interventions.

Study two focused on intimate expression. Whereas study one operationalized sharing broadly, thoughts and fantasies shared with a partner and a range of other things, study two focused on intimate expression on social media, accessible to wider circles.
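To make the protection motivation theory constructs concrete, here is a minimal sketch of how survey items might be grouped and scored into threat and coping appraisals. The item wordings, the 1-to-7 scale, and the simple averaging are illustrative assumptions, not the study's actual instrument:

```python
from dataclasses import dataclass
from statistics import mean

# Hypothetical 1-7 Likert items grouped by PMT construct; wordings are illustrative.
@dataclass
class PMTResponse:
    severity: list[float]           # "A violation of my intimate privacy would be serious"
    vulnerability: list[float]      # "I am likely to experience such a violation"
    response_efficacy: list[float]  # "A third party would protect me if it happened"
    self_efficacy: list[float]      # "I have the tools to protect myself"

def pmt_scores(r: PMTResponse) -> dict[str, float]:
    """Average each construct's items into the two PMT appraisal scores."""
    return {
        "threat_appraisal": mean(r.severity + r.vulnerability),
        "coping_appraisal": mean(r.response_efficacy + r.self_efficacy),
    }

print(pmt_scores(PMTResponse([6, 7], [4, 5], [3, 2], [5, 4])))
```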
We used the same sorts of interventions to see how they would affect that kind of expression. Study two was not a longitudinal study: participants came to the experiment, saw the intervention, and we asked them how their expression would change. And so I'll pass to Jon to go over the key findings.

Great. Going back to the key gaps in the literature that Alexis mentioned: broadly speaking, there's been very little work done on intimate privacy, whether in terms of platform-based measures, legal measures, or theorizing it. And because very little actual protective work has been done by platforms, in law, or in regulation, there's been very little study of the impacts of such measures. That's really one of the key aims of this work: to understand those impacts in a comparative sense.

In study one, you can see the size of our representative US-based sample on the slide, here are some key findings. Interestingly, there was no difference in perceptions of risk or in intimate sharing between the government legal measures, the platform-based measures, or a description that combined the two; across the interventions, the impacts were essentially the same. Partner trust and response efficacy, which in protection motivation theory is the perception that a third party will take steps to address the threat, turned out to be a key part of the story: they predicted greater intimate sharing, especially post-intervention. After the intervention, trust became a stronger predictor, and the predictive power of these variables was strongest for female victims of online abuse. One last really interesting finding from study one, which tracks the broader concern that intimate privacy violations disproportionately impact marginalized and minority communities: the interventions had a significant post-intervention impact on specific minority groups, Hispanic participants generally but especially women, Asian and Pacific Islander participants, and African American participants who were prior victims of online abuse. So the interventions had a real salutary impact. Remember, this is not about deterring intimate privacy violations; it's about putting protections in place and seeing how that affects participants' sharing.

Study two, as Alexis nicely set out, focused on intimate expression, more public-facing expression, and there were some interesting findings there too. Compared to the government measures, participants exposed to platform-based measures had higher levels of intimate online expression. So in study two, when it came to intimate expression online, platform measures had a greater impact. Second, as we've seen throughout, trust is really important: trust in technology and trust in government were positive predictors of intimate expression. Also, tracking what we know about online abuse and the social norms and stigmatization around it, female participants were significantly less likely to engage in intimate online expression. And we found some other interesting things: for example, participants with more experience of online abuse reported higher levels of intimate expression. There are different ways to interpret that finding. One is that some people feel more free and less concerned about the risks of intimate expression online, and as a result become targets of online abuse and intimate privacy violations; at the same time, those who engage in it may simply have a lower sense of risk.
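As an illustration of how that first null result could be checked, here is a minimal sketch of a pre/post comparison across intervention conditions on simulated data. The condition names, sample size, scale, and effect sizes are all assumptions for illustration, not the study's actual data or analysis:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate a pre/post intervention design with three conditions.
rng = np.random.default_rng(42)
n = 300
cond = rng.choice(["legal", "platform", "combined"], size=n)
pre = rng.normal(4.0, 1.0, size=n)               # baseline intimate-sharing score (1-7 scale)
post = pre + 0.2 + rng.normal(0, 0.5, size=n)    # same small shift regardless of condition

df = pd.DataFrame({"cond": cond, "change": post - pre})

# Regress the pre-to-post change on condition; if the interventions don't differ,
# the condition coefficients should be near zero and non-significant.
model = smf.ols("change ~ C(cond)", data=df).fit()
print(model.summary().tables[1])
```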
And lastly, although this was a smaller subsample within the broader population, we found that participants who identified as non-binary were significantly less likely to trust their partners across the different interventions. I think that shows you again how minority groups, in this case a sexual and gender minority, are disproportionately impacted by the risks and threats around intimate privacy violations. Let me hand it back to Alexis to talk about our second project, and then we'll talk a little about key takeaways.

Thank you, Jon, for going over those key findings. Our second project focused specifically on period tracking applications, which became a very popular topic of discussion following the overturning of Roe v. Wade and the intimate privacy concerns that correspond with that. For those of you not familiar, this is an example of a period tracking application. Basically, it allows women to track their menstrual cycles, from simple things like knowing when your period will start and end, to tracking the emotional and physical symptoms that correspond with your cycle. Some period tracking applications also have a social component where you can engage with other women to understand your symptoms better, and they have educational resources. All this to say, these tools are used by very many women. FemTech as an industry is projected to be worth about $1.1 billion in 2024. This is a huge industry, and these applications are really important to women in understanding their bodies over the course of the month.

But when Roe was overturned, abortion was banned in states across the country, and people started talking about how the data collected by period tracking applications would fare in a post-Roe climate: would this data be accessible to law enforcement? Period tracking applications also often collect location data, so there was concern about law enforcement being able to connect location data with reproductive health data. There were articles galore about the leading period tracking applications, and researchers started studying their privacy policies, finding that the majority collect and share data with third parties. So there are all these concerns about period tracking applications, particularly post-Dobbs. The central question of this research project was: how can we build a period tracking app that respects intimate privacy, particularly post-Dobbs? As I mentioned, these apps are really important, so the answer to "should I delete my period tracking application?" should not have to be yes. We need to understand how to design these applications in a way that doesn't end up violating women's intimate privacy. So in approaching this project: was it policy we should lead with? Is it trust, as we found in the previous study? Or is it aspects of the design? This research was guided by the literature on technological affordances.
Affordances are not features of the design per se; we're not talking about colors or buttons. We're talking about subjective perceptions of the possibilities for action within an application: aspects of the application that make me feel I can participate in it or behave in a certain way. Privacy scholars in particular have noted that affordances are critical to information management decisions. It's not just personality traits or the like that influence people's behaviors; it's the affordances of the design. We focused on two affordances that we believed would allow people to feel safer and able to disclose to a period tracking application: anonymity, having no connection to one's personal identity, and persistence, or rather the lack of it, meaning the data would not persist longer than needed and would eventually be deleted. Both of these have been shown in the literature to support self-disclosure online. And as we've mentioned, trust is so important here, particularly post-Dobbs and especially with period tracking applications. Without trust, the design might not matter: if people don't trust that the application will keep their data safe, nothing else helps. And we have literature to support the idea that trust may actually mediate the relationship between technological affordances like anonymity and persistence and eventual disclosure to a period tracking application.

In our experiment, we created a hypothetical new period tracking application and manipulated different platform policy reforms. Similar to the previous study, these mimicked real policy: a right to be forgotten, which would remove the persistence of data; de-identification, which would anonymize data; and a rule modeled on HIPAA, under which law enforcement would not be able to access the data and it could not be connected to third parties. There was a control group as well. Following the intervention, we quantitatively measured perceptions of the affordances and various control variables, such as privacy risk, previous disclosure habits, and previous use of period tracking applications. Finally, we wanted to understand not only intended intimate disclosure to a period tracking application, but also whether people would actually use it. Since we framed it as a hypothetical period tracking app, at the end of the experiment we asked: would you like a link to download this app? That's how we got at revealed behavior around intended usage.

Some key findings, which I'll go over quickly. There were no significant differences across the manipulated policies on any of the measured variables. This is very similar to the previous project, where the policy manipulations didn't change how people ended up disclosing. We found that trust fully mediated what was a positive relationship between perceptions of anonymity and intimate disclosure, meaning you needed trust for the relationship between anonymity and intimate disclosure to hold. We also found that trust partially mediated what was a negative relationship between perceptions of persistence and intimate disclosure. And finally, another key finding: the probability of deciding to use CycleTrack, the period tracking application, was 1.95 times higher with increased perceptions of trust in CycleTrack.
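As a rough sketch of how findings like these might be estimated, here is a classic regression-based mediation check plus a logistic model of the usage decision, run on simulated data. The variable names, effect sizes, and modeling choices are assumptions for illustration; the actual study may well use a different procedure, such as bootstrapped indirect effects:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 600

# Simulated constructs: perceived anonymity drives trust, trust drives disclosure.
anonymity = rng.normal(size=n)
trust = 0.6 * anonymity + rng.normal(size=n)
disclosure = 0.5 * trust + rng.normal(size=n)
df = pd.DataFrame({"anonymity": anonymity, "trust": trust, "disclosure": disclosure})

# Classic mediation steps: total effect (c), a-path, then direct effect (c') with
# the mediator included. Full mediation shows up as c' shrinking toward zero.
c_total = smf.ols("disclosure ~ anonymity", df).fit().params["anonymity"]
a_path = smf.ols("trust ~ anonymity", df).fit().params["anonymity"]
full = smf.ols("disclosure ~ anonymity + trust", df).fit()
print(f"total c = {c_total:.3f}, direct c' = {full.params['anonymity']:.3f}, "
      f"indirect a*b = {a_path * full.params['trust']:.3f}")

# Usage decision: logistic regression of downloading the app on trust.
# exp(beta) is the odds ratio; a figure like 1.95 would mean the odds of
# choosing to use the app roughly double per unit increase in trust.
p = 1 / (1 + np.exp(-(-0.5 + np.log(1.95) * trust)))
df["downloaded"] = rng.binomial(1, p)
logit = smf.logit("downloaded ~ trust", df).fit(disp=False)
print(f"odds ratio for trust: {np.exp(logit.params['trust']):.2f}")
```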
As I mentioned, we also collected some qualitative responses to understand people better. We didn't ask about Dobbs directly, but we asked, for example: how has your period tracking app use changed in the past year? And we actually got responses relating to the Dobbs decision. A few I want to read, because I think they're so interesting. One: "Since Roe v. Wade was overturned and women's reproductive rights have been at risk in many parts of the country, I've made sure to only use my period app in anonymous or offline mode." Another: "Sometimes I don't put in all the days of my period, or something like that, on purpose," so not really wanting the app to have accurate information. Another: "I have many privacy concerns, but I need the app, so I still log pretty much the same information as before." So we're seeing a range of reflections. And this one I found particularly interesting, because it indicates that even if a period tracking app were to implement all these protections, people don't really trust that they're real: "In the current climate, I don't trust that this kind of information is not actually being accessed and shared online." Just really interesting quotes from our participants. Another qualitative question we asked was: what would be the ideal period tracker? A lot of the responses indicated wanting it stripped down as much as possible: offline, anonymous, with options to delete stored data. So the ideal period tracking app that participants described really reflected the affordances we were interested in. They also mentioned wanting educational resources, whether from other women or from doctors, so they could better understand their cycle.

Right. So I'll begin speaking to some of the key takeaways, and Alexis can supplement them based on the second study. I'm sure some of you will draw different takeaways from these findings, but the first headline for us was platform power. When it came to intimate privacy and the impact of the different measures, either there was no difference in impact between the government legal and regulatory measures and the platform-based measures, or the platform-based measures had a greater effect. One way of thinking about this: the front line for protecting intimate privacy, for having the most positive impact on users, is the platforms and their design, and regulations and laws tailored toward that, rather than standalone regulation sitting off to one side. If you want to reach users, it's going to be through the platforms. So maybe this finding actually justifies the Institute for Rebooting Social Media: the focus should be on social media platforms and on dealing with these problems there. A second key takeaway is the central importance of trust in this story. Trust predicted intimate sharing and expression, and it was also a key outcome of some of the intimate privacy protections we looked at, in particular, as we saw, for certain minorities.
On the importance of trust in both intimate sharing and expression, the literature tells us that trust is not something that happens overnight, and that's part of why we saw some limits within the span of the study. If you build trust among users over time, trust in their partners, trust within social groups, and trust in platforms, then intimate expression and sharing will follow once that trust is built. So trust should be the target for a lot of policies and design affordances going forward. Third, the form of intimate expression matters. For more social or public-facing intimate expression, beyond one-to-one partner sharing, platforms were more important, but there was greater stigma attached to that kind of expression, which is not surprising. That differs from our findings in the first study, which focused on intimate partner sharing and produced findings that surprised us as well. And lastly, another key takeaway is how intimate privacy protections particularly benefit women, minority groups, and users who have previously experienced online abuse. That tracks, of course, with the fact that these same groups, as reflected in our findings, are disproportionately impacted by the stigma and the perceived risks and threats around intimate sharing and expression.

One thing to add to this: a common response to intimate privacy concerns is, just don't share intimate information online. Don't put it online, don't use a period tracking app, don't do any of that. What our work helps show is that that shouldn't be the answer. People use digital tools to talk, to express themselves, to develop, to learn. We need these tools. So part of our work is attempting to start the conversation and develop solutions for how to build platforms with intimate privacy in mind.

Yeah, and in terms of public policy, coming back to the gaps not just in the literature but in research on impact: there are statutes across a number of states that attempt to address a certain kind of intimate privacy violation, revenge porn. The challenge with laws and regulations dealing with intimate privacy and expression is, first, political constraints: law enforcement and politicians have shown little interest in protecting it. Where it has been done, there are also legal and constitutional constraints: these laws are often challenged on constitutional grounds, First Amendment grounds. Now, revenge porn statutes in a number of states have recently been upheld against First Amendment challenges, including most recently in Minnesota. So there are these legal and policy constraints. But what our findings show is that yes, these laws are important, a critical part of the story, but so are policies targeting social media platforms. If you want a positive impact on users, to protect users, it's going to come through design affordances and through law reform aimed at social media. And yes, something Danielle has been advocating for is, at long last, Section 230 reform, and we think our findings show the wisdom of that as well.
And lastly, in terms of additional policy recommendations: when we think about actual victims, we see that intimate privacy protections especially help the women and marginalized communities who are disproportionately impacted. What that means is that those communities should also be at the center of design efforts and of legal and policy reform. We need to center these smaller minority and marginalized communities; they need to be part of the reform process and the design process. We need those voices, and that has not been the case, not in the few reforms we've seen across the country, and certainly not on the design side. That's something we really aimed to do with the second study: to center them. And one more point, thinking long term, which Alexis raised: since trust is a key part of the story, it has to be built and maintained over time. You can pass new measures and introduce new design affordances, but they have to be enforced and maintained; they can't be one-offs. Over the long run, you'll have greater protections, and in the end, users who are better protected, more engaged, and feeling safer.

Some other things we see going forward: at the beginning we mentioned that there's really a lack of empirical research supporting this sort of work, so we need more empirical research showing how to protect intimate privacy online. Our longitudinal study ran only two weeks, and it was not based on a real platform. It would be great to substantiate our interventions and recommendations with longitudinal testing over an even longer period, maybe through field testing an actual product; seeing how actual behaviors change would be really significant. In terms of design, we should work out how to stop violations before they even happen, prioritizing safety and privacy as default settings, without making people jump through hoops or chill their speech to get the experience they want online. And as I mentioned before, I don't think social media, which many of you are studying, can fulfill its real potential without protecting intimate privacy. Protecting intimate privacy is a positive thing for social media, because people will feel safer, want to engage, and be able to connect better with their networks online. So prioritizing intimate privacy on platforms is beneficial for individual users, particularly the minority users who are impacted most, and ultimately, I think, for the platforms too.

Absolutely. I think we can end there. We very much look forward to your questions; we're going to be continuing this research and thinking through what our next study might be, so we welcome thoughts on these findings and anything else.

Before opening it up to the room, a quick question from online, which I know Danielle has a lot of thoughts on, but we'd love both of yours as well: would you recommend any changes to Section 230 to address some of the harms you've been studying?

Yeah, I can field that one. Yes, absolutely, and I would point to the changes that Danielle has been advocating,
which essentially impose some additional, clear responsibilities on platforms to better deal with a range of content. Based on our findings, even a narrower reform would focus on the most harmful kinds of content, content that relates to our intimate lives: intimate imagery, and the most personal, sensitive, intimate information. There's a way to modestly modify Section 230 that still protects platforms from suit in certain kinds of cases while, at long last, imposing some additional responsibilities to deal with intimate content. That's going to better protect users in the long run, and, as Alexis nicely put it, that's better for the platform and clearly better for the users. It's going to lead to a broader cross-section of users feeling free to speak and engage. That's a finding we see here, and a finding Danielle and I reported in our 2019 study of cyber harassment laws, which are often critiqued as potentially having a chilling effect. We saw the opposite in that study, and the opposite here: these kinds of laws don't chill, and especially for users disproportionately victimized by these kinds of violations, they make people more willing to speak and engage. Section 230 is often defended as key to speech and expression; in fact, with some modest reforms, I think we would see a broader, richer, more diverse conversation, and users who are more protected and more engaged.

This was a very interesting and informative talk. I'm not from law, I'm from biology, so I have a lay interest in the subject. When you mention minorities in this country, you should also consider that some of these minorities are majorities elsewhere. For example, Hindus are a minority in this country but a large majority in India, and whenever it was required, they have been able to impose restrictions on social media, opposed by the minorities, but with the majority expressing its will. So the government there has been successful in imposing such restrictions on social media. Look at the graduate students who come from China: they do not know anything about Tiananmen Square, even though people in this country know about it. So I'm wondering, unlike gun control, where restrictions can't seem to get passed: is social media at the same level of fundamental right, or can it be restricted?

I can take a shot at that. It's a great question, and you're right, we have to be careful whenever we're thinking through new kinds of restrictions that can be abused, and I think you're speaking to the BJP government in India. The kinds of changes that Danielle talks about, and that I think our findings support, are pretty modest, tailored, and specific to certain kinds of more harmful content, the most sensitive kinds of intimate information, protecting those fundamental privacy interests we talked about at the outset. Certainly social media implicates fundamental rights: expression, engagement, association, all of these. But on the other side, what the privacy scholarship and research show is that you need a foundation of privacy, intellectual privacy and intimate privacy, for all of those interests, and you can't have that with an overreaching and increasingly authoritarian government pressing in, as in India.
So you're absolutely right that we have to be careful with the reforms, but I think we can do them in a limited way. And right now there's almost a consensus across the political spectrum about the need for more accountability for platforms. The left and the right may disagree about what that looks like in the end, but there's probably more agreement than at any point on the need for more platform accountability. Maybe sometimes with consensus, less is more; when it comes to intimate privacy, I think the greater the protection, the more you're going to get out of it. There are political constraints and legal constraints; Danielle is well aware of them and discusses them in her work, and they're a reality for our implications as well. But I think reform can be tailored to avoid some of those abuses.

Yeah, just to add a little to that: the platform reforms Jon mentioned would be modest, like some of the ones we're suggesting from our study. They wouldn't require the platform to feel restricted or to restrict use; they're simply adjustments to some design affordances so that users can feel safer expressing themselves. I don't think the reforms would alter the dynamics of platforms in a negative way. All they're really doing is prioritizing the user without taking too much away from the platform. There's a balance that discussions with platform designers could find around what users really want, because ultimately users need these platforms to express themselves. There's so much happening online, more than offline, that people rely on these platforms. So eventually platforms will have to step toward what users want in order to find that balance. The existing platforms will have to, or new platforms will come along, do it, and take the old platforms' place.

And maybe one final point, which also speaks to the first question that came from online, Nick. Part of this is that a lot of platforms are willing to be more responsible; increasingly that's the case. But there are going to be some platforms trafficking in intimate information and imagery, sites that literally monetize this material, and you can't reach those unless you have some kind of legal or policy reform. The language Danielle proposes, which I think makes a lot of sense, simply requires reasonable steps, reasonable measures, to deal with this. That's a standard familiar to us in all areas of law. It's not going to create uncertainty and blow up the social media ecosystem; we all have duties imposed by tort law to take reasonable steps. That's all that's being proposed here, to deal with a specific kind of content.

Awesome. Two more questions from online; we'll hand it over to Dylan. One: is it possible to access the studies you're talking about, are you planning to publish them soon, or are there preprints available? And then a clarification: in the key takeaways you mentioned that the platform measures have an equal or greater impact than government measures. Could you clarify whether that impact refers to the likelihood of a user disclosing intimate information, or to the level of privacy protection provided to a user? On the former question, you can take that one.
Oh, sorry, what was the first question again? Access to the paper. Oh, access to the paper, yes, sorry. We have a paper based on the first two studies that's currently under peer review, so we hope to have that available soon, and we'll put the data and everything up on GitHub as part of that process. And we're in the process now of deciding where we might place the second project, the third study. So all of it is hopefully forthcoming soon.

On the second question, about platform versus government measures: the impact refers to disclosure. Participants were more likely to express themselves intimately online in response to the measure taken by the platform than to the one taken by the government. They felt more protected, and felt they could express themselves more, as a result of the platform taking action as compared to the government taking action.

Thanks so much, folks. A question I have is around trust; it seems like that's a pretty big part of a lot of this. I'm wondering how you square that need, maybe desire, for trust with the general feeling that there's an erosion of trust in institutions, both in the United States and globally. And to that point, have you seen or heard of studies where folks are trying to test how to build trust, particularly with social media platforms? There's a point to be made that platforms generally care about their users, but there's also this dichotomy of platforms using the users as, well, their mine, or whatever other metaphor you want for it. So I'm wondering if you've thought about that.

It's a great question, and a really tough one. That's the question: how do you build trust when people just so deeply don't trust the platforms and the government? We saw that one quote, and there were several other participants who expressed similar sentiments: no matter what the platform says, there's still this inherent "I don't trust you," because of the current political climate. And so that's where we start thinking about how to embed trust in policy, how to have platforms express loyalty to users through policy, as required by policy. I think it's kind of a trickle-down effect: even if we're developing trust at the platform level, there may still be a need for trust at the policy level, and that's only going to come from more policymaking.

Yeah, and what our findings show, particularly from study one, is that we're dealing with a different kind of trust here, not necessarily trust in the platforms or the social media companies themselves, but trust in the sense of trusting enough to share this information with your partner via social media and other platforms. That was impacted by the interventions, even though there was no difference between the government and platform-based measures. So on that count we saw results: trust increased. The interventions hadn't yet had an immediate impact on actual sharing and expression, but they impacted trust, which is key to those over the long run. So some of what we tested is, I think, part of the story.
But your question is quite right that it's about building new affordances into the platforms, like the ones we tested, and that will build trust over time. It's not going to be an easy solution; it will have to be comprehensive, for the very reason your question raises: trust in institutions and in platforms is at its lowest point, for good reason, given how that trust has been violated, how our data has been misappropriated, all the intimate privacy violations we talked about that motivated this study. That's all part of the challenge. It's going to take industry being more responsible, users being educated, maybe the general public being educated, and lawmakers and policymakers being more responsible over the long run. There's no simple, quick solution.

One quick note to add on trust: in addition to trusting the platform, and Jon kind of mentioned this, there's trusting the people on the platforms too. Platforms want users to engage with each other, and it's hard to build that trust interpersonally when there's a lack of trust in the greater body they're operating within. But it's also important to recognize that, in addition to institutional violations of intimate privacy, these violations often happen at the interpersonal level too. That speaks to the norms established on platforms, where these sorts of violations have become a normal thing that can happen. So building trust at the platform level, and platforms designing themselves so that violating each other interpersonally is no longer normal or acceptable, will trickle down to these interpersonal intimate privacy violations as well.

Absolutely. A key part of the theoretical framework for this work is expressive law theory, which Alexis described nicely earlier, a body of literature that examines this other function of law. We think of law as deterring conduct, but there's a lot of empirical work on its expressive function: implementing protective measures, in law, sends a message, and the research shows that sending these symbolic messages shapes social norms and changes behavior in the long term. One of the interesting outcomes of our studies is that platforms also have an expressive impact. Protective measures deter antisocial behavior, and that's a positive, but our research is also concerned with how implementing these measures sends a message about what kinds of engagement and sharing are valued and will be protected, and a message of disapproval about behavior that puts others at risk. In the long run it's not just about building trust; it's about changing the norms and stigma around this kind of sharing and changing the norms around intimate privacy violations, which is the ultimate goal.

Thanks, this was really interesting. Going off of Dylan's question, I suppose: for me, the track following what you presented is, okay, trust is good, trust equals more engagement, and engagement equals a better bottom line for companies. How do you square surveillance capitalism with privacy and increasing trust? I can see that your research is probably very interesting for platforms as well.
I'm not putting this very well, but there's something there, a tension, that I don't quite understand how you get past, and maybe you can give me specific examples of how you've worked through it. I think there's a difference, given the historical inequalities that come with the public/private divide, especially around gender. But do we really want to be nudged to share more of this sort of thing with these platforms? Do we really want to accept that they are the new public sphere and kind of ignore their bottom line? For me, there's a tension here that I'm not quite getting past.

Sure, it's a great question. Certainly the aim of our research is not to aid surveillance capitalism and help companies monetize, although one of the messages we do send is that implementing these protections isn't contrary to the interests of platforms, because that's often the pushback we get from the industry side. The idea is that people increasingly share and increasingly express themselves today. It's a reality that people use these platforms to connect, and they're going to be sharing intimate information and expressing themselves. It's not going to stop; TikTok is not going to get banned. So if that's the reality, and there's great stigma and perceived threat and risk, especially for marginalized and minority communities, the question becomes: can we reduce the stigma attached to this activity, which is key to mutual self-disclosure, autonomy, self-identity, and expression, all of which now happens via these platforms? And how can we make it safer for different communities? To me, in the end, that's not about helping platforms; it's about helping privacy and helping those communities. The aim is not necessarily to nudge people to share more. The idea is that people are sharing more, so how can we make that safer and more protected from the disproportionate violations that are happening, and in the long term change the norms so there's less abuse and less stalking? But I appreciate that there is that reality, that we're making a pitch to platforms, and if people share more, you might say that feeds the surveillance capitalism machine. Our hope is that these protections are not feeding the machine, but helping people.

Yeah, I think you put it well. I'll just add that some of the design recommendations are also trying to pull back on the level of surveillance capitalism currently happening at the platform level. With the example of the period tracking application, the idea would be to de-identify people and to allow data to be deleted after a certain period of time, so there isn't as much of the surveillance capitalism we're so concerned about. I completely agree with what Jon said: this is happening, so how do we make it as good as it can be? It's harder to envision from an adult perspective, where all this happened as we were growing up, but our children are growing up in a world where this is what it is, and this is the public sphere. So how can we make it safer for our children?
Because there really is no option for them to just disengage, and that's why we have to keep talking with the platforms about how to design in the best way for them.

Yeah, and the findings from the design study bear this out. Some people changed their behavior post-Dobbs; some participants said, yes, I'm not using my period tracking app anymore. We have an additional study we're still working on that looks at post-Dobbs impacts, certain kinds of chilling effects, and those findings track with this. But there's also a clear indication that many users, although they have greater privacy concerns, say: I just need to use this app, it's really helpful. So if that's the reality, how can we have policies and measures, perhaps mandated by law, to ensure that the data is not shared with third parties, including law enforcement and other companies, that the data is anonymized, and that it doesn't persist beyond a certain period of time? With those kinds of measures, you can see the concrete benefit to users who can't disconnect and aren't going to stop using these apps. They're going to use them, and we can make them more protected and safe with these kinds of changes.

Hi, thank you. There's been a proliferation of apps tracking mental health status and also treatment, including live therapy and the like, and there have already been abuses by the companies running them. I just wondered whether you looked at that aspect in particular, and if so, whether there was anything unique about it within this realm, or whether it followed the same pattern you outlined in your talk.

So this study focused predominantly on period tracking applications, but we did look a little at the digital health industry in general, and at how people have become really comfortable sharing information with these digital health providers and companies, as opposed to going to their doctor, and that data is out there. We haven't looked deeply into the differences between sharing mental health data and menstrual or reproductive health data, but I'm sure there are similarities in the patterns, and probably differences too. That's a really interesting comparator that would be useful to explore.

Yeah, that could be another next step. We began with period tracking because, post-Dobbs, given the concerns expressed immediately following the decision, it seemed salient to look at that kind of app, and because there had been very little done: a little on post-Dobbs privacy perceptions, but nothing concrete. So we thought it would be salient and timely. But a natural next study would be to look at these design affordances with health apps more broadly, because that also overlaps with the core of intimate privacy, which is not just sexual privacy or one kind of application; it's broader than that, and health information goes to the core of intimate privacy.

We're up against the clock, but there are a few last online questions, if you two will take them. Sure. Awesome, okay. First: are there any period tracking apps that meet the requirements you recommend? Another guest is wondering about your ideas on privacy designation, especially related to your topic. And then perhaps a good question to end on.
On the first question, do any period tracking apps exist that meet all these requirements: no. There have been a few studies evaluating the policies and practices of a plethora of period tracking applications, and all of them find apps sharing more than they need to, or collecting more than they need to. So as of right now, there are no ideal period tracking apps meeting the requirements that the participants in our study would want. There's definitely room for FemTech to evolve in a way that starts to prioritize the reality of the world post-Dobbs.

The second question had to do with privacy designations. I don't know if that was a typo for designs; could you repeat it? Just your thoughts on privacy designation, especially on your topic. Okay, I'll take it as privacy designs, since I'm not sure what privacy designations would be, and maybe Alexis can add something. That speaks to the affordances we tested in the second project, the third study we talked about here. What we tried to do is match these affordances to specific policy proposals you typically see debated: the right to be forgotten, which is about preventing persistence of data over time; anonymization; and, third, the new HIPAA rule proposed by HHS, which would constrain how certain kinds of health information could be made available to law enforcement. We found that the first two affordances had the most significant impact on trust, with trust mediating the effect on the intimate disclosure we tested in that study. So even though more research needs to be done, at least for CycleTrack, the period tracking app we created hypothetically, in imagination, anonymization and lack of persistence were important design affordances, and they're what we would recommend as a starting point for intimate privacy protection.

Yeah, and like Jon said, it's just a starting point. Obviously we can't test all of the privacy-enhancing affordances in one study, but given the power of some of these design affordances and the mediating role of trust, this might pave the way to testing more theoretically supported privacy-enhancing affordances.
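For a sense of what those two affordances might look like at the storage layer, here is a minimal sketch of a local-only store keyed by a random identifier and purged on a retention window. Everything here, the schema, the 90-day window, the function names, is a hypothetical illustration, not a description of any real app:

```python
import sqlite3
import time
import uuid

RETENTION_DAYS = 90  # hypothetical retention window; a real app might let users choose

# Anonymity: entries are keyed by a random install ID, never a name or account.
# Non-persistence: anything older than the retention window is purged.
db = sqlite3.connect("cycle_local.db")  # local-only storage, nothing leaves the device
db.execute("""CREATE TABLE IF NOT EXISTS entries (
    install_id TEXT NOT NULL,
    logged_at  REAL NOT NULL,
    entry      TEXT NOT NULL)""")

def new_install_id() -> str:
    """A random identifier with no link to personal identity."""
    return uuid.uuid4().hex

def log_entry(install_id: str, entry: str) -> None:
    db.execute("INSERT INTO entries VALUES (?, ?, ?)",
               (install_id, time.time(), entry))
    db.commit()

def purge_expired() -> int:
    """Enforce the lack-of-persistence affordance: delete expired rows."""
    cutoff = time.time() - RETENTION_DAYS * 86400
    cur = db.execute("DELETE FROM entries WHERE logged_at < ?", (cutoff,))
    db.commit()
    return cur.rowcount  # number of rows deleted

# Example: log a symptom, then purge on every app launch.
uid = new_install_id()
log_entry(uid, "cramps")
print(purge_expired(), "expired entries removed")
```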
Absolutely. The last question, of course, is a great one, but a tough one, because the reality is, and Danielle literally just came out with a new piece in the Yale Law Journal on this, that when it comes to online harms and online abuse, there is still a lack of action among governments and among industry. Although you do see some optimism, some data points for hope in certain places. Through the advocacy of groups like CCRI, the Cyber Civil Rights Initiative, whose advisory board I'm on and whose executive Danielle is on, which litigates some of these claims and also advocates for law reform in the states, and which actually designed the model statute behind a lot of the state revenge porn statutes, those laws have been adopted and have withstood First Amendment scrutiny in the courts so far, including, as I mentioned, most recently in Minnesota. Part of those efforts is public education campaigns. Part is educating policymakers that these kinds of protections have value for the general public, and in particular for disproportionately affected groups, and that they don't chill speech; we saw no evidence of that. It's not about chilling engagement or chilling speech, which is usually the ground on which these measures are challenged and opposed politically. In fact, they have a salutary effect: as we found in this study, for women they can increase trust, which over the long term leads to greater expression and sharing for groups disproportionately targeted by intimate privacy violations. So: public education campaigns, along with educating law enforcement and other enforcement agencies, who often, even when laws are enacted, aren't doing enough; part of this is ensuring those communities understand the threat and the harm and take action. And lastly, education for users as well, so they can take steps to protect themselves and understand the threats out there. For example, we found in study two that users who are more often victimized by online abuse also showed greater engagement with intimate expression. There you see the clear risk: if you engage in this public-facing expression, you may be targeted with this kind of abuse. People need to understand these risks better, for privacy generally and here in particular. And users can start voting with their feet: post-Dobbs there was some switching of period tracking apps, with users seeking out apps with better privacy protections, and post-Snowden, at least for a period of time, people chose browsers with greater anonymity. When there's media coverage of these kinds of threats and harms, people do shift, they vote with their feet, and when they do, platforms will behave more responsibly. But we need to push lawmakers too.

Awesome. Well, thank you, Alexis; thank you, Jon. Thanks to our in-person audience, and thanks for all the great questions from online. We're here next week and the week after; we hope to see you then. Thanks so much, everyone.