All right, we're gonna get started. Thank you to everyone who's joining in person and online for the first RSM speaker series installment of the semester. We're really excited to welcome Gianluca Stringhini from Boston University. But before Gianluca takes the floor, he's going to be introduced by Joe Walther, one of our visiting scholars this term. Joe is the Bertelsen Presidential Chair in Technology and Society at UCSB. He's also a Distinguished Professor of Communication and the former director of the Center for Information Technology and Society. And with that, Joe.

Thank you, Nick. And thank you also, Nick, for arranging this speaker series and for making it possible for Gianluca to come today. I'm a fan of Gianluca's work. He does computer science work that I can understand, and that's saying a lot. Gianluca and his colleagues have been looking at a lot of the things that plague us online: hate speech, misinformation, conspiracy theories, and so forth. It's not hard to find an instance of any of these things. But Gianluca and his colleagues have uncovered the process behind them: how the actors responsible move across domains and across platforms, and what some of the content is that they move and coordinate in their work. When I discovered their work, it blew my mind. It was really nice, solid evidence, illustrated in fascinating ways in the papers they write, that helps blow open some of the social dynamics that are otherwise only represented by static instances of hate messages and so forth. So I'm impressed and a big fan of his work. And then I asked Gianluca for his vita. Now I don't like him anymore, because I'm jealous. He was recently tenured as an associate professor. He has scads of publications in conference proceedings and journals. He has millions and millions of dollars in grants. He's got everything we would ever want an academic to have. But beyond that, as I said, his work is meaningful and exciting and detailed and precise and tells us a whole lot. So thanks for coming today, and thanks, Gianluca, for joining us to tell us about your research.

Thank you. Thank you so much for the kind introduction, and good morning or good afternoon, everybody. It's great to be here. As Joe said, I'm a computer scientist. What I do is develop computational techniques to better identify and understand malicious online activity. I started out looking at malware, online fraud, spam, the kind of automated bad activity we're all familiar with. More recently, I was drawn into this rabbit hole, if you like, of online hate and conspiracy theories and misinformation and so on. So today I want to give you an overview of some work I've been doing with my students and my collaborators on better uncovering what's going on out there.

To start, I want to give you an example of something that happened to me a couple of years ago now. One day I was sitting in my office and I started receiving weird emails that looked completely random. The first one was an email where somebody was asking me for advice on how to cope with a recent wave of hate and bigotry and antisemitism and all of that. This was early 2017, just to put it into context. Okay, kind of random, but I had published some papers on the topic, so it might make sense. Then I started receiving other emails. This one was asking me for help on a class project for a class called Community Studies Project 12, which I had no idea what it was. Okay, weird.
And then I started receiving hate emails, wishing me all sorts of bad things and so on. And this was all in a short timeframe. So what was going on here? Well, later on I discovered that someone had posted this screenshot on 4chan, on the Politically Incorrect board. It was an anonymized, blurred-out screenshot of a class assignment where a professor was asking students to go on that community and start posting about controversial topics: gender issues, LGBT issues, women's rights, all these kinds of things. And if you are at all familiar with that community, you know it's very much an alt-right community, so they were not pleased with this. They also posted this screenshot, and they blurred out the email address down there, but they didn't blur it out completely, so you can see "g@something.edu". So they speculated about it in the thread and got convinced that it was me, for whatever reason. They found an old email address of mine that sort of fit in there. They mistyped it, but that doesn't matter. And so they decided to go and attack. They put out some of my public information, they called for attacks, and then they started posting templates of emails to send. The first email I showed you had been posted as an example on 4chan, and I later received that same exact email. Eventually they figured out it wasn't me and they stopped, okay? But this gives you an idea of what these coordinated harassment attacks look like, what we call a raid. And as you can imagine, if I, a white male professor, receive this kind of attack, you can imagine how much worse it can be for other demographics. And after that, this kind of thing blew up. Now we hear about it every day: celebrities, gamers, all sorts of people being attacked online.

So we set out to study this phenomenon in our work, and we found that it's quite common. In our limited view of the Politically Incorrect board, looking at attacks targeted against YouTube, specifically YouTube videos, we find that it happens about two to five times a day, which is quite a lot, but that's only one community, and it's fairly small compared to other platforms and so forth.

So what are we going to see today? I want to give you an overview of four projects. The first one is these coordinated attacks against YouTube videos: somebody posts a link to a YouTube video that, for whatever reason, they don't like, it might be against what they believe, they might want to harass the person, and so on, and basically they invite people to storm and disrupt the video in the comments. The second one is about Zoom bombing. That's a similar threat that emerged during the COVID-19 pandemic: people who go and disrupt Zoom meetings like this one, although hopefully since then some mitigations have been developed to make that more difficult. Then I'll talk about some other work I did on automatically identifying online risks in private conversations, so cyberbullying and sexual solicitation and things like that. And finally, I will conclude with the work we did looking at what happens after communities get deplatformed, because the most natural way of dealing with online hate and conspiracy theories and so on is just suspending those accounts, suspending those communities, and so on.
But these are real people, so they will move somewhere else, and what happens then? That's something we need to keep in mind.

So, to begin understanding what we are talking about, let me tell you a little bit about 4chan, which is the first community we studied. 4chan is an image board: people post an image that starts a thread, and then they can discuss it. It's organized into a bunch of topics, with various levels of weirdness and toxicity. What we focus on is the Politically Incorrect board, which is one of the most popular communities that creates havoc on the internet. They did things like turning Microsoft's chatbot into a racist a few years ago. They organize all sorts of raids. They claim they were extremely influential in getting President Trump elected through these meme wars and so on and so forth.

What's interesting about 4chan compared to other communities out there is that at any point in time, only a certain number of threads can be active. So every time a new thread is created, another thread basically has to die. It's archived, and after it's archived it stays in the archive for a week and then it's deleted. So content is ephemeral. Users are also anonymous: there are no accounts, so anybody can post, and there is no way of telling who's posting what. As you can imagine, this combination of anonymity and ephemerality encourages all sorts of bad behavior, especially when there is no moderation from the administrators. Just to give you a sense, we find that the median lifetime of a thread on 4chan is 47 minutes, so threads are fairly short-lived. We started collecting data from 4chan, and we're still collecting it. As of a couple of years ago, we had over 134 million posts. So it's fairly large, not as large as Twitter or Facebook or Instagram, but still very large.

And then we started to focus on these coordinated harassment campaigns. The way this works, as in the example I showed you at the beginning, is that somebody will post a thread calling for an attack, and then people will get together and carry out the attack. What they might do is dox the person, finding more information about them, their social media handles or, even worse, private information. And then they will go and execute the attack: they will post hate speech and all sorts of bad content, and ultimately cause harm. In this process, many dynamics emerge. They will post evidence of what they did. They will say, did you see that? Did you see how they reacted? They will congratulate each other and so on and so forth.

So how do we automatically trace this? We decided to focus on YouTube videos because we found that YouTube was the most popular domain linked on the platform. That's true for pretty much any social media; YouTube is probably the most popular content out there. And we found examples of attacks called against YouTube videos: they post a link to the video, and then people are supposed to go and attack the poster in the comments. And if you remember, threads on 4chan are ephemeral, so they have a lifetime. What we were interested in understanding first was: can we see an effect on the YouTube comments after a link is posted on 4chan? If a link is posted on 4chan, we expect people to click on it, go over, and comment on the YouTube video.
And since the thread is ephemeral, that makes our life somewhat easier, because we can look at the window between zero and one, the normalized lifetime of the thread: how many comments do we see? Do we see a spike in comments? And we do. But that's not telling us much, because it's what we would expect to see on any social media platform, right? Once somebody posts a link somewhere, the reason they're posting it is for people to go and visit it and engage with it. So we needed something slightly more sophisticated, and we resorted to signal processing, essentially. The idea is: how can we simply model this behavior of having a synchronization thread where people coordinate, and a YouTube video where they are actually posting the comments? Cross-correlation is a metric used to measure how synchronized two signals are. So what we did is model the comments on the 4chan thread and the comments on YouTube as signals, as trains of impulses, and we used this mathematical technique to find out how synchronized they are. Ideally, somebody posts a hateful comment on YouTube and then immediately goes back and reports on the thread: this is what I did, and this is what happened. In reality it's a little messier than that, but you get the idea. And this is interesting because it gives us a number, the lag between the two signals, that we can then use to automatically identify videos we believe were raided.

What you see in this plot is basically the synchronization lag, where zero indicates perfect synchronization. Then we look at hate comments: we ran the comments through the Hatebase API, which is an API for identifying hate speech. The dots in red contain hate, the ones in blue do not. And what you can see is that, basically, the smaller the lag, the more synchronized the 4chan thread and the YouTube comments are, the more hate we see. If there is really no synchronization, there is really not much hate. So that allows us to extract a set of videos that have been targeted just by looking at this number. We don't even need to look at the comments; we don't need to train classifiers, or large language models these days, to identify what's hateful and what's not. So that's fairly accurate and quick to do.
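To make that idea a bit more concrete, here is a minimal Python sketch of the kind of computation involved. It is an illustration under simple assumptions (fixed-width time bins, a peak-normalized cross-correlation), not the exact estimator from the paper, and the thresholds in the usage comment are made up.

```python
import numpy as np
from scipy import signal

def comment_signal(timestamps, t_start, t_end, bin_seconds=60):
    """Bin comment timestamps (unix seconds) into an impulse-train-like signal."""
    edges = np.arange(t_start, t_end + bin_seconds, bin_seconds)
    counts, _ = np.histogram(timestamps, bins=edges)
    return counts.astype(float)

def synchronization_lag(thread_ts, youtube_ts, t_start, t_end, bin_seconds=60):
    """Return (lag_in_bins, peak_correlation) between a 4chan thread's comments
    and a YouTube video's comments over the thread's lifetime.

    A small |lag| together with a high peak suggests the two comment streams
    move together, which is the signature we would expect during a raid.
    """
    a = comment_signal(thread_ts, t_start, t_end, bin_seconds)
    b = comment_signal(youtube_ts, t_start, t_end, bin_seconds)
    a = a - a.mean()
    b = b - b.mean()
    corr = signal.correlate(a, b, mode="full")
    lags = signal.correlation_lags(len(a), len(b), mode="full")
    norm = (np.linalg.norm(a) * np.linalg.norm(b)) or 1.0
    best = int(np.argmax(corr))
    return lags[best], corr[best] / norm

# Hypothetical usage: flag the video if the streams are tightly synchronized.
# lag, peak = synchronization_lag(thread_times, yt_times, thread_start, thread_end)
# is_suspect = abs(lag) <= 2 and peak > 0.5   # illustrative thresholds only
```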
And then, by looking at this set of videos, we made some observations. We find that, as I said, people like to report back. They will talk about what they did, about what happened in the attack, potentially about how it affected the victim. Some of these videos were actually live streams, so this could be done in real time instead of the video just sitting there to be watched later. We see that they overwhelmingly target vulnerable demographics: attacks called against women, against members of the LGBTQ community, against minorities, and so on. And in some cases, even if the attack does not start as targeted, so it's not calling for an attack against a specific demographic, the commenters will go after other commenters who may be part of a certain minority. So the attack becomes targeted as it goes, in an opportunistic way, if you like.

We also see a lot of concern trolling, which is sort of interesting. These are attackers who pretend to be genuinely concerned citizens, a bit like what I showed you at the beginning, where the person pretended to be concerned about racism and so on. They try to get the victim to either contradict themselves or say something controversial, or just to waste their time. And why is that interesting? Because it's very difficult for moderators to identify this kind of behavior. It looks legitimate; it's just not. And what do we do about it? It's not necessarily against the terms of service, it's not hate speech. It probably looks exactly like real people being concerned about an issue, at least on the surface. And then we also find that this is a trolling community, so people troll each other all the time. Oftentimes they don't act on these threads. They say, we are not your personal army; NYPA is something you often see. As in: we don't want to participate in this attack, go away. So it's kind of a multifaceted landscape.

Then we wanted to understand: what can we do about it? We have this set of videos that were attacked, and clearly YouTube has a hard time moderating them, because we all know that content moderators are a scarce resource; they're overwhelmed. There is so much content being generated all the time that where do we even start? So we developed a system that, given a video, extracts information about it: the title of the video, how long it is, what the tags are. We look for specific keywords: cops, woman, whatever. We look at images, so we look at the thumbnail of the video; we have a deep learning model that can tell us what is in the thumbnail. And then we extract the audio and look for keywords in the audio as well. The idea is: can we take all these signals and combine them to identify videos that are likely to be attacked? If that works, it could be an additional signal for YouTube or whichever platform you want to protect, telling content moderators: this is a video you should pay particular attention to, because it's likely to be attacked in the future. So this could be a prioritization system used by content moderators. And it works reasonably well.
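As a rough sketch of how such a prioritization system could be wired together, assuming hypothetical feature columns (thumbnail labels from some image model, a transcript from speech-to-text) and toy data rather than the actual model from this work, something like the following fuses the different signals into one risk score that moderators could sort by.

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy stand-ins for the extracted signals; in a real system the thumbnail
# labels would come from an image model and the transcript from speech-to-text.
videos = pd.DataFrame([
    {"title": "My thoughts on the protest", "tags": "politics news",
     "thumbnail_labels": "person crowd sign",
     "transcript": "today i want to talk about the protest",
     "duration_min": 12.5, "raided": 1},
    {"title": "Cat learns to open doors", "tags": "pets funny",
     "thumbnail_labels": "cat door",
     "transcript": "look at this little guy go",
     "duration_min": 3.0, "raided": 0},
])

text_cols = ["title", "tags", "thumbnail_labels", "transcript"]
features = ColumnTransformer(
    [(col, TfidfVectorizer(), col) for col in text_cols],  # one bag of words per modality
    remainder="passthrough",                               # keeps numeric columns (duration_min)
)
model = Pipeline([("features", features), ("clf", LogisticRegression(max_iter=1000))])
model.fit(videos[text_cols + ["duration_min"]], videos["raided"])

# Rank videos by predicted risk so moderators can triage the riskiest ones first.
risk = model.predict_proba(videos[text_cols + ["duration_min"]])[:, 1]
```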
Any questions so far? Yep.

On the previous slide, the bottom statistic, the AUC of 0.94: have you applied that forward-looking and then checked whether the videos you predict will be raided actually get raided, or is that purely retrospective?

This was purely retrospective, yes, because it wasn't easy to extract this ground truth of videos that will be attacked. But that's a good point, yes. Also, it might overfit, so it might be biased towards videos that were attacked by that specific community; there might be other communities that target other types of videos.

A second question: for these videos, is what's shown on the screen theoretical or hypothetical, or do you have a specific example? Sorry, I'm shouting so much.

No, it's so that the people online can hear you, even if you don't shout. Yes, we have examples of videos; these would typically be videos about social justice or related topics that get specifically targeted.

I think you are giving only a hypothetical. Just give a very specific example where YouTube removed the particular item.

Oh, where they removed it: they don't remove the particular video, but they would remove the hate speech. So what we're looking at is really hate speech in the comments.

So again, I keep asking the same thing: do you have a particular example?

Yes. I don't have it on the slides, but what the attackers would do is go and attack the creator of the video with all sorts of slurs and hate speech and so on. Yeah, okay.

I have a question, and maybe we'll get to this at a later stage of the talk, about the broader strategy of prioritization: it's prioritization of the content itself and not necessarily of the attack. Is it because of the space in which the hate and harm happens that this prioritization is particularly useful, so that moderators are more on the lookout? It seems like a remediative strategy instead of a preventive one, right?

It would be a remediative strategy, but it would allow moderators to look at the risky comments in a more timely fashion, right? Because the problem with content moderation is that there is so much content out there that it takes a while to act on bad content, especially with online harassment. Once something is posted, if you don't get to it very quickly, the harm is done. Sure, YouTube goes back and removes these comments, but if they do it days later, it's not really very helpful.

So then I wanted to talk about... oh, yes.

I have a question about scale. Have you looked at the breadth of this trolling raid activity from 4chan to YouTube relative to the entire set of inter-platform practices going on across 4chan, and also relative to, I wouldn't say non-normative, but maybe even pro-social kinds of cross-platform behavior? So relative to the whole scale of cross-platform activity, and the pro-social side of it, how does this trolling, raid-like behavior scale?

It is a minority. I want to say that among all the videos we saw posted on 4chan, the ones that are actually attacked would be on the order of 1% or so. Most videos are posted for whatever reason: this is a funny video, or whatever it might be. Most of the inter-platform activity is news: people post news articles, and then there is some inter-platform activity because people go and post comments. We did some work studying the hateful comments on news articles, and we find that often news articles are taken out of context and just posted on 4chan to make a point, especially on immigration issues or things like that.

You mentioned the management of YouTube or Facebook, but even government agencies try to keep track of such issues, right? And even for them it would be a very major task, and they may not be able to fully solve the problem.

Yes, especially if you are doing it in a cross-platform fashion, there is so much data. And using these kinds of techniques that just look at synchronization might be more robust than looking at text, because people can try to change the way they speak to avoid being detected. But it's a huge problem, yeah, definitely.

So, after doing this work, we realized that a similar problem was happening on Zoom. We saw news reports, and even our own online classes would get disrupted by trolls. And we found a similar pattern of Zoom links being posted on 4chan. We also looked at Twitter, so links would be posted on Twitter as well, with an invitation to go and disrupt the meeting.
Okay, and the difference there, compared to what I talked about before, is that these are real-time events. You cannot attack a Zoom meeting after the meeting has ended, so there is a smaller window of opportunity. The other thing is that we can see links for Zoom meetings being posted on 4chan or on Twitter, but we cannot really see what's going on in the Zoom meeting, whereas on YouTube we could go and check the comments. So we applied a slightly different technique here: we had multiple annotators go and check the threads to tell whether each one was a call for an attack or not, so that we could then characterize what was going on. And we found 134 calls for Zoom bombing over a period of a month, if I remember correctly. That was the beginning of the pandemic, so it was a problem that was surging.

And what did we find? Well, we found that the huge majority of attacks were calls to disrupt online classes, online lectures, so high school classes or college classes. Everybody had gone online, so it was very common, and nobody knew how to secure these things. The second interesting thing is that 70% of these attacks were called by insiders. So it would be legitimate students in the class asking strangers to disrupt their own classes because they were bored. And we also found that most of them target meetings that are happening in real time. There was wording in the thread saying: my class is happening right now, come and disrupt it. So these were not really planned ahead of time.

All of this is interesting because when defenses for Zoom meetings were developed, they assumed a completely different threat model. They assumed that somebody from outside would come in and disrupt the meeting. So they tell you: set up passwords. But if your attacker is an insider, they can just give the password to someone else. They tell you: set up a waiting room, so you can tell who's real and who's not and keep attackers out. But we actually found many instances of students saying, oh, here is the list of names in the class, so you should adopt the name of some other student so the teacher won't be able to tell who's an attacker and who's not. We see that oftentimes the attackers take the name of the host, so the participants are confused if they don't know what's going on, and so on and so forth. After a while, services started developing things like one-time unique links for each participant, but this is not very common and it's also typically a paid option. I don't know if it's still the case, but for Zoom you could only do that if you had an enterprise account. So that wouldn't protect everyone; later on we talked to many people from volunteer organizations who ran volunteer Zoom meetings or seminars, and it wouldn't be an option for them to adopt this kind of defense.

Another area I looked at, which is similar in spirit: we were looking at private conversations on Instagram. Everything I talked about so far, maybe not the Zoom bombing, but certainly the YouTube stuff, happens in public. So we wanted to understand what happens in the private space. What kinds of risks and malicious activity are users victims of? This is part of a large NSF project with several other institutions. What we did is set up a data donation effort for Instagram.
Our participants, in particular teenagers and young adults, could donate their data to us, and they could also label the data themselves: what did they think was safe, what did they think was unsafe, what made them feel uncomfortable, and so on. That was important, because there are a lot of results showing that if you ask external annotators, who are not the victims themselves and are not in the loop, what they end up labeling doesn't really match the experience of the victims. We ended up with over 28,000 chats with millions of messages, and some of these chats were labeled as safe or unsafe. So we wanted to understand what unsafe conversations look like.

The first thing we found is that just by looking at metadata, timing information and so on, we can tell with fairly high accuracy whether a conversation is going to be risky or not. And why? Because the participants tend to disengage. If they start being bullied, or they start receiving solicitations or something they don't like, they stop responding. If they are being bullied by multiple people in a group chat, there will be this sort of one-way stream of messages. And why is that interesting? Because it potentially allows us to develop detection systems that don't look into the content of the conversation at all. They are more privacy friendly, and they could be applied where there is end-to-end encryption, where we cannot see the content but still want to do something to protect participants.

So we developed, again, a set of classifiers. We looked at metadata, so everything I just said, without looking into content. We also looked at text and images and so on. And what we found is that, as I said, the metadata classifier is fairly accurate by itself, so potentially it could be used at least as a strong indicator that something wrong is going on. Obviously, using all the classifiers, looking into text and so on, is more accurate. And we also found that if we want to identify the specific type of risk — we have several types in our work: cyberbullying, sexual solicitation, hate speech, and we also look at spam and things like that — then we need to look at content.

Yes? What sort of metadata were you looking at?

It would be the timing of the messages, the length of the conversation; sometimes the participants would specify their relationship with the other party, whether it was a stranger, a friend, an acquaintance, a partner or whatever, some information like that. So yeah, if you want to identify the type of risk, you need to look at content, but this gives us a direction to look into. Maybe these metadata features are promising for a scenario where we want to guarantee more privacy to the parties, which we all agree is good, and most messaging platforms are going in that direction, with end-to-end encryption and so on.
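Here is a small illustrative sketch of that metadata-only idea, with made-up feature names and toy labels rather than the study's actual features; the point is only that disengagement-style signals can feed a standard classifier without any message text.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical metadata features for each conversation; no message text is used.
# Columns: n_messages, duration_hours, median_reply_seconds,
#          share_sent_by_most_active_party, n_participants, is_stranger
X = np.array([
    [120, 48.0,  35.0, 0.52, 2, 0],   # balanced back-and-forth with a friend
    [ 80,  2.0,  10.0, 0.95, 5, 1],   # one-way pile-on in a group chat
    [ 15,  1.0, 600.0, 0.90, 2, 1],   # stranger messaging, target barely replies
    [200, 72.0,  60.0, 0.55, 3, 0],
])
y = np.array([0, 1, 1, 0])  # 1 = labeled unsafe by the participant, 0 = safe

# A simple tree ensemble on metadata alone; the idea is that disengagement
# patterns (skewed share of messages, slow or missing replies) carry signal
# even when the content itself is unavailable, e.g. under end-to-end encryption.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=2)
print("toy cross-validated accuracy:", scores.mean())
```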
Finally, I want to conclude with another project, in which we looked at the challenges around deplatforming specifically. Coming from computer security, we used to work on anti-spam, anti-fraud, anti-malware, and so the solution was always: block the content, delete the content, and the problem will go away. In these cases it's different, because we're not dealing with automated programs, we're dealing with real people who can reorganize, go somewhere else, potentially out of sight, where they can get radicalized further, where those communities are not moderated, and so on and so forth.

So we looked at two communities that were deplatformed from Reddit at different points in time. One was The_Donald, which became infamous for sharing conspiracy theories and all sorts of disinformation. At some point in 2019 it was quarantined, so people could still access it but they would receive a warning, and that's what set the subreddit's administrators off: they said, they're after us, they're going to ban us, let's create our own community, thedonald.win, on our own website, and let's migrate there. And later on the community was actually banned from Reddit, so people moved to this new community. The second one is the incels, the involuntary celibates, who have been linked to misogyny and acts of violence and mass shootings and whatnot. They were also banned, and they moved to their own forum. So we wanted to understand what happens to these new communities that are now somewhere else, not on a mainstream website, where there's probably no moderation or very lax moderation. Yes?

If they are not on a mainstream website, then their effect is not that significant, right? Because people like me are not going to find them in general.

Yes, but they could coordinate and come back to YouTube, for example, to mount attacks, or, if there is a link between online hate and offline violence, they could organize acts of violence offline.

So they interact among themselves?

Yes, and then, if they are building a conspiracy theory, they can go and push it to other communities.

Because I was comparing the situation with money swindlers, you know, a company cheats people out of money; then, when it's shut down, they go somewhere else under another name.

Yes, exactly. But here the web is all interconnected, so they can still come back in other forms.

So what we did was a regression discontinuity analysis. This is an example from The_Donald; we see the same thing for the incels. The line in the middle is when the migration happened. What we find, first of all, is that not everybody migrates, so the number of users in the new community decreases. And why is that? Well, likely because Reddit is a general-purpose community: there are so many subreddits, and people have various interests. They might be tangentially interested in whatever The_Donald is talking about, but not enough to create a new account on another community and go check it. So only the people who are really committed go. But among those people, we find that in general their activity increases, so they become more active, yes.
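For readers less familiar with regression discontinuity, here is a toy sketch of the kind of model this implies, on simulated data with made-up variable names rather than the actual The_Donald or incels measurements.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical daily panel of per-user activity around the migration date.
# 'days' is centered on the migration (negative = before, non-negative = after).
rng = np.random.default_rng(0)
days = np.arange(-90, 90)
df = pd.DataFrame({
    "days": days,
    "posts_per_user": 3 + 0.01 * days + 1.5 * (days >= 0) + rng.normal(0, 0.5, len(days)),
})
df["after"] = (df["days"] >= 0).astype(int)

# Sharp regression discontinuity: separate linear trends on each side of the
# migration plus a jump term; the coefficient on 'after' estimates the change
# in per-user activity at the moment the community moves off Reddit.
model = smf.ols("posts_per_user ~ days + after + days:after", data=df).fit()
print(model.params["after"], model.conf_int().loc["after"])
```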
Hi, my name is Glen. This part where you mention the othering: is that through a lexical content analysis, looking at words like "they" and "them"?

Yes, yes. We get into that: we find that they start talking a lot more about "we" and about "them", and we find that hate speech in general increases. So yeah, not only does their activity increase, but the community potentially becomes more insular and gets even more polarized, because now there are only like-minded people left, there is no content moderation, and so on and so forth, yes.
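A minimal sketch of what such a lexical "othering" measure might look like; the pronoun lists and the ratio are illustrative, not the exact lexicons used in the study.

```python
import re
from collections import Counter

IN_GROUP = {"we", "us", "our", "ours"}
OUT_GROUP = {"they", "them", "their", "theirs"}

def othering_ratio(posts):
    """Share of group pronouns that refer to an out-group ("they", "them", ...).

    A rising ratio over time is one rough proxy for a community closing ranks
    and talking more about 'them' than about 'us'."""
    counts = Counter()
    for text in posts:
        for token in re.findall(r"[a-z']+", text.lower()):
            if token in IN_GROUP:
                counts["in"] += 1
            elif token in OUT_GROUP:
                counts["out"] += 1
    total = counts["in"] + counts["out"]
    return counts["out"] / total if total else 0.0

# Toy usage: compare posts before and after a community migrates.
before = ["we should post our thoughts here", "they said we can't stay"]
after = ["they want to silence us", "their mods banned them all", "we know what they did"]
print(othering_ratio(before), othering_ratio(after))
```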
I have a question: could we assume that the people who choose to migrate are not exactly comparable to those who stay? Then can we really assume their trend is continuous, which is what enables us to use RDD? Because I assume that the people who choose to migrate are more dedicated to this topic, or more polarized. So can they really qualify as a counterfactual for the others?

Well, not necessarily, but what we wanted to understand here is how the community changes, right? Now the community will only be made of the extreme or committed people, and it will be outside of Reddit's moderation capabilities. So what we wanted to understand is what happens next, and this opens up many research directions on whether deplatforming these communities in this shape and form is the right way to go. Probably the toxicity of Reddit decreased overall, so if that was the goal of Reddit's administrators, they succeeded; but overall, as a society, these people didn't go away, they just moved out of sight. That's what we need to keep in mind.

So, to conclude: I showed you a number of examples where computational methods can help study and identify coordinated online aggression, and I pointed to a few directions in which these kinds of techniques could help platforms develop better mitigations. But once we do that, we should also keep in mind the potential unintended consequences: if you ban people or remove content, it doesn't necessarily go away, it just goes somewhere else. And with this, I thank you, and I'm open to questions.

Thanks very much. My name is Adam, in the blue striped shirt. My question is about the Instagram private messages piece of your research. I appreciate you providing all the detail about where you derived your initial dataset, but given that it's explicitly a tool developed to work on private messaging, what do you see as the use case for it? I can spin one out, but I would love to hear what you think, because it's not like Facebook can use it to monitor, well, they could, but we don't want them monitoring private messages, you know. So where does that better prediction come into any sort of utilitarian play?

Yeah, that's a good point. I think in general there is always a tension and a discussion about whether platforms should read your private messages and look for illegal content, or protect their users' privacy, and so on. Privacy is obviously a big concern, and that's why people are advocating a move to end-to-end encryption. So what we wanted to understand is: can there still be some indicators that can be used without looking into what is being said, or even who you are talking to?

Okay. And on the data collection piece for the same study, I know you had users volunteer their private messages and you did that analysis. Was there a strategy in approaching the platforms to see if they would share some of the data that you need as a researcher? What was their response?

It's always tricky, and in general, getting user data from platforms, especially if it's not public data, is something they will not do. That's one of the reasons we had this volunteer-based approach, in addition to the fact that participants could label their own content and tell us what was risky and what wasn't; if we did it ourselves, we would probably end up with a very different definition, and then maybe whatever we developed wouldn't really work.

So I wanted to look at the language you used in your conclusion to ask a few questions. By effective mitigations, are you only interested in content moderation practices? Because there is an argument that there could be another form of mitigation around amplification and incentivization. But that would mean looking at different sets of practices that may be more positive or more pro-social. Have you considered any of that type of...?

Once you identify this content, you can do various things. I have work in the disinformation space that looks at soft moderation: you don't remove the content, but you add a warning label. That is one thing. There is shadow banning, or demoting content, not showing it in recommendations, and so on. That's something that could be plugged into these kinds of systems: instead of deleting or removing the content, you just don't show it to somebody, or you use it as an input to your recommendation algorithm, and suddenly that content is not recommended anymore, et cetera.

Sorry, just to follow up, I'm just wondering whether there are other ways to incentivize platforms to do work that is less negative and enables less bullying.

Yeah, that would be great. It's very difficult to get feedback from them, and also look at what happened with Twitter in the last few months. I don't think any content moderation went away, or at least that's what we can see. So yeah, obviously it would be great to have a safer environment; I'm not sure where the priorities really lie for different stakeholders.

We have a question from Augusta online: did you study the growth of the new communities over time, in particular active recruiting on mainstream platforms like LinkedIn, YouTube video comments, et cetera? And if they grew, did they stay more toxic or show more othering?

We didn't look into that specifically. For this study, those communities grew in general, in the sense that the .win type of community attracted more deplatformed communities, so it became a conglomerate of communities instead of just being The_Donald. That we did observe, yes.

Interesting that you raised... am I okay? Yeah. I was actually one of the council members on Twitter, and then I resigned, and Musk actually instigated harassment against me and the two other women who resigned as well. So this matters to me: I like what you're doing here, and I'm also working on some similar methods. I think about this a lot, because we're in the US and it's very US-centric; with these technologies we're always looking at the content. So I like some of the work that has been done elsewhere, where it's actors, behavior and content, ABC, because the focus here is overwhelmingly on content. And what I was really looking at is what in the literature is known as amplifiers, like influencers, right? And so the categories of actors, how we detect who they are and whether they have impact, like, for example, swarming, and what that means.
And so I'd like to have a discussion afterwards, too, about actors and behaviors, because I feel that needs to be looked at more, and then the impact in terms of the First Amendment and what we can do about it.

Yes.

And because we're at a law school, I've been looking at Laura's last six words and what he was saying about replica behavior and how that pertains to the First Amendment. He was saying that if technology changes so much that it impacts democracy, then we should do something about it. And so for me, this is the missing bit of the conversation, and I feel it has a societal effect, but it's so siloed that we may not actually look at it in a very integrated way.

Yeah, thanks. Thanks, Rebecca. I do have a couple of questions. The first is on the slide you showed us with the various stages of the attack. Over on the far right, you had one individual who was the victim. And your example of when you yourself were victimized, that was personal and individual. When these YouTube attacks, these YouTube raids, take place, they are more or less public postings rather than private postings, correct? So I wonder if you have a sense of proportion about that. Are the raids typically postings in public spaces, or do they frequently involve doxing and finding a way to disrupt the individual within their own private communication space?

I think the public ones are more common, because they're easier to carry out. It's easier to find public information about people, it's easier to get traction, and it's easier to show that you did something, because it's out there. So they both happen, but, and I don't have hard numbers, if I had to guess, the public ones happen more often, yes. It takes more preparation and more orchestration to truly dox someone and go after their private communications.

Thank you. One more question, please. You've mentioned, when you've spoken before, particularly in regard to Zoom bombing, that the comments Zoom bombers post tend to be racist, they tend to be misogynistic, but it was your impression that those comments were not born out of prejudice and enmity, but were rather the most disruptive things somebody could come up with to say. I wonder if you could elaborate on that.

Yes. We have some follow-up work that's not published yet in which we spoke with victims of Zoom bombing, and several people told us, and of course they're not in the minds of these attackers, but their feeling was that the attackers just joined to disrupt the meeting and then went for the lowest-hanging fruit. So they started harassing people of color, or the speaker if she was a woman, or whatever they could find. But that was not the original goal of the attack; they didn't target the meeting because the speaker was a certain person or anything like that.

Turning back to the YouTube raids, then, where you said, for example, a social justice video was likely to be targeted: do you think it's the same activity going on? In other words, do you think the culture war is really being fought on YouTube in these raids, that people who are very ideologically opposed to what's in these videos are doing these attacks, or are these videos just really easy sitting ducks for major disruption? Do you think the raids have a different character, in other words, than the Zoom problem?

I think so. I think that, at least the way we saw it, for the videos they typically would talk about what the video is about, so there would be a more targeted purpose.
For the Zoom bombing, it was just that all these high schoolers were bored, at least early in the pandemic, and they wanted some thrill. Yeah. Thank you.

I think we have time for one last question.

Mine is more a comment on Joe's question, the last question, about the YouTube raids versus the Zoom bombing. I would assume what you just said too: during COVID it was boredom on the part of those taking part in the Zoom bombing, such as maybe high schoolers, but also more than that, right? The YouTube raids, though, I think would be characterized by the presence of amplifiers, influencers who coordinate them by making a mocking video: I'm mocking you, I'm using coded language, and now my followers are going to harass you. So I think there is another link there, that there is someone who is openly mocking a video to make it actually happen, whereas Zoom bombing is more like everybody sees it and then they do it together; there's no one head honcho.

Yeah, that's a good point. On 4chan specifically, since everybody is anonymous, there is sort of this hive mind behind these attacks, but the same raids can happen on a number of platforms, yes. So there could be the influencer directing their followers to harass someone, and yeah, that happens too.

All right, well, thank you, everybody, and one last round of applause for Gianluca. Thank you. Thank you so much for joining online, and there's more food outside, so please help yourselves to more lunch. But thank you, everybody. Have a great day.