 at Law School. Take it away. Thank you, Daniel. As you said, I'm Vivek Krishnamurthy, and you may remember me from such Berkman Klein lunches as yesterday's lunch. But really I'm here to introduce my great friend, collaborator, and colleague on a number of different ventures, Arturo Carrillo, who is a professor of law at George Washington University Law School, where he directs the International Human Rights Clinic. He came to working on law and technology issues from a genuinely human rights perspective, having spent many years working for the UN in El Salvador, among other places, on more traditional human rights issues before entering the legal academy. He is originally from Colombia, the country, and then spent years at Columbia University before ending up at George Washington. Today's talk is titled "A More Perfect Internet," and while part of it might veer into the technical, infrastructural elements of what we're talking about, really we're talking about the problems of civility and incivility online, and about when incivility can tip into more dangerous and harmful forms of behavior, which Arturo has defined using the term cyber violence. So maybe we can just kick off. Arturo, thank you so much for being here. Well, thank you for having me. For joining us. So the format of the event is that Arturo and I are just going to have a conversation for 25 minutes or half an hour to get some of the issues on the table, and then we'll open it up to questions, comments, thoughts, et cetera. So maybe you could start with your categorization, your idea that there's a spectrum of cyber violence, and explain it to us and how it's helpful. Yes. Well, again, thank you for having me to the Berkman Klein Center, and to Vivek for moderating.
So I want to talk about cyber violence as a spectrum. On the low-grade end of it, we have what we could call digital incivility, and on the more aggravated end of the spectrum you would have things like cyber warfare and cyber security, understood as attacks on infrastructure and that sort of thing. I'm not going to be talking about those most aggravated forms, cyber warfare or cyber security. I'm going to focus on digital incivility and then on cyber violence, which is everything that flows from digital incivility when it becomes more intense, especially in terms of harm to affected persons, but doesn't fall into that other area I'm excluding. Digital civility can generally be defined as being mindful of your own presence online and the presence of others, and of how that affects the engagement. It encompasses all kinds of conduct; the most common kinds of digital incivility are things like name-calling, trolling, or unwanted contact. Studies done by the Pew Research Center and, most recently, Microsoft show that there is a real problem of incivility online. And when it rises past a certain threshold, which I'm increasingly defining in terms of the harm to the people affected, it can become what I would call interpersonal cyber violence. Interpersonal cyber violence would be made up of things like cyber stalking, cyber harassment, and non-consensual pornography. These are forms of conduct that go well beyond just being mean online, and they have huge impacts on the people affected by them. That's where I've been focusing my work as director of the International Human Rights Clinic: we have a cyber violence project where we work with people who have been subjected to these kinds of cyber violence, and we also do research on the topic. So that, in a nutshell, is the spectrum we've been working with.
So you're talking about the online manifestation of a problem that is probably as old as the human species, which is that we are a social animal: we relate to each other in different ways, and sometimes those relations are so suboptimal that we descend into violence. Is there something unique about the digital sphere that makes this more of a problem online? And is it useful to think about digital incivility and interpersonal cyber violence as separate from their offline analogues? What's different about it? Well, I do think there are substantial differences. In some respects it's just an extension of what happens in the real world. Cyber stalking is one such example: it's an online extension of stalking in the real world, and you frequently have the physical and the digital stalking conduct combined. But then you have phenomena like cyber harassment and cyber mobs, which generate at a pace and an intensity that you just wouldn't have in the same way in the real world. So I do think there is a qualitative difference in the type of harms that can be generated in the digital realm. With non-consensual pornography as well, the range and scope of the conduct, the audience it reaches, and the harm that then results are, I think, qualitatively different. And then there's the nature of online communication itself: it happens without context, as we've all read and studied, which means that things that might seem innocuous said in the real world with a smile come across as offensive or hurtful on a screen. And when you compound that with things you're getting from other sources, because of the way things replicate and get bounced around and forwarded, the multiplier effect contributes to whatever negative feelings might be generated by a particular piece of conduct.
So on both these levels you have an entirely different kind of dynamic. And the harm, I keep coming back to that: victims of the most aggravated forms of cyber violence go through horrendous personal experiences, depression, post-traumatic stress disorder, and they end up either reducing their presence online or cutting it off completely. That itself, nowadays, has enormous implications. So I think we're talking about something very different from what happens in the real world, which is bad enough. So a lot of people are obviously talking about all of these issues, but you've actually done some work, or reported on work that others have done, to quantify the extent of this problem. Maybe it would be useful to share that. Well, there have been a number of studies on different aspects of cyber violence. The Pew Research Center has a study from 2014, which you can find online, which reported that 73% of people online have seen or experienced some form of cyber harassment, and 40% claim to have personally been the object of some form of cyber harassment. A number of studies have pretty conclusively established that women are disproportionately the objects of cyber violence and harassment in the way I've been defining it, and among them, the 18-to-24 age group is the most susceptible. So young women in particular tend to be the objects of this kind of conduct. But it's not exclusively limited to women. When we started our cyber violence project, we opened our doors to clients from the community, and the first two clients to walk in were two gay men: one who had been impersonated on Grindr, a gay dating app, and the other reporting revenge porn from a former lover.
So at that point we had to reevaluate our focus, which was initially on violence against women online, and we instead made gender-related cyber violence the focus of our project, and renamed it for that reason. These statistics help show generally that this is a very pervasive form of conduct. As I mentioned, it tends to happen predominantly on social media sites: 66% of the people who reported seeing or experiencing online harassment in the Pew study said they experienced it on social media. And within social media, Facebook is the leading platform, then Twitter, and so forth. One statistical curiosity I find is that the number of people who report seeing or experiencing online harassment is roughly double the number who report having suffered it themselves. For example, in the Pew study, 18% of respondents reported having seen somebody stalked online, singled out and harassed repeatedly, but only 8% reported that they themselves had been stalked. And as I cited a moment ago, about 73% see or experience something online, but only 40% say it has happened to them. I think that suggests a couple of things. One is that people are more likely to talk about what they see happening to others than about what happens to themselves; I think there's a huge under-reporting problem, and there are probably good reasons for that. The other is that even if the reported numbers are accurate, it's still an enormous number of people being impacted. In preparation for this talk I did some math, which I normally avoid at all costs, but it turns out that close to 75% of the US population is online, which is about 240 million people. And if we take the 8% of people who report having been the object of some stalking, you're talking about close to 20 million people.
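As a back-of-envelope check, the arithmetic quoted above works out as follows. This is only an illustration of the speaker's estimate; the population figure is an approximation, and the 8% stalking figure is the Pew number cited in the talk.

```python
# Back-of-envelope check of the figures cited in the talk.
us_population = 320_000_000   # approximate U.S. population at the time (assumption)
online_share = 0.75           # ~75% of the population is online
stalked_share = 0.08          # 8% report having been stalked online (Pew, 2014)

people_online = us_population * online_share      # roughly 240 million
stalked_online = people_online * stalked_share    # roughly 19.2 million

print(f"People online: about {people_online / 1e6:.0f} million")
print(f"Report being stalked online: about {stalked_online / 1e6:.1f} million")
```

That is, roughly 240 million people online, of whom roughly 19 million, close to the 20 million the speaker mentions, report having been stalked.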
And I don't think we get 20 million cases of cyber-stalking reported in the country. I'm not 100% sure of that, but I'm pretty sure it's not that high. So this brings me back to the digital civility piece. Awareness that this is actually a problem is, I think, a first goal for advocates, for those of us who want to advocate for a better Internet. Because especially among the generation that is most affected, there's this sense, and I know this from my own students, that this is just stuff that happens online: people are mean to each other online, but you just block them and move on. But at what point does that contribute to a culture that permits or facilitates this type of conduct going forward? So I think that raises interesting questions about what we do about this. So let's stick with just the digital civility and incivility piece. What can we do specifically about that? We'll set aside the more serious forms. What are the measures? Maybe we can start with design: you mentioned that the acontextual nature of digital communication has long been shown to promote flaming and trolling and all these activities that in conventional, face-to-face social interaction are not non-existent, but rarer. So why don't we start by thinking about design principles? Are there things we can do? And these are, I suppose, the social media sites, which is where most of this is happening. Can we change the architecture of these sites to reduce the prevalence? Well, I'm not sure I would start with the architecture, necessarily. I think the response has to be what others have called a multimodal approach: different modalities of regulation. This is Jacqueline Lipton building on Lawrence Lessig's framework for regulating online conduct. And there are different parts to that.
Architecture is just one of them, and it's the part I'm probably least familiar with myself, so I might leave it to some of the experts in the room to say what could be done on that end. But the conduct of private institutions, the role of public education, and the creation of new social norms in particular are, I think, what we as advocates need to be focusing on, and in particular this idea of promoting digital civility, the digital golden rule, is an important approach. So I'll talk a little bit about that. Microsoft did a global study of online safety perceptions in 14 countries and determined that there was this sense that harassment was happening, that people were being subjected to this offensive conduct, and their suggested reply was: well, let's promote digital civility principles. They have an actual code, and the first rule of that code is the digital golden rule: be aware of your presence online and how it's being felt, be respectful of differences, and stand up, if it's safe to do so, for yourself and for others. So what they're trying to do as part of their online safety initiative is to help create new social norms from that particular sector. And my point is that I think that's a valuable initiative, and it shouldn't be limited to the private sector. In that vein, as part of the cyber violence project we're doing at GW Law, we have a public educational component. What that consists of is a cyber violence curriculum that my students and I have put together for university-age students, because the idea is to roll it out even beyond institutions, and eventually we hope to get to high-school-age students. It has three components. The first is the awareness-creation component.
What is cyber violence? Why is this something you should be concerned about? That's a little bit of what I was talking about before. Second: what can you do to prevent yourself from becoming a victim of cyber violence? And finally, the third module: if you are, unfortunately, subjected to some kind of serious cyber violence, what can you do about it? It's interesting, because the Microsoft study that was done globally found that a substantial majority of the people surveyed said they had no idea where to turn if they were victimized in some way online. People just don't know what to do. So I feel that is an opportunity for advocates: helping to create awareness, then promoting prevention, and then how you deal with it when it happens. I feel the social norms and public education piece is where there's a lot of opportunity. Yeah, would you actually share with us some of what's in that curriculum? I think that's really interesting, just in terms of sticking with prevention before we talk about responses as it happens. What are things people can do? Because I don't think there is a lot of awareness about what the recommendations would be. Yeah, well, as it turns out, most if not all of the social media platforms, which is where this conduct tends to occur, have measures that users can employ to protect themselves. First and foremost: your privacy settings. I didn't realize this until we started having clients come in the door who had been exposed because they hadn't set their privacy configuration very well, and so we started to advise them on what they could do. On Facebook, for example, you can really fine-tune the privacy settings, down to the level of individual posts and individual users and contacts, friends. And that's not a coincidence. That's on purpose.
The company has a very strong internal online safety initiative that is meant to address just this problem. So knowing that you have these opportunities to fine-tune your own privacy settings, and realizing that there is a need to, that's where the education part comes in. So that's one piece. The other piece is password protection. It's unbelievable, but every time you go to a digital hygiene or digital security talk, regardless of the setting (I was at one in Eastern Europe on digital security for human rights organizations), the central point is: protect your passwords. Don't use the same password over and over again. Don't be obvious about what your passwords are. Change them regularly. The statistics show that some 70 or 80% of breaches of privacy or security happen because of access to passwords that people have not taken care to protect. These seem like straightforward responses, and in large part they are. But having people realize that there's a problem that requires these responses is the part I think we sometimes miss. And then, once something happens, you resort to things like blocking, or the reporting procedures that platforms have, which we talk a lot about. So that is an important part of prevention. So we've covered the universe of self-help measures, but now let's move along your spectrum to where something is actually happening. And that, in your definition, does implicate the law: your definition of interpersonal cyber violence is conduct that is actionable under the law, if I'm not mistaken, right? Yes. So why don't we talk about that. You've done some really interesting work on discovering civil remedies that people can access short of going to the criminal system, which we'll discuss as well.
So what are some of the things people can do using the tort system, the civil liability system? And that's a country-by-country thing, so we'll stick with the U.S. for now. Right. When you move into the legal realm, I just want to highlight that for most clients, just getting material taken down or getting the harassment to stop is the priority. So the non-legal, company-interface piece we just talked about is very important. But once you get into a situation where you need to employ legal measures, there is a range of options, civil and criminal. I talked about cyberstalking, and cyberstalking statutes across the country have largely been either updated or supplemented to include online conduct as a medium for stalking. Revenge porn is another area: 35 states currently have revenge porn statutes, laws that outlaw the non-consensual posting of intimate or sexual images. But in our practice, the cyber violence project that I run, I run in conjunction with another clinic, the Family Justice Litigation Clinic, because that's the clinic that does domestic violence cases at our law school. They were seeing a number of cases coming in that had a cyber-abuse component, and so when I approached my colleague Lori Cohen about working on this directly, she said: absolutely, we have this coming up more and more, so how can we work together to help the clients here? That clinic focuses largely on securing civil protection orders for its clients. They were doing this before cyber-abuse became a problem.
But what we have discovered in the past two years of doing this is that, with some advocacy and some guidance for judges, you can get a protection order that includes language about not communicating through online media, language that basically covers all the permutations you would want to cover to prevent further harassment online. This is something that has not really been explored in the literature, or in much practice as far as I know, though I think some other folks are starting to work in this way. The short of it is that we believe most states have some version of the civil protection order that we use in D.C. It's not a criminal standard: you don't have to prove that stalking is happening. You just have to make a prima facie case that it's happening to get the order, and then the order will include all these measures directed at stopping the conduct. We're talking about something that is more popularly known as a restraining order, right? Is that what we would call it for laypeople? I mean, it's the same idea. If you're a victim of stalking in the real world, you tell the person: do not come within X feet of me, do not contact me by phone, or whatever; this is just the application of that to the digital realm. Yeah, my understanding is that a civil protection order is what we call it in D.C., but it works like a restraining order. And they're very open as to content: you can submit your list of measures to the judge, and as long as the judge approves, or the other side consents, which sometimes happens, you can get very specific: don't access this person's Facebook, don't post, stay away from them electronically. But then there are First Amendment concerns that come up, which is the counterweight that we're now starting to bump up against.
If somebody on the opposite side of a CPO action, say the person who was doing the harassing, wanted to contest one of these orders on First Amendment grounds, they certainly could. And so we're trying to anticipate that and think about how you get language into these orders that has the effect of curtailing cyber harassment or stalking but would be, if not First Amendment-proof, at least First Amendment-resistant. So the First Amendment is one issue. We've talked about platforms that we're going to assume are generally socially responsible and want to do the right thing. But of course a lot of this behavior happens on the darker fringes of the internet, on websites and among other actors who are not responsible and who are going to claim various kinds of immunities, for example under Section 230 of the Communications Decency Act. Right. You can't get at the websites that host the revenge porn, by and large, unless they do something to the content themselves. Or copyright is sometimes trotted out as an idea, but the copyright in the image usually vests in the person who took it. Well, but that has been used in a couple of cases. If the person took the image of him- or herself, and it's on a U.S. server, then you can send a takedown notice, a DMCA takedown notice, and that can work. In fact, we suggest doing that and would be willing to do it in a heartbeat if the opportunity arose. The problem is that this material gets distributed so easily and so far that once it's not on a U.S. server, you can't really go there. So let's complete our exploration of the spectrum and talk about the criminal side of things, which is probably, in some cases, unfortunately, where this goes. What is the state of criminal liability, and has the law caught up to deal with these new modalities of stalking and harassment?
Well, I think the law, normatively speaking, is in the process of catching up, and it has done better in some areas than others, as I noted. But the real challenges are in law enforcement. For example, as I said, most states now have versions of their physical stalking laws that include harassment or stalking online or through electronic means. So that piece is pretty well covered, and people, even prosecutors, have come around to understanding that this is part of what has always been understood as stalking. Cyber harassment, conduct that is not always directed at the victim but is about the victim and can cause serious distress, is less consistently attended to in the laws. And there's a related problem of terminology: some states have cyber harassment statutes that are actually cyber stalking statutes, and some have cyber stalking statutes that include harassment, which doesn't require that the conduct be directed at the victim as long as there's some kind of harm. Then there are all these different degrees of how much harm is required for the conduct to be actionable; one state requires physical harm to result from the electronic harassment. So when you start to move down from something as clear as cyber stalking (repeated conduct, directed at the victim, causing fear for safety), it gets harder. Then you get to the revenge porn area. As I was saying a moment ago, about a year and a half ago there were 25 states with revenge porn statutes; now there are 35. On the other hand, there's no federal revenge porn statute. There's a bill pending in Congress; I confess I need to learn more about it, but it doesn't seem to be moving. So those laws are on the books. The question then is how you persuade law enforcement to act on the stalking laws, the revenge porn laws.
There are some federal laws that also, in theory, let you bring cases for interstate communications that harass or threaten: the Interstate Communications Act, the Telephone Harassment Act. There are a handful of cases that use them, but they're more theoretical as far as I can tell. There is nothing that specifically says: here's a phenomenon, here's a problem; first we have to define what we're talking about, and then address each piece of it in the appropriate way. So that's the normative piece. But then it's all about law enforcement. We had a CPO action, a restraining order action, in my partner clinic, where the client was complaining about intimate pictures of hers that the respondent had posted. And the judge said: well, you should just be flattered that someone wants to take pictures of you, because no one would want to take pictures of me naked. This is the judge. And the students, well, you can imagine what a teaching moment that was, on every level. And that just happened, as in last month. Then I have another client who had a very unhappy breakup with her former husband. In the melee, he basically stole her phone and her computer, hacked into them, and stalked her through her own accounts. She reported that to the police, and they wouldn't take either the stalking complaint or the theft complaint. Eventually the hardware was returned as part of the ongoing civil proceedings. But it's not just that they didn't believe her; they didn't think there was a problem. The ex-husband held onto the computer, so what's the big deal? Meanwhile, he had compiled a dossier on her with full GPS tracking of where she had been and who she had been with, and pictures from her accounts. And she had all of that evidence.
So part of the real struggle is normative, getting the law to where it needs to be. But it's also getting law enforcement authorities to understand that there's a problem and that they have a role in dealing with it. And there are many, many other examples you can read about online that are similar to that. The thing about law enforcement is that, ultimately, the sanction of the criminal law is going to be limited by its resources. The scale of the problem you're describing suggests that we have to prioritize measures outside the law, because if there are 8 million or 20 million people being harassed every year, this would overwhelm the system; I don't know how many criminal prosecutions there are in the United States, but for all offenses it's probably not in the tens of millions. So maybe on that happy note, we should open it up, not just to questions, but also to ideas and suggestions on ways of addressing some of the problems that have come to light here. Hi, my name is Ron Newman. I'm going to ask this from a somewhat unusual viewpoint. One thing I didn't hear you mention anywhere during your talk was libel and defamation. I was actually on the receiving end of a civil suit on that subject, which we basically managed, with some help from somebody at Berkman, to kill off with CDA 230. But I want to ask: what is the role of libel and defamation law in this problem? It definitely has a role. It's part of the civil remedies that, in extreme cases, you can bring to bear for your client. If, in fact, the statements being broadcast online are untrue and harmful, then yes, absolutely, that would be something we would consider as a remedy. No doubt. So how about CDA 230? Obviously there's a barrier there.
Well, these actions would be against actual individuals identified as the posters of the content, against persons, not against the host or any intermediaries. Can't touch them, to paraphrase the song from the '80s. But there have been some cases brought successfully, and some settled, I think, under libel and defamation in harassment situations where you're able to identify the source, the individual who posted the defamatory content. We haven't actually used that remedy yet, but I'm looking forward to doing so at some point. It is very much a remedy. Hi, I'm Sarah Grant, I'm a 1L. One thing that seems somewhat obvious to me is that in cyberspace you can be stalked from a foreign country, whereas physically you cannot be stalked from a foreign country very easily. So, if we're looking at the legal side of things, either civil or criminal remedies, is any of that available if you are being cyber-stalked or cyber-harassed from a foreign country? Because my understanding, at least, is that we don't have international conventions that would apply to these things. Yeah, that's a great observation, and you're exactly right. This is still playing out very much at the local and national level, in the way that states and authorities try to address it, but it has an international, transnational dimension that you've put your finger on, which is hugely important. The first part of the answer would be that the companies themselves operate globally. So depending on how the stalking takes place, let's say it takes place on Facebook, then even if it's coming from a different jurisdiction, Facebook will have that covered, and the takedown and reporting procedures, which I understand are by and large applied effectively, are available.
On the other hand, I had a case where our client was a woman in a different country who was the object of sextortion: her former partner was threatening to publish sexual images if she didn't pay him a certain amount of money. She was somewhere in Europe and he was somewhere in the U.S. If it had been the other way around, there would have been almost nothing we could really do about it without talking to lawyers or authorities in that other country. In this case, we were able to do some research into what remedies would be available, since the person threatening the criminal act was on U.S. territory. But it points up some very challenging aspects of this, where the surface is only being scratched right now. So it's about even realizing that it's an issue at all, locally. But you're absolutely right: at any given time, it could implicate people from anywhere in the world. My question builds a little bit off of these issues. When Vivek asked you what elements of this are different online, you talked about the very rapid potential buildup of mass harassment, with many different actors on one or many platforms. I'm wondering, have you thought about the legal remedies for those sorts of situations where you can't really pinpoint one actor who's culpable? There may be a ringleader, but there are maybe dozens or even a hundred people involved. And what do you think is the role of the platform there? I know they've been doing things, but are there other things platforms could be doing, in conjunction with or separate from legal remedies? Yes. Well, the cyber mob phenomenon is one of the big challenges. And the other, of course, is anonymity.
And I think by and large, most of us would say anonymity online is not a bad thing. It might have to have its limits in extreme cases, and some of these are in fact extreme cases that we're talking about. But generally speaking, I think it's an important concept. But it is what it is, right? It's an obstacle if you want to pursue libel or defamation actions or some other kind of legal action. You've got to identify the persons behind this harassment online, and that requires some sophisticated forensics most of the time, if they're hiding behind anonymity or other measures. And that's the kind of capacity that law enforcement should have, and some do have now. That's another problem. So I think trying to figure out how to get more of that forensic capacity into the hands of advocates, who can break through the anonymity and be able to go after some of the perpetrators legally, is a challenge. I mean, we had this client I was telling you about who had her hardware taken by her ex-husband. When she got it back, she said, we need your help to break this open and get inside and see all the evidence of him stalking me. And, you know, my answer was, I can't do that. The police that you reported this to, who aren't paying attention to you, the crime unit of the D.C. Police Department, they're the ones that should be doing that, but they weren't. And so I felt really powerless on that technical front, right? I wanted to be able to turn to somebody and say, maybe I need to call the Access people or something. I don't know. But that's a big challenge. You asked what more companies can do. I actually think the social media platforms are doing a pretty bang-up job, and not out of pure altruism.
It's their business model: it depends on people feeling safe online in their space, right? And they have taken enormous strides to adopt firm policies on things like revenge porn, so Google and Bing, for example, will delink that from their searches. Community standards have been crafted to take into account conduct directed at individuals that's meant to shame or humiliate them, right? So that will be taken down if you can show that. And, you know, I was at a meeting with a bunch of advocates talking about this, and they were complaining that companies don't do enough and could do more. And I said, well, actually, from what I've seen (with Vivek, I'm a member of the Global Network Initiative, and I've seen from the inside what goes on), I really feel like they're doing a lot. Could they do more? Probably. But from what I've seen, they're very, very sensitive. I'm talking now about companies like Microsoft, Google, Facebook, right? Twitter I don't have as much information on, although I do know that they take down offensive tweets and close accounts under certain circumstances. So there's probably more that can be done there. But as someone looking at it in this way, I feel like they're doing a lot. It has to be a multimodal approach, though. It can't just be the companies, it can't just be law enforcement, it can't just be us. Definitely can't just be us. You mentioned that if you do get a takedown order on a US-based server, probably for the larger, well-known ones you mentioned earlier, it's fairly straightforward. But what about something that's on a more obscure US-based server, or hosted someplace else? How difficult is that, and how successful are you with it? And when you said call the Access people, I don't know who the Access people are.
I'm sorry, Access Now is an NGO that works on Internet freedom issues, and they have a help desk for digital security issues that I just remembered as I was giving the answer to the last question. But really the point I was making there is that it would be helpful to have access to people who could do this kind of forensics, and I'm not even sure they can. And I haven't actually used the DMCA procedures yet. We've just researched them, and we know that they're a remedy that can be invoked. And I'm not a copyright lawyer myself, so I don't know as a practical matter. I know that it has worked in a case or two. What about any kind of civil action against a mom-and-pop hosting place, you know, where maybe some guy's done it on some US-based server, not on Twitter or Facebook, but something else? Yeah... hmm. The systems people might not be around. Even in universities or small colleges, you might not know how to reach a sysadmin. I see your point. You're talking about how this gets atomized, and anyone can basically host content, right? So it doesn't have to be a big server. Yeah. That's a good question. I had not thought about that. But what I can say is I haven't seen that come up yet as a problem. I mean, again, I think it would come down to just being able to identify the person. And once you could, I don't see any reason why you wouldn't be able to go after them civilly, either for libel or for slander, or, I don't know about copyright, if they were hosting images. I don't know. But it's a very good question. I'll have to think about that some more. I have a question. You know, I think one of the issues that Cura was pointing to is this sort of traditional cybersecurity issue of attribution, which is a hard one in anything happening online. But there's also the issue of intent. And you've explored this in your paper, right?
Because especially in the mob phenomenon, the person who is instigating the mob knows what they're doing. But it's often the case that people in the mob are unwittingly along for the ride. And when joining the online mob is something as easy as a retweet, and you don't really understand what's going on, what do we do about that problem? Is it an education problem? Is it a legal problem? I do think it's partially a problem of a culture of digital civility, in part. I think there's definitely an issue of just malevolence, and it's hard to know where that line is drawn. I remember seeing a report, I can't remember if it was in the New Yorker, about a journalist who confronted an online troll. He had stalked her, said horrible things, made death threats, knew where she was doing her book talk and so on. All the horrible stuff that we're talking about. And then she agreed to meet with him, and they're talking, and she brings up this horrible content that he'd been posting about her and at her. And he's like, oh, I was never going to do any of that. He says, we're basically all cowards, those of us who do this. So there's that disconnect between what comes across, which looks and feels exactly like what it appears to be, and what the intent behind it might be. I'm not trying to excuse any of that conduct. But it was an insight into the psychology of the online stalkers and harassers, like the worst ones, that suggested it's much more complicated than it might seem. Then there was, of course, the Elonis case in the Supreme Court, the Facebook rap lyrics. Did you guys see that? A guy was convicted of a crime based on the fact that he had posted lyrics on his Facebook page that were about his former girlfriend and that explicitly and graphically described how he was going to dismember her, chop her up into little pieces, and do all kinds of horrible things.
And she was, of course, scared witless. And so he was convicted of the criminal offense of, I can't remember exactly what the offense was, maybe making threats. Yeah, threats, yeah. And it goes up to the Supreme Court, and there's a big First Amendment issue there that everyone was paying attention to. But they don't even get to that. They overturn the conviction. They say, as bad as this looks, there's no way you can actually find, based on what happened below, that he had the intent to do any of this. On these lyrics alone, intent was not proven, and because the crime required intent, they overturned it. They didn't even get to the First Amendment issue. So again, you're asking how we know what the intent is, what it means, what you can do about it. Very complex, very nuanced, and difficult to get at, right? I don't know if that answers your question, but it provides some context, yeah. My question is about laws that do exist. In situations where intent is less important, such as revenge porn, 35 states have laws, but the other 15 don't. And for a lot of other forms of cyber violence, if you're resorting to things like the DMCA or libel to get what you want, to me that suggests there's a huge gap in the policies and regulations that exist. So I wonder what kind of education might be necessary for policy makers, or how you galvanize getting laws to reflect where we are in society with technology today. Well, I mean, there are a number of initiatives that are meant to address that gap at the state level in laws against revenge porn. Is that what you're asking about, revenge porn, or something else? Well, one way of looking at it is to say we need a federal law, right?
And so the bill for a federal revenge porn law, leaving aside the enforcement issues that come even with that, is one attempt to make this more uniform. Folks have been working on getting more states to adopt these laws and to make them more consistent, right? So the Cyber Civil Rights Initiative, Mary Anne Franks at Miami and Danielle Citron at Maryland, have done a lot of work in this area, and they're largely the engine behind this increase in state protection, right? And to answer your question more directly: educating policy makers that there's a problem is almost certainly something that needs to happen. I talked about the importance of public education, about creating awareness among people who might be affected by, or might be perpetrators of, cyber violence. But I realize now that we probably should be directing some of this at policy makers as well. We might need to add that to our agenda. Because you're right, a lot of this comes back to people just not seeing that it's a problem, or not understanding how much of a problem it is. That's really, I think, the impediment to getting better legal responses and better enforcement. Does that answer your question? So in terms of raising awareness of the problem, and maybe also educating judges and others, I'm fascinated by these cases where you have people who will, with intent, tweet at a person who they know to have epilepsy, and they'll tweet a flashing GIF or a flashing video to cause physical harm. That Newsweek reporter, yeah. There's been that one case recently and a couple of cases in the past. I'm wondering, as terrible as that is, if that's a helpful kind of case in terms of highlighting that this kind of online... How physical it can become, yeah. That was actually assault, right? Exactly, yeah. Oh yeah, that sounds right, yeah.
I wasn't aware of that case, but yes. Well, so there are extreme cases like that. Other extreme cases are the swatting cases, right? You're all familiar with swatting, where somebody reports some kind of violent activity at the target's home, maybe an alleged terrorist or weapons or explosives, and then the local SWAT team gets called in. They surround the house, the door gets kicked in, and these unsuspecting folks inside get swatted while they're just sitting there, right? And this happens pretty frequently. So there are extreme manifestations of this that I do think help draw attention to the problem somewhat. That's part of what we try to do in our own education curriculum. We use the Gamergate example, Zoe Quinn, because our target audience is 18-to-24-year-olds, and we use her case to get their attention. Her case has everything in it, right? Stalking and harassment and revenge porn. So I agree with you, I think that is part of it, but I think we need more consistent and broader efforts, broader coalitions. Right now, there aren't a lot of folks really looking at this this way. And there's still a dismissiveness among the authorities, not all but some, that makes it hard. So I think that's helpful, yeah. So what do you think are the points of leverage available to the people in this room, who I hope will leave this talk as people who are concerned about these issues and want a more perfect internet? What do you suggest that we do? Is it a matter of private virtue online, or is it really focused public policy advocacy, or what do you think is the right approach? Well, I think it does start with one's own conduct online, and mindfulness. One insight that grew out of the work we did was that you can inadvertently be someone who's being uncivil or causing harm online without even realizing it. And how do you make yourself aware of that and avoid it?
And that's the mindfulness piece, right? And we built that into our curriculum because my students and I realized that most people don't view themselves as harmful actors online, until they are. So I think it starts there. You know, at least as an educator and an advocate, I think the public education piece and the forging of new social norms are really where we need to engage, and there's no substitute for engagement. This is where organizations like Access Now and others that we know have started to get the ball rolling. But at least on the cyber violence front, you still don't have the kind of coalition or broad network or well-articulated movement that needs to happen before you start getting through to policymakers and others: that this is a real problem, that it has a shape, that it has parameters, that it can be addressed, that it has to be addressed. So I would start with that idea of how I contribute to building out new social norms and promoting awareness and engagement. Of course, Rob. First of all, thanks for the great talk. And I was really glad to see you're optimistic at the end of what was otherwise pretty depressing. But I'm curious about the notion of teaching people to be better people online, and mindfulness. If we're to oversimplify this, there are people being jerks online who know they're being jerks, and then there are inadvertent jerks. Do we have any idea what the proportions are? What kind of people might be susceptible to a strong nudge in the right direction? I don't know. I mean, I take the view that most people online are probably not the knowing jerks, right? Or even the inadvertent jerks. This is purely impressionistic; I don't have any numbers on this, other than just looking at the sheer numbers of people online. We say, what, 240 million Americans, right?
75% of the population online. So I think you want to assume that most of the people in your audience are in that category of folks who at least are not intentionally being jerks, right? And maybe most of them are trying to avoid that. For the ones who are jerks, or knowingly so, that's where the educational piece that talks about consequences, or potential consequences, is important. This is an imperfect model, right? I admit that. But say I'm a college student, and our curriculum, which is called Pulling the Plug on Cyber Violence, shows that there's this problem. If I'm someone who knows I've engaged in some of these conducts (I've trolled a bit, anonymously critiqued someone in personal terms, or been a jerk online), and I see this panorama presented, and I see the harm that it does to people, I see examples of that, I see what consequences might come from it, even if they're still only theoretical in some cases, then I'm hoping that some people will get a new perspective on their own conduct, right? I mean, that's the way I would approach it. And then, of course, there's always going to be a group that isn't affected by anything. They're always going to do this no matter what. They're going to find ways to get around it, even if the norms evolve and get to where they should be. Even if we start to get enforcement, there are still going to be people who do swatting and who do horrible things to other folks online, just as they do offline. But I just feel like there's an awareness dimension that has not really been fully activated yet, and it would reach a lot of people and affect conduct. And I come from a hard-edged accountability background, right? I mean, I'm about legal accountability, so for me to get to the point now where I'm actually advocating shaping new social attitudes and norms is something that surprises me, first of all.
But I don't see any other way of approaching this. It's about culture. It's about creating new practices and awareness. This is just restating the earlier question. Or not really restating it, anyway. Sure. I mean, if the intent is to harm, if the intent is retribution or to destroy, it doesn't help to show people how much they harm other people by their actions. That's just what they want. That's right. So there's got to be another way. Well, that's where I think the law should come in. That's where the justice system should operate, right? And that's the other challenge that we were talking about: it's not at that level yet normatively, and certainly not in terms of enforcement. So that's the complementary piece, right? For those who are always going to be malevolent, you need to have some legal remedies, or else it won't work. Will you give me the last word, Jack? Oh, no. Pressure. I hope you give me the last word, actually. Otherwise... Audience, please make it profound. Okay, so my name is Filippo. I'm a second year here. I've been reading a lot lately about how the environment online is very different from the real world. And some of the arguments advanced are that the laws that we have in the real world cannot be used to combat the problems in the cyber world, because it's just a very different dynamic. And I was just wondering what your thoughts are on that difference. Are the problems in cyberspace just an extension of what we experience in the real world, or are they fundamentally different in some way? I'd say yes, no, and neither. There's some truth to both of those perspectives, right? I mean, let's take human rights, for instance, which is the background that I have. There was a debate back a few years ago about whether we needed a whole new system of norms to deal with online conduct, or whether the framework that was already in place for human rights can and should apply and would be effective.
And that came out in favor of the second scenario, right? So it's now firmly established that rights apply online in the same way that they apply offline. And then there are interpretations that say these rights are adapted and interpreted in ways that are consistent with their underlying values but that take into account the technological dimension, right? Because new media will create new issues, and so you just have to adapt. And I think that has worked well by and large, although not fully. The cyber violence area is one that in some ways poses new challenges; in some ways it's just the same thing happening online, like the stalking example. But some of this, I think, because of the reasons we were talking about earlier, the way that it can escalate, the way that the harms are generated, and the subjectivity, has dimensions that make it something different from anything you could do in the physical world, or from what you could address with existing tools. You need some new tools. So it's actually a bit of both. And I think that's how the human rights framework has evolved to address this particular phenomenon. Does that answer your question? Keep it quick, because we're at the top of the hour. I'm taking over Dwayne Mitchell's AppleScript business, and I do specifically AppleScripting and GUI AppleScripting, which is scripting the unscriptable. And I can write malware to screw up your Macintosh six ways to Sunday with GUI scripting and tiny 2K applications that you can put in a Word document that would, if I'd like to... Is there a question at the end of that? No. Because we're really at the top of the hour. No, I know. I'm just saying that if I could do it and drop one in at some cartel banking facility, they wouldn't know what hit them.
I'm just saying that I'm sure there are people here who know Dwayne, and Dwayne was a regular here, and I just wanted people to know that that service is available. Okay, perfect. Okay, thank you very much. Well, thank you to Arturo for... Thank you. A very insightful presentation. Thank you.