So this is our second panel of the day. It will be discussing building and maintaining online communities: anonymity, defamation, privacy, oh my. And I'm going to hold about 20 minutes at the end, probably now a little less than that, for Q&A. I'm David Ardia. I direct the Citizen Media Law Project. I gave a brief introduction this morning about the Online Media Legal Network, and there's a bio of mine in the program, so I'm not going to spend any time telling you about myself. Instead, I'm going to tell you about this great panel that we have lined up today. I will start with Pat Carome, who is on my left. He's a partner at WilmerHale. Pat has handled a wide variety of high-stakes litigation in both trial and appellate courts. He devotes much of his time to representing communications and media companies in complex litigation and counseling matters. In recent years, many of his clients have been leading internet companies. And I should add that Pat represented America Online in one of the most important internet cases ever decided, at least in my opinion, and I imagine that Eric shares my opinion. The Zeran v. AOL case involved the application of Section 230 and is cited quite often now, almost 15 years later. It was a great win for Pat. Pat's areas of substantive expertise include defamation, privacy, copyright, trademark, press freedoms, trade secrets, and general tort and contract law. He's represented a broad range of clients on these and other issues, including the aforementioned AOL, Time Warner, Craigslist, eBay, Google, Yahoo, The Washington Post, The New York Times, The Los Angeles Times, ABC, CNN, and the American Legacy Foundation, which I've never actually heard of. Bill Densmore, on his left, is the director and editor of the Media Giraffe Project at the University of Massachusetts at Amherst and the New England News Forum, and a career journalist.
Bill has been an editor and writer for the Associated Press, for trade association publications in business, law, and insurance, and a freelancer for general circulation dailies, including the Boston Globe. In 1993, after nine years of owning and publishing weeklies in Berkshire County, Mass., Bill formed what became Clickshare Services Corp, which provides user registration, authentication, and transaction handling for internet web content. Bill has also served as an advertising director for a small, group-owned daily and as interim director for the not-for-profit Hancock Shaker Village. He has taught and lectured in journalism at Massachusetts College of Liberal Arts in North Adams, Mass., and was a director of the Action Coalition for Media Education. Eric Goldman, on his left, is associate professor of law and director of the High Tech Law Institute at Santa Clara University School of Law. Before he became a full-time academic in 2002, Eric practiced internet law for eight years in Silicon Valley. His research and teaching focus on internet, IP, and marketing law topics, and he blogs on these topics at the hugely influential Technology & Marketing Law Blog. On his left is Jeff Howe. Jeff is a contributing editor at Wired Magazine, where he covers the media and entertainment industries, among other subjects. In June of 2006, he published "The Rise of Crowdsourcing" in Wired. He has continued to cover the phenomenon on his blog at crowdsourcing.com, and he published a book on the subject with Crown in September of 2008, which I meant to bring and hold up so that we could plug it. Sorry, Jeff. Before coming to Wired, Jeff was a senior editor at Inside.com and a writer at The Village Voice. In his 15 years as a journalist, he has traveled around the world working on stories ranging from the impending water crisis in Central Asia to the implications of gene patenting.
He has written for Time Magazine, U.S. News and World Report, The Washington Post, Mother Jones, and numerous other publications. Barbara Wall, on his left, is Vice President and Senior Associate General Counsel of Gannett Company, where she advises Gannett's many newspapers, broadcast stations, and websites on a variety of issues, including intellectual property, ethics, privacy, and libel. Barbara joined Gannett in 1985. From 1979 to 1985, she practiced law in New York City with the firm of Satterlee & Stephens. She serves as Chair of the Newspaper Association of America's Legal Affairs Committee and is on the faculty for the Practising Law Institute's annual communications law program. She also sits on the boards of the Media Institute in Washington, DC, the District of Columbia's Council for Court Excellence, and the Reynolds Center for Courts and Media at the University of Nevada. Since 2007, she has taught communications law as an adjunct professor at both George Washington University and American University. So thank you, panelists, for being here today. We're going to start with a couple of questions we've already planned out, as a way to place this topic in a little bit of context and to understand some background. We'll then dive into the conversation; it'll be much more free form at that point. And like I said, I'll try to hold 20 minutes for questions. So the first thing I'm going to ask Jeff is this: building and managing online communities. Why should news organizations be interested in building online communities? What's the benefit of doing that? Well, I think that, first, it probably makes for better news itself. But if news organizations are anything like any other company, and I'm pretty sure they are, then it behooves any company to build a better, more robust partnership with its customers.
And I actually think that it's even more the case when it comes to a newspaper, which has even more to learn from its readers than a standard widget company, say, may have to learn from its customers. But generally, what I've been covering for the last four years in crowdsourcing is the extent to which customers are often more knowledgeable about a product than the company itself, or at least have very valuable additions to make to the processes the companies engage in. So Bill, tell us some of the things that news organizations are doing today to build online communities. Yeah, I spent a year at the Reynolds Journalism Institute at the University of Missouri thinking about a concept that we titled, in one conference, "From Gatekeeper to Information Valet." Josh was talking about how the role of news organizations is changing from that of the pinch point that controls what we all see to one of taking an economy that's just overloaded with information and trying to make some sense out of it, that sort of info-valet function. And so as they do that, they've had to learn new ways of relating to the public, and I think they've been doing that. There are six ways where I think we are seeing that on the web today. The first place where it started to show up was newspaper websites allowing comments. And even though that's the thing that's been around the longest, as I think Barbara mentioned when we had a pre-panel phone call the other day, it's probably the biggest legal headache that she faces. And that's echoed elsewhere: I sent a post out yesterday on one of the listservs that we work on, asking folks some questions about this panel, and one of the people from MediaNews Group got back and said exactly the same thing Barbara said: by far the biggest legal issues they face on a day-to-day basis have to do with comments. So comments were the earliest thing, but they're still the most problematic thing.
The other way that news organizations started to reach out was to invite user-generated content, photos and video. And then I think you started to see the light bulbs going off with news organizations like Gannett and Tribune and McClatchy when they bought a controlling stake in Topix.net, probably eight or 10 years ago now. Topix.net is an aggregator that goes out and scrapes websites for local news and then presents it and allows people to comment on it. So here you see these news organizations start to say, okay, I guess we've got to allow the aggregation world into what we're doing. The next thing that they started to do was to permit their own reporters to blog. So again, they're starting to realize that the roles are changing. And that actually raises some interesting legal questions, namely: if a reporter says something on their blog, are the same standards of accuracy and fairness and libel and slander applied there as would be applied if the thing was in print? And then what's starting to happen now, and Josh mentioned this also in his talk at lunch, is that newspapers are starting to realize they can get into the aggregation game themselves. A few weeks ago, I think the Sacramento Bee launched something called Sacramento, I forget what it's called, but they have started their own aggregation site where they are finding all of the local news in Sacramento and presenting it. So they've realized they have to be in that game. And then the last thing, which is the most interesting, I think, and really nobody's doing much of yet, is what they started at Minnesota Public Radio a number of years ago, called Public Insight Journalism, where you actually invite the public in to contribute to your reporting. So it might strike some people in this room as strange that a news organization would open up its site to publish the content of third parties that it has little control over. They don't know what those people are gonna say or do.
And as lawyers, especially those of us who counseled journalism organizations that weren't online, that really doesn't happen. Why is it happening online? Give us some context. I'm gonna direct this to Eric Goldman, who knows it's coming. What's the legal regime around these online communities that's allowing this kind of innovation that Bill described to happen? So in my world, I break up content into two buckets: first party content and third party content. It's really clean and easy to state. All you law students love those bright-line tests. So if you can fit things into first party content, all the traditional rules apply that you're familiar with. We worry about, for example, defamation standards and whether or not there's some adjustment to some type of strict liability or negligence standard for defamation. Nothing new to really say about that when it comes to online publication of first party content. The more interesting aspect comes up when we're dealing with third party content. And in the media industry, we deal with third party content in a whole host of different ways, from things that are very traditional, like using freelancers or picking up stories from wire services, all of which might be characterized as third party content in my world, to allowing users to have their say online: things like providing places where users can publish letters to the editor without having to wait for them to be filtered by an editor, or providing comments on stories, where the comments are simply thrown open and users can attach them directly to a particular piece of first party content. And the rules, I think, mostly cleave between first party content and third party content. If we're dealing with third party content, the overarching law that applies online is 47 USC 230. There's no offline analog to it that I can point to. It's a sweet, generous law for the internet that says websites aren't liable for third party content.
Now, a lot of you are gonna start to say, well, come on, tell me more. There's gotta be more to that story. And there is a little bit more, but not much. For the most part, if you can characterize content being published online as third party content, 47 USC 230 says you're not liable for it. So when we deal with things like online comments submitted by users, if they're characterized as third party content, the websites allowing those users to have their say aren't liable for them. Now let's talk about the exceptions, because there are a few. Section 230 does not apply to intellectual property claims. So if users submit copyrighted material, there are separate ways to deal with that. The fundamental safe harbor is 17 USC 512, which allows websites to offload certain liability for copyright submissions under a notice-and-takedown regime. Basically, so long as they respond to notices, they can avoid significant liability. There are some other ways IP issues come up in user comments, but they're actually quite minimal. There are two other exceptions to 230. It does not apply to the Electronic Communications Privacy Act and state law equivalents, which is nothing that I worry about. And it does not apply to federal criminal law. So if a user uploads child pornography, for example, 230 doesn't apply and we have to go to other legal principles. But when we're dealing with user content, or even potentially freelancer or wire service content that's being published by a media organization online, the basic rule is the media publisher isn't liable for the third party content. And so, as a result, when I was thinking about the risks of legal liability for user comments that we talked about in our pre-call, I couldn't think of a single case that I've seen reach a final judgment where a media publisher has been held liable for user-supplied comments.
Between 230 and the gaps that it leaves, which are pretty small, and the ways in which those gaps might be exploited, it's actually very hard to establish legal liability on the part of the publisher that creates the venue for people to have their say. So with that background, I'm gonna ask Barbara and Pat, who have to deal with questions all the time from their clients about the legal liability they may face for building online communities: what keeps you awake at night, and what kinds of questions do your clients have? Eric has painted a pretty clear picture here of how Section 230 works. So are we done? I mean, should we end the panel now? Are there any issues that we should be talking about? Well, I'll start. Frankly, I'm not particularly worried about liability for, let's say, defamation for the comments that are posted, although many of them are extremely libelous, because as Eric mentioned, the courts have recognized 230 consistently and repeatedly as protecting news sites against liability for posted comments. That said, there are many problems with those comments. As I said, many of them are libelous, many are hateful, they're racist, they're vile, they're disgusting, and people will post pornographic images over and over again. So we started our online communities, and there was some discussion about why a news organization would want an online community in the first place. When Gannett launched its online communities, the thought was that we wanted to be a big tent in the community for discussion of ideas, topics, et cetera. And indeed, it has worked out to be a good strategy, and I think it's appreciated within the communities. But you think of this nice, vital, civic discussion going on, ideally, when in fact what happens is often quite different. I mean, some of our newspapers have had to close down comments on particular stories, often crime stories, stories that might involve minorities, because of the racist comments.
There are cruel comments that are posted. For example, at one of our newspapers in a smaller community, an 11th grader was killed in a car accident, and you would think people would go online offering sympathy for the family. Instead, and who these people are I don't know, there were a number of people who went on and said, well, it just goes to show those parents weren't strict enough, they didn't have good rules, an 11th grader shouldn't be out at that time of night in a car, piling on comment after comment critical of the parents of the child who had just died. I mean, this is really outrageous behavior. So what keeps me up late at night is not so much fear of liability for libel, but rather concern for the editors who have to somehow chart a course that is good for the community, good for discussion, and consistent with the newspaper's mission, with these forums open to the public. And just picking up on what Barbara says, from my point of view, the question of how a news organization deals with this user-generated content for the most part really becomes a business judgment, a judgment about protecting your franchise, your reputation, your name, your community, rather than legal liability or the risk of liability driving the type of policing that these kinds of online communities or news organizations do. In response to the question of what the legal issues are that are out there, when it comes to third party content, it's really a matter of looking at where the edges of the immunity are. Eric mentioned the exceptions; that's one set of edges. And there has been some question about whether a right of publicity claim is intellectual property or not, and whether that falls within the exception, so there's that kind of issue. Probably the main edge of 230 immunity that comes up is the line between what is third party content and what is not. And that can break down in a variety of ways.
One is this: it's clear, the courts have said, that picking and choosing amongst content doesn't change it into your own content. There is some sense that if you as the website operator request a particular type of content (if you set up a site that was strictly for false and defamatory criticisms of private people, and that's really what you're doing), arguably you have sufficiently induced that content so that you too are partly responsible for its creation or development, which is really the legal test: are you responsible, in part, for the creation or development of the content? But there really haven't been many cases like that. A very interesting category for news organizations, which often have both employees and independent contractors or freelancers working for them, is this: I normally go in with the operating assumption that if you're an employee operating within the scope of your employment, and your employer is the website operator, that's first-party content, so there's no immunity for your own employee's content that's produced within the scope of their employment. To the extent there's law out there, it would indicate that if you're an independent contractor (and I think of it as being broken down according to the common law test between employees and agents on the one hand and independent contractors on the other), your content is third-party content. The case that most clearly addressed that, one of the early 230 cases, was Blumenthal v. Drudge and AOL, where, even though AOL was Matt Drudge's sole source of income at the time, in terms of the royalty payments it paid him to have a special version of the Drudge Report on AOL, Drudge's content was deemed to be third-party content.
A lot of my clients will ask whether they can go in there and edit the third-party content without sacrificing the immunity. My pat answer usually is that it's safest to follow an either-leave-it-all-up-or-take-it-down rule, and if you do edit, what you want to be most careful of is not changing the meaning of a post such that you actually have added the unlawful or tortious content yourself. The worst example would be taking out a "not" as to whether or not someone's been convicted of a crime, for example. I wanna interrupt; can I interject something on that? So that issue comes up a lot. We do a lot of training with online editors, and they always come with this misconception that if you touch it, you own it. And I don't know where that comes from, and clearly Section 230 doesn't say that. I would bet that Eric's view would be slightly different. I mean, you're painting a picture that's a very risk-minimization approach. That's a lawyer's job: to communicate to the client what the risks are and let them make a business decision. But there are a lot of news organizations out there, or a lot of community development people, that do get their fingers in the content. And I wanna explore that issue a little bit more. Obviously, if you materially change the meaning (someone puts a post up that says "David's not a murderer" and you take out the word "not," so it says "David's a murderer"), you've materially changed the meaning. That's at the one extreme. There's a lot of space in there, however, that might be worth exploring, and I see Barbara nodding, and Eric. Yeah, well, I think where you own something for libel purposes is where you've added something that's defamatory, or taken something out that would be exculpatory. But I guess, for risk minimization in a company with over 35,000 employees, we've said the safest thing is up or down.
But certainly for a blogger who has comments, et cetera, and can carefully curate those comments, I would defend up to the point of materially adding the defamatory meaning or taking out the exculpatory meaning. But can I just add another interesting twist? This exists for the forums that are on traditional news sites, these user-generated comments coming in on story chat or forums or what have you. What I'm finding is that culturally, within the newsroom, it creates a lot of disconnects, because you're basically talking, from a legal standpoint, about two completely parallel universes. Traditional libel law for print and broadcast functions on the premise that tale bearers are as bad as tale makers. If you repeat a libel, you are responsible for that libel, with a few exceptions, such as quoting from an official document or something of that nature. On TV, for example, if you put the microphone in front of somebody's mouth and they say something defamatory, you're responsible for it. And so I'm having a hard time now with some of the younger reporters coming in, who have grown up in the world of forums and chats and the internet in general, convincing them that they're responsible for everything in the newspaper, regardless of the source. And that is true legally. I mean, letters to the editor, advertisements, you name it: if it's in the newspaper, the newspaper's gonna be responsible for it. So I find that an interesting intersection of the generations, for one, but secondly, completely different legal frameworks all existing in the same newsroom and with the same staff, especially when you have an integrated newsroom that's doing both online and print. And Eric, you wanted to add something? Yeah, I wanna go back to this notion about the two polar extremes: being an editor of content and taking full liability for all the editorial choices, even the decision not to do something, versus taking a passive conduit style approach.
And I think that's how you were trained if you took a media law course or a journalism law class. That's how it was taught to you: that you get to pick. And you may not even get to pick the passive conduit route; that may not be an option for you if you're in the media world. But if you're going to try to fit into some kind of passive conduit approach, the only way you'll avoid liability is by doing nothing. And online, that's not really an option. We could talk about what it means to do nothing, but the reality is that's not an option worth considering. What we know happens is that if you do nothing with user content, it goes downhill fast. You have to do something to keep the conversation from going in the wrong direction. So now we're only talking about how much you're going to do. It's not if, it's how much. And I think it's an old way of thinking to say that the moment you've made the decision to intervene in this pool of user content, you're now automatically responsible for all of it. We run into that all the time, because that's the way media law used to work. But that's not the way it works online, and that's why 230 was passed: it was in response to a court decision that said specifically that. I don't know whether John Hart is here yet. Right after the Stratton Oakmont decision, which laid responsibility on Prodigy for editing comments because the court said they were acting like a traditional publisher, we obviously had to tell our clients about that development in the law. And John and I always joke it was the stickiest piece of legal advice we ever gave, because shortly after that, Section 230 was passed. It changed the law. It was passed specifically so that online community editors could edit, delete, and take down things that were offensive, and not, in so doing, take on the responsibility of the publisher.
But you're right, there are so many people who never got it out of their heads that that law changed. And I just want to reinforce that what 230 does is protect the editorial judgments of online publishers. If you are dealing with third party content, your editorial judgment toward it is exactly what the statute immunizes. And that's what's so counterintuitive, because in traditional publication, when you've got your newsroom, people are saying you guys are responsible for everything, whereas here, when you're doing the exact same function, that's what gets you the immunity. In fact, Section 230 actually has a second prong that goes beyond just the basic immunity for third party content. Even if, in attempting in good faith to make a story less offensive to your readers, you mistakenly made it defamatory, there's a second provision, (c)(2), which says you can't be held liable for any act that was part of a good faith effort to restrict or block access to offensive content. So it's a very, very strong set of protections. It's called a Good Samaritan statute, on the theory that when you're trying to do something good, you shouldn't be penalized for it from a liability standpoint. Another big misconception in this area is around notice: if a news organization gets an email that says, there's a post up there about me and it's defamatory and I demand you take it down, does the news organization lose immunity if it refuses to do that? Now, we've got a great case that answered that question, and Pat was involved in it. Actually, that is the Zeran case, which was the first case decided under the statute, back in '97. And many people who had been involved even in the enactment of the statute felt that it wasn't really meant to be an immunity if you were on notice.
The language of the statute is just: don't treat you as a publisher. And the notion was that there's a different set of rules for distributors. Those are like booksellers, libraries, and the like, who are sort of clearinghouses for third party content. And the rule for distributors, at least in the defamation area, really as a matter of the First Amendment, is that you can't be held liable absent notice. And so there were many people, very smart people, before Zeran was decided, who said, well, it's not gonna turn out to be an immunity if you were on notice, because that's not treating you as a publisher, that's treating you as a distributor. And that was the main argument that Ken Zeran's lawyers made in that case. And it was rejected: distributor liability is still treating you as the publisher. It's putting you in the shoes of having to make the decisions about whether or not to run stuff, and that's classic editorial, publisher-like behavior, so that's treating you like a publisher. Before the case was decided, I didn't give us better than a 50-50 shot that it would come out that way. But that has in fact been held very, very firm across the courts. A few speed bumps, but generally that's right, yeah. I would just add that when I am working with a website that receives a notice of problematic content, if it's covered by 230, we know the answer from a legal standpoint: we can toss the notice in the circular file and it doesn't change our liability one bit. But it does raise the question about the brand of the institution and whether, by publishing inaccurate information, it is doing harm to its readers. And so I pose the question back to the website: okay, the lawyers are out of the room now, and that's really good news, because now you can actually have a healthy conversation.
With that healthy conversation, what do you wanna do in order to manage your brand and the accuracy of the information you're publishing? Well, I think that's a good segue to bring Jeff and Bill into the conversation a little bit, because we've been talking about the center, and Pat and Barbara mentioned that what they worry about are the edges. So let's move to the edges. When you're talking about a news organization that's cutting its staff, it's doing a lot more activity online, it's generating these robust communities, and it simply doesn't have the people to moderate the discussions. Yet the discussions are very important to the organization's brand, its franchise, the way it's perceived in the world. Are there ways we can learn from crowdsourcing that news organizations are thinking about, or have been using, to try to manage these online discussions? And then we'll talk about what the legal liability might be for them. Yeah, this is actually a formulation from an executive at Gannett, but I think it's pithy and it's telling: newspapers have historically been essentially a monologue, with a little bit of dialogue taking place in the form of letters to the editor. And in the internet age, they've just begun to get their heads around the idea of being a dialogue. But what they generally don't understand is that readers don't want a dialogue. They want what this particular executive calls a polylog, which is to say lots of different voices. And more to the point, they just want the newspaper to be the room in which the conversation takes place. They don't always necessarily even care what the newspaper has to say. They just need somewhere in which to have this conversation.
Perhaps a news article serves to instigate this debate, or if it's something like the mom sites that you guys do, then maybe it's a bunch of moms who wanna talk about some issues that the moderator has raised, but they don't necessarily want or need the moderator to take an overweening role in the conversation. And that definitely tracks with what I see elsewhere. My mandate is to look at newspapers as one of lots of different verticals across industries, and it definitely tracks with the process that other companies are having to go through too, which is to understand that they're having to let go of control of the brand a little bit and accept that there's a process of co-creation going on, where these communities want and will demand to have a say. And is there a way to enlist the community itself in the moderation process? That's a standard formula at this point. Yeah, absolutely. Whether that works 100% of the time is another question, but yeah. Are you seeing any of that with the folks you work with? Well, it's interesting. I live in Williamstown, Massachusetts, and a group of us have just formed a nonprofit to start up a local online news community in Williamstown. And yesterday at lunch, we were talking about this question: well, what are we gonna do about comments? And one of my colleagues said, look, in a community this small, that will take care of itself, because you know who everybody is. And so I think the central issue is whether or not you allow anonymous comments. I would say generally most smaller websites don't allow anonymous comments anymore.
Some of the larger ones can get away with it because of Section 230, but I think that this issue of figuring out a way to create a sense of community is the answer. And I was just thinking here, as we were talking, that it's almost a little bit like the Scarlet Letter, where if you have a group that is nichey enough and close enough, the person who makes the out-of-bounds comment is gradually going to be shunned by the community. And so then that person has a decision to make. Do they wanna be an outlier and be shunned by the community, or do they wanna play nice? Bill, just to clarify, that involves a repeat-play situation, where everyone is interacting with each other in multiple iterations. Obviously, when you throw the door open to the web, you always have the risk of interlopers, and those people may or may not play by the same rules. That's why I think this is more viable if you're talking about a specific topical or geographical niche, where the people are interacting with each other on an ongoing basis. There also are some technological tools that are available. For example, in the Pluck platform, there's a report abuse button. I'm sure you all have seen that. And there are various options available for the website, but most will choose the option that says if there are three abuse reports on one comment, it goes into a holding area. And in the holding area, it will be reviewed by somebody from the newspaper or the TV station, and they'll decide whether to put it back or leave it out. But of course, these online communities are very vibrant, vital places. And so in some of our communities, we've seen polarization, let's say, of political views. And so if someone posts a comment that is conservative and the liberals don't want that to be displayed, they'll report abuse. And then pretty soon, all the conservative comments are in the holding area, or vice versa. I mean, so no tool is perfect, we've learned. 
So I wanna go back to a comment that Bill mentioned about having the reporters engage with the community as well. And some news organizations are doing that through blogs. Some are allowing their reporters to actually post comments and participate in the discussion. Some other news organizations say no, you're not gonna comment at all. We don't want you interacting that way. Do you have a feel, especially Barbara, is there a policy at Gannett on allowing reporters to engage in those discussions beyond the one piece they've done? They can, although, and I think this is universal throughout Gannett, the comments appear in a special coloring so that it's clear that it's somebody from the newspaper who's making a comment. And it's useful sometimes to get discussion going or to try to throw out another topic that might be productive for the community to discuss. But when we were first starting out, there were reporters who would do that, and then some thought, well, gee, that's misleading. It's not 100% clear that it's a reporter who's making that comment, which is a valid point. So in any event, we've been sort of finding our way. I think it varies depending on the type of story and the time of day and who has time to do it, but it can be very productive. Why the special? I mean, I think I understand what you're saying, but why the special ink? So, okay, let's say you live in Des Moines and you come to the Des Moines Register website, and there's a report about the school system, and people are saying, well, I really think the union negotiations are having a negative impact on the teaching and the community, this, that and the other thing, and the reporter goes on and makes a comment. And this has actually happened at night, from home. 
Perhaps a comment would be biased one way or another, or would send the conversation in a certain direction or another; it was felt that the people who were using the forum should know that that's a staffer doing it, as opposed to just some member of the community. I mean, there's no requirement under Section 230. Oh, God, no, no, this is just an ethical point. And really, actually, there were stories floating around, because every newspaper was adopting forums at about the same time, that reporters would be commenting on their own story about what a great story it was, and things like that. So yeah, there you go. So there was a little bit of that behind the special color ink too. Do you worry, as a lawyer advising clients that have reporters who are engaging in the sort of hurly-burly of online communities, that they're going to somehow get the organization in trouble? For example, when I was at the Post, we were always worried about the reporter, you know, we worked very carefully with them to get the piece out. It came out in the paper. Then the next day they're on the local news station doing an interview about the story. And they start talking about all kinds of other things. And all of a sudden you're worried, wow, we really squared the corners on this, and now all of a sudden it's all ragged. Do you worry about that, and what role, for example, do the lawyers play in that? Well, you can't sit next to them all day long. And so, you know, I just have to remind them that, and again, it goes to the point that Eric was making, in an online forum, we as an organization are 100% responsible for the comments that are posted by our reporter and 0% responsible for the other comments. And so, they have to be mindful of the fact that every comment that they make is one that we would have to defend both legally and that they would have to defend journalistically. Not that they would really want to go out on a limb on that. 
And as well, they wouldn't want to make comments that might reveal a confidential source or other things that, again, as you point out, you're very careful with in constructing a news story for print. And the hurly-burly of the conversation might get a little bit less structured. And mistakes can happen, but we've never had any, actually. How does it work at Wired, Jeff? Are you given any, is there a policy for you engaging with commenters on the pieces that you write? God, no. One, because the DNA of Wired is that we came up from, I mean, it was created by a bunch of geeks on the left coast. So we were part of that culture where, especially the early internet, I mean, it was coming out of the Whole Earth Catalog and, some very, the EFF. So it was very oriented towards open source and the absence of restrictions. But also because I think our readers would not expect that. And also because Wired is written by independent contractors. It's not written by staff. It's not like a newspaper. Well, let me just get at this. I mean, you're saying there aren't rules, but you're not gonna post something that's false, correct? Well, there are rules I follow as a journalist. That's all I was talking about. There's no policy. I mean, there's no policy that I'm following. It's like, Wired has never had to say, you who write for Wired should publish the truth. I mean, it's assumed that we will do our best. There's rigorous fact checking that goes on. We do have legal review, but I'm allowed, there's no prior review. So it doesn't sound that different to me then. Is it different though, Jeff, between the legal review that happens on the primary piece you write and then potential comments that you engage in with the... No one's ever drawn that distinction. And I guess what I'm mindful of is, at a lot of newspapers there's a big long debate of, can reporters tweet? I mean, that was never, I mean, that was a risible debate at Wired. We were like, what? Like, really? 
Like, that's even like a consideration that some of you guys are talking about? Like, of course you can tweet. Like, I am, you know, I'm Jeff Howe and I also write for Wired, and the readers are cognizant that there's, that both identities, it's different at a newspaper. And I think for a lot of good reasons, I'm just explaining why Wired operates the way it operates. Well, one of the things that has been much discussed of late are the ethical guidelines that AP and other news organizations have issued on social media and tweets. And the thought there is one that's, you know, existed in newsrooms for many years, and that is you don't want to tip your political preferences so that the reporting that you do will be undercut. And so some of the guidelines that I know people have made a lot of fun of are really designed to reinforce the notion that reporters are meant to be objective. And I know online maybe there's some skepticism as to whether anybody can be objective, but we're still trying to produce objective reporting that is not biased politically one way or another, and tweets and social media, to the extent that they might involve tipping one's political views, are, I think, still an issue for news organizations that are more traditional than Wired, perhaps. So this is something where I think we're seeing another facet of disaggregation between print publishers and their reporters. A lot of times reporters develop their own fan base, but the publishers haven't made it all that easy for the fans to follow all the stories that that particular reporter writes. 
And when I talk to reporters, I encourage all of them to set up a Twitter account, not because they might reveal new information about themselves that might undercut their journalistic integrity, but simply because this will be a way for their audience to find the stories that they write. And I say, you should have an account where every time you post a story that has your byline, you should provide a link to it. That way I can see everything that reporter is writing. And I've been watching, reporters have actually been doing that, and also they develop massive audiences that have been hidden by the fact that they're just part of this large organization that otherwise was obscuring their fan base. And so in some sense we're seeing reporters break away from the brand of the publication, developing their own fan bases, and that's actually becoming more powerful than the audience of the publication they work for. So part of the title for this panel is anonymity, and we've sort of brushed the subject a little bit, but now I wanna spend some time diving into it, because a lot of what we're talking about in anonymity underlies both the challenges, I would say primarily the challenges, but some opportunities as well. And Eric, you mentioned earlier that if you're going to allow, it goes to Bill's point about the community, to sort of create through norms some self-policing, you need some form of consistent identity. And that's a big issue for news organizations, whether in the first instance to allow anonymous commenters at all on the site. It seems pretty clear, unless the panel needs to disagree, that there's no difference in the legal liability a news organization has for allowing online anonymous comments or not. It's a question of what kind of a community do they want, and then perhaps some other journalistic decisions. Can we just clarify our terms, by the way, because the term anonymous online is always filled with peril. 
What I think you mean by anonymous is where someone is able to not attribute the source to their name. But actually converting that to true anonymity would require a substantial amount of work on the part of the commenter to mask their tracks so they could never be traced. Otherwise, a lot of times we're really talking about pseudonymity: they have some type of masking identifier that the comments will be attributed to, but someone doing the appropriate work would be able to figure out the ultimate identity of the person who's posted. And so I wanna make sure we're talking about the different facets. I think a lot of the time when we talk about anonymity online, we're really talking about pseudonymity, which means we're not talking about anonymity, and in fact people can be identified for the contributions they're making. Definitely. Just to be clear, the pseudonym is often, in fact usually, anonymous. I mean, I agree with you, right, but that is what it looks like if I'm on the forum, is that right? It can be totally, it could be, that it could be. Or they've made up a name. It could be an alias that they pick, or it could be an IP address, and it could be their IP address, and each of those would be called anonymous by, I think, lay people, but I don't think that all of them would have the same implications. Yeah. No, it's a great point. I mean, it's certainly true that for the most part your identity can be obtained, even if you've gone to significant lengths to try to block it. And there's a lot of law developing about what are the circumstances in which that can lawfully occur. 
Tying it to Section 230 a bit, in my mind there's a trade-off. You can't have the website both saying we're immune, we're not responsible for third-party content, and at the same time blocking the ability of a potential plaintiff, who has a real cause of action and really needs that information, from being able to secure the online entity's cooperation in turning over information to identify the tortfeasor, if it really is a tortious user. I think there should be lots of protections in place, and the courts are putting them there, to prevent that from happening for the wrong reasons and not in the furtherance of a real good-faith cause of action. You're not saying, however, Pat, that there's some obligation on the news organization to create those records in the first place and maintain them for some period of time in order to provide them in response to a request. I think that's right. You don't have to keep the registration information, but if you've got it temporarily and you receive legal process, or a notice that you're about to receive legal process, at that point you may end up having an obligation to keep it temporarily. But it's often the case that even if you haven't kept registration information, you still have, deep in your web logs, traces of information that can be used to track down, it may take several steps, several rounds of subpoenas, to find who the speaker was. But Pat, just to be clear, a website that flushed its server logs instantaneously, that did not capture anything about the usage of its servers, would still be fully eligible for 230 for user-supplied content. Absolutely, absolutely. In my experience, there aren't many websites that do that. No, but they could, and 230 contemplates that, and so it does set up the potential where the website says, look, leave us alone. Go after the bad guys. Oh, but we can't help you find the bad guys. Sorry, not our problem. And 230 does set that up. 
And in my opinion, that's actually a part of the statutory brilliance, not a defect in the statute. I don't disagree at all. In the Zeran case itself, no one was ever able to find the identity of the true author of the material. So if a site does have that information, what's its obligation to turn it over? I mean, is it somebody who makes a phone call and says, I don't like what this commenter said about me, can you identify them? Is that typically enough? When do most news organizations turn that information over? I think it varies from news organization to news organization. At Gannett, our policy is not to turn over the information in response to requests. And you know, you ask what keeps me up at night. We get dozens and dozens and dozens of requests for the identity of posters. And before today's session, I thought I'd just tote up how many formal subpoenas we'd received and how many we've fought in court, because we do that as well. And again, against the backdrop of hundreds of requests, we received 21 subpoenas. Of those subpoenas, five were from individuals who felt they'd been defamed by a poster. Thirteen were from the police or the government or a grand jury. And three were requests for the commenter's identity because it appeared that the commenter had information about a civil lawsuit and could give testimony in that civil lawsuit. And of those 21, 15 were withdrawn after we jaw-boned the lawyer a bit about the right to anonymous speech and such, and the fact that we would make a motion to quash the subpoena. In six of the cases, we actually made motions to quash the subpoena. We had two victories where the subpoenas were quashed. Two more were withdrawn. And two resulted in decisions which I like to call half a loaf, because those are decisions where the court said, well, we're gonna make you turn over the information, but that's because the plaintiff has met a higher test, a higher threshold, that he or she is entitled to that information. 
So all that's to say that at Gannett, we think we owe something to the posters who come to our site and post anonymously and expect to remain anonymous. That said, there is a limit to that. I mentioned on the call the other day, the day after the Virginia Tech shootings, someone went on one of our websites and said, Cho, who was the shooter at Virginia Tech, is my hero. I have guns. I can't wait to go to school tomorrow. Show of hands. How many people think you had to protect the identity of that poster? Yeah, I mean, so they're easy. Did I see one? There was one. I won't point the person out. Oh, okay. And you'll have to say what's wrong with that. Barbara, can you clarify, you said 21. 21 in what time period? Well, probably since 2007. So in the last three years or so. That's not that many, given the size of your organization. Well, as I said, we get many, many, many, many requests that we just talk the person down on. And none of those were copyright related? None. You didn't get any? The big online organizations, Google, Yahoo, AOL, in the past, have had whole staffs devoted to processing subpoenas from both law enforcement and private parties. On the whole, the first point of guidance, I think, for a news organization on what to do in response to requests for identity or other user information, or subpoenas, is to look at the privacy policy that virtually any organization is going to have, in which it sets out its policies. Those will likely be viewed by a court as setting up obligations that you've undertaken to honor. And so a lot of care needs to go into how an organization writes that policy, in terms of whether it has left itself the freedom to do what Barbara just described in the Virginia Tech situation or not. And there ends up being a need to pick and choose also where to draw the line and fight the subpoena. 
And the AOLs, Yahoos, Googles of the world have, in many cases, when there's a sense of abuse of the process going on, often these John Doe suits where somebody, with no judicial supervision at all, just sets up a lawsuit and uses it as a platform for issuing subpoenas. That's a potentially extremely abusive situation where there's no one else to push back but the recipient of the subpoena. In many cases, those large organizations do push back. But I think in the vast majority of the cases, they just don't have the wherewithal and the stake in it to actually push back every time. They do try very hard, and I think news organizations do the same, to give notice, if they have the knowledge of who the real person is behind the pseudonym, to give that person notice so he or she can take their own steps to protect their identity. And there have been a number of cases, although not a huge number, where people have had the wherewithal, they've been able to find lawyers, through folks like David or others, to represent themselves and resist that. Jeff and Bill, is it important that there be spaces online for people to engage in anonymous speech? Why should we care if news organizations start to decide that they're simply not gonna allow anonymous comments? Is that a problem? Yeah, I think it's a problem. I mean, from broadsides and the Federalist Papers right up to current times, and wanting to protect whistleblowers or give them a place to be protected from retribution, we need pseudonymous comments. But I think if I were running a news organization now, I would try to figure out a bifurcated system where there would be a place for pseudonymous comments, but also a place where people who aren't comfortable experiencing that kind of comment can be in the same sort of environment as they were in the letters-to-the-editor world of the print newspaper. I really like what Bill has said. 
I mean, I think one thing that's hurt us is the conviction of how important anonymous comments are, for all the excellent reasons Bill's listed, and that is true. But there's just, I feel that that zealotry ignores the fact that anonymous commenters have really poisoned the well. And we have to figure out a way that the majority of people who just wanna come and talk about an article, that their reputation needs to be at stake to some extent, so that they can be held to account. Because otherwise, the comments are just... And I would add, I'm curious if you do your own self-reflection, because I've looked at my RSS reader, and in the end I know the identity of every single author in my RSS reader. I've never been able to stick with a pseudonymous blogger, for example. It's not that I'm philosophically opposed to it. I think it's great. But almost always, knowing the source of the content and why they're saying what they're saying is essential for me to give it credibility. Actually, a number of newspapers around the country are experimenting with comments on their Facebook fan pages. And those comments are attributed, because it's within the Facebook environment. Now, I assume somebody could have a fake Facebook page, but what I've heard is that the comments on the Facebook pages are much more thoughtful, and the conversation that develops is more coherent and actually productive, than the anonymous comment discussions. So, and do you want me to mention what the Washington Post has decided to do? Yeah, I mean, the phased, tiered implementation that the Washington Post is going to be having is, I think, an excellent example. Yeah, I think it grows out of the same kind of frustration with the uncivil discussion, rude, obnoxious behavior that sometimes develops online. And as I mentioned, we have this three-strikes-you're-out tool. Apparently the Post is adopting something even more nuanced, where readers will be divided into tiers. 
So the top tier would be trusted commenters, the middle tier would be, I guess, comments that hadn't been vetted quite so carefully, and then the third would be purely anonymous, first-time posters, et cetera. And so as a user of the site, you could choose which discussion you wanted to take part in. And for first-time users of the site, the comments that you would see first were those from the trusted commenters. So you'd have to dig really deep to see those anonymous posts that can get sort of rowdy. And the Post has had a bozo filter for a long time, which I think is a wonderfully named way of allowing the people who make these really outlandish comments to think that their comments are still up on the site, but in fact they're the only ones who see them. Everyone else who comes to the site doesn't see the comments. We have that too, it's part of the Pluck technology. You better keep it a secret. It's part of, I think, the 1990s way of thinking that when you're dealing with user comments you have two choices: either don't do them, or every comment gets treated equally and you have a dumb sorting filter like chronology. And I think we've learned that that doesn't give the right results, that not every comment is equal, and that you do have to make distinctions among comments. Ideally you can make them on an automated basis rather than manually, but that's the only way, I think, to allow people to have their say and not get overwhelmed by the junk. And if you think about it, the real-world analogy is what you would hear in your private club is different from what you'd hear at the supermarket that you go to in your community, is different from what you hear outside the public bathroom on Main Street. 
So before we get to Q&A, there's a sort of wonderful gift we've been given, a little bit of a teaching moment that brings a lot of these issues to the forefront, involving the Cleveland Plain Dealer. I'm not sure how long ago, but they noticed that there was a commenter on their site with the moniker Lawmiss who had been posting for quite a long period of time, and was recently posting comments about the mental state of one of the Plain Dealer reporters' relatives. And that reporter then went into the system, the registration system, determined the email address that had been used to register the account, went and looked back over the history of this person's commenting on the site, which had been very extensive, and saw that they had been posting a lot about legal cases in the Cuyahoga County court system. They put two and two together and determined that the person who was using this account was in fact a judge on the bench in Cuyahoga County, and that those comments were being made about parties in cases in front of her, about the lawyers in the cases. And I would imagine, I don't know this from the reporting on it, that the reporter took that information to their editor and said, do we have a story? And the editor said, we've got a judge here who's potentially violating her obligations as a judge. I think we need to report on it. And they did, and they outed her. And that created quite a brouhaha. Up until about last Tuesday, she had been saying that her daughter or a family member had been using the account. That's the classic thing you see in these cases. It's, anybody can use it and it isn't me. 
Then on Tuesday she filed a lawsuit against the Cleveland Plain Dealer claiming breach of contract. And so it goes to our question about what the privacy policy permitted the news organization to do with the information. But before we dive into that, I first wanna tackle the very first issue, which is, when a news organization has the opportunity to examine the records associated with commenters on their site, should they? And then once we answer that question, once they get information like that, what's the decision-making process that goes into whether or not to make the information public? I'll take a quick shot at that. I think the answer is it depends what their terms of service and privacy policies are with their users. Because Barbara found a reference to the filed case, I read the lawsuit, and the Plain Dealer's policy says that they will hold your stuff confidential except in cases of legal process and other official things. It uses the word official. So I think they're gonna end up in the position of having to argue that what they were doing was official, in the sense that they were starting a kind of official internal review of what was going on. But I think that probably there's gonna be some review about that language now, there and at other news organizations. I think that you're just gonna have to have very explicit privacy and terms of use policies, and stick to them. I don't know why the news organization, why the newsroom, should have special access to that data that other people don't have. Well, this was not a Gannett paper; it was Advance Publications. And shortly after they wrote the story identifying the commenter, they did change their policy. The newsroom no longer has access to the information that's collected as part of user registrations. So they've sort of answered that question, Bill. Going forward, they're not gonna be put in that position. But so we've got two interests in tension here. 
One, we've got this idea of maintaining a community that can rely on expectations, expectations around their privacy. And on the other hand, you have a journalistic obligation, in a sense, or certainly an interest, in reporting on a judge sitting in the community who may be violating her oath as a judge. How do you resolve a tension like that? Does a lawyer get involved in that, typically? Well, we had a situation somewhat similar, I think distinguishable in a couple of key respects, at our newspaper in Pensacola, where the newspaper got a tip that the author of some comments that had been posted about the school system was actually an elected member of the school board. And the comments that this particular poster had put on the site were uniformly racist. So for example, when there was a story about the test scores falling, this poster, who went under the name, by the way, Godzilla, went on and said, well, if you compare those scores for the white districts with the minority districts, you'll see where the real weakness lies, and so on and so on. And consistent and repeated racism in these comments. And so the newspaper called the school board member to say, have you been making these posts? And the school board member said, you caught me. And at that point, they were about to publish the story, and then the school board member called back and said, well, actually, I'm not Godzilla. I just think a lot like Godzilla. So they were like, what? So somebody said, well, let's just check to see who has signed up for this, talk about anonymity, who it is that's posting these comments. And they did indeed have the email address, and it was the same email address that the school board member had used when writing op-eds for the newspaper. So that cleared up the confusion over who, or what, Godzilla was. But in any event, they did publish it. And a huge outcry developed in the community. 
I happened to be the ethics officer for our company, and we must have gotten 30 ethics complaints within a two-day period from people saying that the newspaper had acted unethically in unmasking the school board member. And there was even a CJR article on it. And the CJR article concluded that it's a kind of a murky area, but I was pleased because the article looked at our terms of service and said what they did was justified under the terms of service, because ours were written a little bit differently than the Cleveland Plain Dealer's were. But in any event, I think it's a very gray area. But in that situation, the editor of the paper, I asked him when the Cleveland Plain Dealer situation arose, what do you think about your decision to unmask Godzilla? And he said, I've never thought twice about it. We had news under our roof about an elected public official. The conduct that was being displayed on our website was reprehensible for an elected school board member, and we felt the community should know about it. But I think it's a good example of the clash of cultures, values, et cetera, that can develop when you've got a traditional news organization and an online community under the same roof. And we saw a similar set of facts involving the St. Louis Post-Dispatch as well, a couple months ago, where they had a comment that violated their terms of service, was uncivil, and the online editor who moderates the comments looked into the registration system, saw that the email address came from a local school, I think it was a teacher, and notified the school administrator that this person had posted this inappropriate comment, and then the person was fired. So I mean, news organizations are grappling with this. That almost seemed like tattling to me. It wasn't for news gathering or news. Public interest. Yeah, no public interest there. 
So returning to this idea: obviously we're lawyers, and we're sitting thinking and advising our clients on what they could do in a situation like that. The first thing we look at is the terms of service, to determine what limitations there may be. But I would imagine that a lot of news organizations right now are examining their terms of service to make sure that, going forward, they're giving themselves the ability to do that. How do you advise clients on that topic? I mean, on the one hand, you wanna be transparent to the users of your site about what you're gonna be doing with the information. And you wanna make some promises to develop a robust community, but you also wanna give the news organization flexibility. Yeah, I mean, normally my advice to these sorts of website operators is to try to consider in advance the situations in which you may need or want to disclose, and make sure you haven't promised more than you're gonna actually back up with your behavior. And once you've made the promise, I think you really need to stick to it until you get ordered by a court to cough it up. My advice generally is in the nature of broadening the client's statements about where they will disclose or where they may disclose. For example, not to say we'll turn it over in response to valid legal process; I'll take out the word valid, or otherwise. I want to really be thinking defensively on behalf of my client, to not get into a situation where I'm gonna have to be coughing information up where I arguably have promised that I won't. I think this is a fox-guarding-the-henhouse problem. We don't run into this in most other circumstances where you have online publishers who are operating online commenting functions. 
They really don't care about the identity of the people, and it would never occur to them to dig in and ask who are these people and what are they doing. When they get the legal requests, when they get the subpoenas or the court orders, then of course they deal with it. But when you deal with a news organization that thinks everything that is news needs to be reported upon, you have a fox-guarding-the-henhouse problem. I mean, I really think the appropriate response is to set up a little bit of church and state between the online community and the news gathering function. It just seems like otherwise you have to say everything's fair game, because that's the way it's going to be treated in practice. I mean, that was the problem here with the Plain Dealer. You can see what their immediate reaction was — to set up, apparently, from what's been reported, the church-state wall. The problem is that once you, for whatever reason, good or bad, mistake or not, have that information and now you know it's newsworthy, now I've got this overarching obligation to my community. When you get a tip, I see that as entirely different than when you get it by someone rooting around in their own database. And so if you set up that wall properly, then the news gathering people shouldn't have any tainting or conflicts. As a reporter, let me clarify. If the tip has come in, then can I, as the reporter, access that data to do that research? No, right? You shouldn't be able to. I agree, I agree. And I think this — I mean, this isn't from a legal perspective at all. It's just from someone who's been active in online communities for a long time. It sort of gets to this kind of hubristic idea on the part of newspapers, I think, a lot of the time: well, you know, we built this house, so we decide what the house rules are. Well, you forget that there are houses all down the block.
Everyone's gonna go, you know what, there's a lot of places in Cincinnati to hold an online forum, and they'll go somewhere where, like you said, most online publishers don't care and they'll offer very generous, protective terms of service. So I think that newspapers, as a competitive, strategic priority, need to think: let's protect these people, let's make them feel valued. Someone over at the questions — and Lucy, did you? Yeah. Oh. Okay. I beat you to it. So go ahead. Maybe you can push the button on your microphone, the red light will come on; just make sure you push it again after you're done. Hi, I had a question about one other potential legal consequence of allowing comments on newspaper website forums, which is the potential use of user comments as evidence of damage to reputation in a defamation action. And I've been involved in a number of defamation lawsuits where the plaintiff has sought to introduce evidence of user comments, not for direct liability for the publishing of the comments, but to show the damage that the original news article caused to the reputation of the person discussed in the article. And I've seen cases where those have been allowed in, notwithstanding a Section 230 defense, because you're not seeking to hold them directly responsible for those comments. And while there are other evidentiary objections which can come up — authentication, hearsay, that kind of thing — sometimes they come in under the reputation exception to the hearsay rule. So I'm just interested in the panel's experience with that issue, or whether that's something that you've faced. I have not seen that come up. Within the language of 230, the question, I suppose, is: is that use a use that's treating the website operator as the publisher or speaker of those comments? And I haven't thought about it hard, but I think it probably isn't.
I'd like to be able to argue to protect the news organization there from even that use of the speech, but I have a feeling that that's gonna be tough to sustain under Section 230. I mean, that information would have been available, if you could find it, even in other circumstances. It's just that it becomes codified and easier to find when the online comments are provided by the publication itself. But in the end, in my mind, that's just an adjustment for the liability that was created by the defamatory statement. In other words, if you've got the defamation, this just sweetens the pot perhaps, but you need to attack it at the source. I haven't seen that, but I think I agree with Pat and Eric on that. I have a question. This conversation seems to have been focused on US law. Thinking about third-party content from abroad, how concerned should organizations creating online communities be about how other countries treat third-party content? I know sort of vaguely that places like Ireland or maybe the UK have different perspectives on the obligations of online communities. Is that something that online communities in the United States should be focused on? Or is this merely sort of an academic question at this point, or would it be something more concerning in the future? This is a real, live, important question. In general, most other countries' laws — there are many exceptions, but generally — are nowhere near as protective of the website operator as US law is. And that was true even before Section 230. We have the strong First Amendment protections and all that goes with that. And since, when you publish on the internet, you're publishing or disseminating universally, you are exposed potentially to liability in other countries.
And there's a lot of analysis to be done as to whether, for example, a foreign judgment, gotten under laws that are not as protective as here, could be enforced in the United States. That helps if all of your assets are here in the US, but if you're a multinational company with assets in lots of places, then you really are sort of stuck having to conform to or live by those rules, or choose not to do business there. Those are the kinds of things that Google is having to think about in China. It's a very complicated question, and it's coming up more and more as I'm seeing it. We've definitely seen cases involving US publishers being held liable under foreign laws when they didn't actually publish in that foreign country. I'm thinking of Dow Jones v. Gutnick as one of many examples. And I wanna sharpen a little bit what Pat said. When I deal with the Silicon Valley companies who have only US offices, we just basically take a carte blanche approach. It's not our problem to worry about international law, because there's not much that a foreign country can do to make us pay for it here in the US. Pat could probably sharpen that statement further and say, well, it's not so clear, but for the most part, we take that approach. When companies launch internationally targeted versions and set up operations on the ground in foreign countries, they have to comply with local law. And so we see these localized versions of services. And from my perspective, it's a big decision on the part of an internet company to decide to do a localized version, because they're now basically embracing the laws of that country, and they have to build a system that complies with that law, oftentimes with very different DNA than the US site. So they have to just think differently from the get-go about how they architect that site.
So a lot of times what I see with companies is that they stay US-focused, perhaps a little bit inordinately long, because they don't have the resources to go in and properly localize services for these foreign jurisdictions, and they have to marshal the capital to be able to do that. Sam? This is a question for Eric. I want you to elaborate on this comment you sort of made in passing, because I just find it really interesting. You mentioned that you thought that Section 230's lack of a records retention provision was intentional, and that it was brilliant. And I'd just like to hear you explain why you think both of those things, especially in light of some maybe unfortunate situations we might see, like the JuicyCampus website, where they did tend to simultaneously encourage defamatory speech, hide behind Section 230, and then not keep very good records. Yeah, I didn't think I was gonna get away scot-free on that remark, even if it was said flippantly. And if I said it was intentional, that might have been an overstatement. I didn't mean it that way. I don't really know what the drafters thought — that's something we'd need to do a little bit of unpacking on — but it was Congress, so we know the degree of intelligence that went into the overall design of the statute. But the reason why I'm a fan of 230 covering even sites that deliberately do not retain records and flush them is this: what 230 has really done — and a lot of what we've been discussing today reflects this — is allow a great degree of experimentation on the part of online publishers to figure out the best way to deal with third-party content. There's a full menu of options. That's not true in other countries. In other countries, the liability regime is locked into a set of best practices: this is the way that you have to handle user comments. You only have one choice. Here, we have a full range of choices, and I think we're still learning about the best choices that might be available.
It might be that there's more than one — that we don't simply have the model we were talking about indirectly here, that you have to have some kind of tight filtration of user comments to make sure the discussion doesn't get out of hand. You can do that under 230 and you're insulated. There might be communities where the best solution is to let it be freewheeling, to let everyone avoid being responsible for their words. I don't know that I would choose to participate in that community, but I think the fact that we can have that community in our system actually gives us a lot more competitive advantage overall, compared to countries where that's not an option. I would add a cautionary note about that. I mean, I agree with sort of let a thousand flowers bloom, and that's great, and I'm a big fan of free speech to the hilt. But this is Congress, and Congress can take away what it gives. And I think if most websites were suddenly flushing all their content in order to protect tortfeasors on their sites, I'm not sure how long Section 230 would last. I agree with that, but I would also point out that the marketplace has the potential to punish those people. JuicyCampus has gone away, by way of example. Maybe there will be other sites that have decided to allow a wild and woolly discussion. The marketplace is gonna drum some of those people out, and so we'll only have that circumstance where sites decide it's in the best interest of their community and the marketplace can support it. And from my perspective, that's a healthy part of the continuum. Do you think a site like Unvarnished, which is in beta now and which has been written about a bit in the last few days, is so at the edge of what's reasonable that it will invite changes in Section 230? What Unvarnished is purporting to do is set up a website where you can make any comment you want about any other person, and it stays up.
It's like LinkedIn gone rogue. That's a good analogy. And so what I worry about is that that will be the exception that will get folks in Congress saying, geez, we need to amend this 230. I think there are a lot of people worked up about Unvarnished, and we have to see if it's viable in the marketplace. So, you know, let's put that aside for a moment. Just to be clear, I would characterize Unvarnished a little bit differently. Unvarnished is a consumer review site. It allows coworkers to evaluate each other, and that's a piece of information that actually is extremely difficult to get today. If you look at the ways that you can learn whether someone is a good coworker, that information marketplace has effectively collapsed, and Unvarnished offers the potential that it might actually revitalize a market that has been destroyed by people being concerned about their liability. Now, Unvarnished is going to be a pseudonymous site. People will have some traceability to their remarks, and because of that, it's not clear to me that people who at least understand the law will say things that they should not say on there. In other words, anyone who goes off and rips their coworker a new one might very well come to regret that decision. That might not be a good, sustainable model for Unvarnished or its contributors. But from my perspective, Unvarnished is a logical consequence of this experimentation. It's a marketplace problem that maybe we need someone to solve, and there would be no way to solve it but for the immunization of 230. We wouldn't even have this discussion except for the flexibility of the statute. But I'd come back to the marketplace. I remain unconvinced the marketplace is going to value the service at a price that will allow it to sustain itself. But if it does, then look at what a great win we got. We actually solved an information dilemma. We saw a marketplace collapse around job references or coworker evaluations, and maybe we found a solution for it.
That would be a really powerful thing. I was wondering, almost on the same note — I think what's interesting is there's kind of a gap in protection right now for victims of certain things under 230: harassment or defamation or even death threats. So if those things are posted online in a comment section, on the one hand you could have a site that doesn't keep logs, and there's no way to find out who posted it. But even if the site does keep logs, let's say the site doesn't have an option for a user to delete his own posts. So even if you could take a case to court, win a harassment or defamation suit, and compel the poster to try to take his post down, he can't do it successfully. And now you also can't force the site to take it down, because of 230. Just wondering your thoughts on — Just to clarify, we did see that exact example. This is the Blockowicz case. It involved Ripoff Report, which, by its own terms, is designed to have people kvetch about vendors in the marketplace. And their policy is very clear: they do not remove comments that have been supplied by users. And so we actually had a situation where one of the posters went to Ripoff Report and said, please take it down. And Ripoff Report said, not yet. And now we have the circumstance where 230 might immunize that choice. The poster's asking for it to be taken down, the harmed victim is asking for it to be taken down, and the law seems to suggest that it still doesn't get removed. So I just want to be clear: this is not just a hypothetical example. We have a real-life case where that issue has come up. I have things to say about it, but I've already done too much talking. Well, I think it's a good question. After a class I taught on Section 230, I had a student come up, and she was born in China. Her family had moved when she was just a little girl.
And her father had been involved in a situation where there was some bad reporting on his conduct — I mean inaccurate reporting. And apparently it's still out on the internet, and she just started bawling. And I'd been sort of, what's all this grumbling? It's the internet, we've got Section 230. But that really brought it home to me that we really can impact lives. And I've talked to a lawyer at the New York Times about the same thing. I don't think you'd find a news website today — unless it were of the type that Eric just described, where their whole reason for being is that they promised not to take down comments — I don't think you'd find many websites that wouldn't be willing either to think about taking the comments down or to take comments down, and perhaps even false reporting. And just to be clear, I think Ripoff Report is extremely unusual as a site that refuses to take down comments at a user's request. We find very few websites like that. So in practice what happens is, once a plaintiff successfully gets some kind of adjudication in court, almost invariably the service provider then will do the courtesy, whether they're legally required to or not, of removing the content. I think you're right that that's a very low percentage, but I think those are the ones we're most concerned about. Like the AutoAdmit case, for example, where, as I've heard the story, some favors were called in and it got taken down. But presumably they might not have done it had those certain favors not been owed, or something like that, you know? And like I said, it's a low number of sites. Don't forget, the complaints in AutoAdmit hadn't reached a legal adjudication. As I said, service providers, once they have an adjudication that something is defamatory or otherwise illegal, almost always intervene. It's something like Ripoff Report that stands alone, and I think you would be hard-pressed to find too many other examples of that.
As for whether or not we should do something special for those cases, I think that's a discussion we could have. This is a question for the lawyers on the panel. Can you square the lack of a requirement under 230 to retain records with the requirement under federal discovery law that if you become aware of the possibility of a claim, or of a claim itself, you must retain all electronic records? I don't think there's a conflict. If you're in a situation where the discovery rules or the rules against spoliation are in play, they apply just as much to your web logs as they would to anything else, I would think. That can often be a difficult line to know where you are, but I'm not aware of a special protection against spoliation here. Well, I'm thinking of a post that you look at and you know right away that it's either an invasion of privacy or a defamation, so you have the likelihood of a claim. Doesn't that fit under the rule? No, I don't think merely seeing what might be a tort does. And I've given that some careful thought and looked at the cases. It's not just that it might give rise to a claim, but that a claim has been asserted that you think might actually go to court. The threshold is higher than just, oh, maybe somebody might sue over that one day. I think you gotta be careful about assuming that you can make good, accurate assessments about the legitimacy of content from an internal evaluation, just by looking at it. We saw, at least in the Viacom versus YouTube lawsuit, that YouTube was not in a position to decide what was a copyright infringement or not, given the fact that there were all kinds of explanations for why the videos were on their site, even though it looked like copyright infringement. Some clarification on 302 — I mean 230, sorry.
It sounds as if you said that wire service reports, if you publish those online, and freelance reports, if you publish those online, would be third-party content and therefore not subject to the same rules as the information that would be in a print newspaper. Is that correct? That's what I said. I hope my co-panelists agree with that. I would agree with that. And why — does that make any sense? Because in a newspaper, the freelance content — This is good news. You should embrace this. This is good news. I don't think you're asking that yet. Are you saying if somebody were to — because it'd be copyright infringement if you didn't have the right to do it. I don't think you're asking that. No, I'm saying if I were a freelancer and I were published in the newspaper, and I also published something online: what's published in the newspaper would be subject to libel laws, but if it were published online, it would not be? No, no, no, no. If you're a reporter for a newspaper and you're basing your — No, I'm a freelancer. And this is a question of whether you, the freelancer, is liable, or whether the newspaper is liable? Either. Well, the freelancer is gonna be liable for what he or she wrote, either way. There's no protection. It's first-party content across the board. Even if they got a contract — if you got a contract saying that you would not be responsible for any — Someone else can insure you against that, but that's just insurance or something like that. Well, you know what, I think I take a more common-sense view of that issue than you do, Pat. I mean, not that yours doesn't make sense — we were talking about this earlier. Thanks, Barbara. In real life, there are these theories floating out there that if you contract with a freelancer to do work on your online site, then you're not responsible for that freelancer's work.
And indeed there is a case — and maybe it was your case. Was the Drudge case yours, Pat? Yeah. Matt Drudge was a blogger for AOL, and there was a suit, and AOL was not responsible; he was a freelancer for AOL. Paid quite well. And paid quite well. But you know, I feel that if we hire a contractor and instruct that contractor to do this, that or the other thing — to write a report on a certain subject — and that report is posted on our website, I think it'd be a heavy lift to get 230 to apply to that type of situation. And so I think in the common-sense world, the newspaper would probably assume responsibility for your work. Okay, because they have with work that I've done — at least the Washington Post has. They have not. Have. Have, yeah. Yeah, well, we would too. My other question is, say Craigslist, in terms of advertising — they're not responsible for third-party ads. But if that same advertising were in, say, the Washington Post ad section, print or online, it's gonna be treated differently? Yes. Come on. It's the exact same ad copy. Exact same publisher. What is the logic of that? Well — I hate to cut this conversation off, but we're running out of time. The fact is that Section 230 is a statute that has radically changed the common-law liability around this topic. So those who took media law courses prior to 1996 learned a body of law that still exists in print and in broadcast, but online it is totally different. So you can't take common sense and try to reason your way through. No, no. No. I would say that there is enormous common sense behind the rule, as David no doubt agrees. I like it, but it doesn't make any sense. What is the common sense? What is it? Well, it has to do with the huge quantities of information that are flowing, and what's possible, and whether you can have something like the internet if you don't have a rule like this.
Look, this is a subsidy that Congress granted to online publishers as opposed to print publishers. We could question the value of the subsidy — that makes sense. The reality is that we've seen massive growth in the online publishing world, protected in part by the subsidy. This is all, in my opinion, good news. But so might the print publishers say. All right, with that, we're gonna have to close. Unfortunately — well, we have another panel, and you can ask them these questions. They may or may not be able to answer them. But I wanna thank my panelists, and thank you all.