Good morning, John. In August of 2019, The New York Times published an article titled "Why Hate Speech on the Internet Is a Never-Ending Problem: Because This Law Shields It." The law in question was Section 230 of the Communications Decency Act. Later that day, they published the following retraction: "An earlier version of this article incorrectly described the law that protects hate speech on the internet. The First Amendment, not Section 230 of the Communications Decency Act, protects it."

Now, this is kind of funny. It's also kind of not, especially if you know the long history of people misunderstanding Section 230 and what it means. It is very difficult for all of us to figure out how to adjust to the power that the internet wields. And when we're doing that, it's pretty easy to look toward what's probably the most important piece of legislation in the history of the internet, which was passed as part of a larger bill when Mark Zuckerberg was 11 years old. And then, when he was 12 years old, almost all of that law was declared unconstitutional by the Supreme Court. But they left one section, Section 230, which says, among other things, this: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."

To explain: right now, I am the "other information content provider," and YouTube is the "provider of the interactive computer service." And so, if I slander someone, or try to recruit someone to my terrorist cell, or otherwise do something illegal, you can sue or fine or imprison me, but you can't hold YouTube liable for what I said. This thing, Section 230, is kind of what made it possible for the many-to-many internet thing to exist, which is good, but also bad. It's the thing that we love about the internet, and it's the thing that we hate about the internet.
Right now, the Supreme Court is hearing a case against YouTube that argues that YouTube is not liable for the content of the videos on its platform, but that it is liable for its recommendations of that content to people. If the YouTube algorithm recommends terrorist recruitment videos to YouTube users, then it isn't just hosting content; it's also kind of creating a new kind of content, which is the recommendation. And that seems a little bit right to me.

The problem, though, is where do you draw the line for what counts as a recommendation? Is a Google search a list of recommendations that Google is making to me? Does that mean that every internet platform that has a recommendation system of any kind, meaning all of them, is liable for what it recommended? And if they are, does the internet work at all?

It looks like the Supreme Court will not find YouTube liable in this case, which I think is good. But the reality is that there is something of a difference between the front page of YouTube being algorithmically generated for each individual user and me searching for something and getting recommended results. And maybe there's a difference between both of those things and ChatGPT creating content that I will then read, content that was nonetheless created by a machine. There's definitely a way to see recommendation as creation. I just don't think that a law passed in 1996 could have anticipated that. And actually, I think that that law had a tremendous amount of foresight and has served us fairly well.

There was a moment when the Supreme Court was talking about this when Justice Kagan said something that I want to play for you now: "You know, every other industry has to internalize the costs of its conduct. Why is it that the tech industry gets a pass? On the other hand, I mean, we're a court. We really don't know about these things. You know, these are not, like, the nine greatest experts on the internet."
The implication being: we probably shouldn't be trying to figure this stuff out based on 26 words that were written in 1996. It should be decided by passing new legislation that considers the internet as it exists now, which is pretty different from when "interactive computer services" were the new thing.

I think that we're actually pretty dang lucky that Section 230 supported the creation of this new tool of many-to-many communication, which has, I admit, drawbacks. But I don't think anybody's figured out, like, a good and simple way to tease all the bad parts out from the good parts, and the times that we've attempted to do that haven't had great outcomes. It's very hard to legislate for a world that changes this quickly. So if it feels sometimes like we don't know how to do this, it's because we don't know how to do it.

John, I'll see you on Tuesday.