Hello. Thank you for coming and watching my talk. My name is Amalia Toledo. I'm speaking from Bogotá, Colombia. I'm the Policy Fellow at the Wikimedia Foundation, and I would like to talk today about a fascinating topic, one that we at the policy team are working on and following very closely, which is intermediary liability. For that, I'll try to cover the past, the present, and the future by answering three questions: What has intermediary liability meant for the internet? What are the trends we are seeing right now? And why is it important for us to work on this issue?

So, let's start with the first question: what is intermediary liability, and what has it meant for the internet? Let's take it one step at a time, so as not to get lost; these are legal terms, and I don't want to bore you with them. First, let's understand what intermediary liability is, in order to answer the question of what it has meant for the internet.

Internet intermediaries are the entities that provide the different types of services that allow people to use the internet. Broadly, we can talk about two big categories: conduits, which are technical providers of internet access or transmission services, and hosts, which are content service providers, caching providers, and storage services. One of the most complex questions related to internet intermediaries is whether they should be liable for the content they host but that is created by others. That is, is the Wikimedia Foundation liable for the content generated on the Wikimedia projects by the community? This question already begins to shed light on the core question that we want to understand at this point, which is: what is intermediary liability? It arises when governments or private litigants seek to hold internet service providers, or ISPs, liable for illegal or harmful content created by the users of their services.

Around this idea, legal protections have been created for intermediaries in the United States, in Europe, in Brazil, in India, and in many other places. These protections allow intermediaries not to be held responsible for content generated by their users. This protection is understood to be necessary because, unlike the media, intermediaries in principle do not intervene in or do editorial work on user-generated content. Instead, they provide a space for people to talk, share ideas, coordinate, and mobilize worldwide. This protection also gives intermediaries the discretion to decide what content they will and will not allow on their services.

Let's look at one example, not from the internet. The public library in your community probably has a wide variety of content, most likely responding to the interests of its users. It may have newspapers, fiction, art, children's literature, and so on. But maybe it's not a specialized library, so it has decided not to acquire scientific journals. Or maybe, because its users are primarily children and teenagers, it has decided not to acquire national periodicals. These are decisions that a library may make based on its mission and the community it serves. And it's good that this is the case.

Coming back to the internet, let's look at a similar example. Imagine a social media platform exclusively for knitting and crocheting fans, where knitting techniques are shared, knitting dates are organized, and patterns are posted, among many other things. It is a growing community gathered around a particular interest.
And the platform wants to maintain an atmosphere of inclusiveness, which means that homophobic, xenophobic, misogynistic, and similar motifs are not allowed, as they go against the social media platform's mission itself. They make this decision based on the community they serve. And this is good. Other ISPs do the same, depending on their mission: they decide what content can be shared on their platforms, and in some way this has allowed the internet to be what it is today. Some even say that this protection is responsible for some of the best characteristics of the internet: it has enabled technological innovation and growth, and it has fostered online freedom by allowing providers to create spaces for people to connect.

But the years of early internet optimism have passed, and the internet is now within reach of many, many people. The role of intermediaries has changed, and there is a more significant and more challenging impact on people's rights when intermediaries make decisions about access and content. Logically, this evolution has also generated greater interest on the part of governments in regulating intermediaries.

So this leads us to review the present: what are the trends we are seeing? Today we see how this evolution of the internet and of the role of intermediaries has led public opinion to demand greater transparency and accountability from intermediaries. And governments are acting. We know that many do so to make the internet a safer and better space. Unfortunately, good intentions do not always translate well into regulation, as it can over-regulate content, increase internet censorship, create surveillance systems, and even restrict legitimate and legal online speech.

Here we see roughly two types of regulation. The first set, increasingly common to find, consists of laws that, under a veneer of good intentions (for instance, intending to combat the effects of disinformation or hate speech on the internet), overreach by defining vaguely what constitutes illegal or harmful content, by forcing intermediaries to remove content within very short time frames, sometimes within a few hours, and by creating significant liability risks for intermediaries and their staff in case of non-compliance. The second set consists of regulations that are overbroad attempts to address all harm online. In doing so, speech critical of governments is silenced or persecuted, online services are suspended, and even the internet is shut down. Some of these regulations require intermediaries to actively monitor their services to identify harmful content, which may involve setting up filters to monitor who uses the service and control what is published.

This pressure, coming from public opinion and from governments, has also caused many intermediaries to review their policies and practices to respond to the different challenges I just mentioned. On the one hand, we see interest from prominent intermediaries in being more transparent and addressing problems, creating specialized bodies to review cases, et cetera. On the other hand, the public remains concerned, and sometimes suspicious of whether these actions are solving the problem. The situation is far from perfect, and times have changed, so there is an undeniable need to review and update intermediary liability protection systems. So now let's turn to the future: why is it important for us, the policy team, to work on this issue?
To begin with, the Foundation, by hosting Wikipedia and the other free knowledge projects, is considered an intermediary that is affected by intermediary liability protection systems. But our projects are only alive to the extent that the communities thrive and retain their agency and autonomy to create knowledge, ensure its quality, and even remove any content deemed irrelevant, disruptive, or harmful. And the regulations mentioned previously can affect the community's agency and autonomy to do this work. For example, mandating filters may have the impact of cutting human editors out, and the community may be limited in its ability to set its own rules if regulation sets them instead, among other effects. In addition, these regulations can discourage the creation of knowledge if the Foundation has legal obligations that make it intervene disproportionately in the content and in who creates it, degrading trust with the community; or if, on the other hand, volunteers don't feel safe participating in our projects, fearing the legal consequences they may face for sharing knowledge.

For the moment, we do not believe that it is the best service to free knowledge for the Foundation to determine whether or not content is harmful. The community's self-governing model, in which volunteers discuss and decide whether or not to enforce content policies, is flexible and adequate to resolve many of the disputes over content, and even behavior, without the need for us to intervene. It's not our job to be censors of what is done and said on our projects, and we want to keep it that way.

Many of these regulations are proposed with only commercial platforms in mind. That is why our work also focuses on educating policymakers and legislators. We want them to understand how our communities work, so that proposals do not derail a movement that has just turned 20 years old or frustrate access to knowledge and other human rights. We also want to contribute to the policy and regulatory conditions that allow other, similar free knowledge projects to emerge. In other words, for us, working on these issues is also a way to champion our mission.