First of all, thank you for the kind introduction, and thank you for the opportunity to be here and for taking time out of your busy days to engage in this conversation today. I'm very happy to be back in Ireland. Ireland is a frequent destination for me; I have a large team here. You may know that Ireland is Facebook's international headquarters, so a significant amount of the work that we do around content and keeping people safe on our services is done out of our Dublin office. It also gives me the opportunity to check out some wonderful corners of this country. I spent the weekend on the West Coast, where the weather was lovely and I had the opportunity to go to falconry school and do some other fun things. So anyway, I'm very, very happy to be here.

I thought I might share with you a little bit about what we do to try to make sure that people who use our services are safe but also have the ability to express themselves, some of the things that we've learned over the years and the big areas of investment and progress for us, and then shift over to why we think that regulation from governments in this space is an essential part of the path forward.

As Alex mentioned, my team is responsible for the rules for what people can say on our services, among other things, and also for the implementation of those rules. This is a group of people based worldwide. Yes, I have people in Dublin and I have people in the United States, but my team is actually in 11 offices around the world. These people are hired because of their career backgrounds or their expertise in dealing with content-related issues. For instance, we have people with expertise in understanding hate organizations or in understanding child safety. As for myself, I'm an American-trained lawyer, and I spent the first 11 years of my career working to combat child exploitation and violent crimes as a federal criminal prosecutor. The people serving on this team also maintain relationships with the experts, the academics, the law enforcement personnel, the freedom of expression groups, all of the stakeholders who have valuable perspectives that we want to incorporate into the rules that we write for content on our site.

So we begin with a consultative approach that is worldwide, one that incorporates views from stakeholders across what we hope is every point on the spectrum. We pull that in, we write our policies, and then the next step is applying those policies. Now, that's the big challenge. When I was a lawyer, I was used to courtrooms, where you have fact-finders who are able to examine witnesses, look at the overall context of something, take a lot of time, and make an informed decision. When you're talking about implementing content policies for any of the large internet platforms, you are talking about millions of decisions that need to be made every week. You're also talking about expectations that we make these decisions very quickly. At Facebook, we try to respond to reports from people using our service; if they report something that might be violating our policies, maybe it's bullying or maybe it's hate speech, we respond to the vast majority of those within 24 hours. And we have just a very limited amount of context to be able to tell what the intent is behind a post.
So for instance, somebody could post something that might sound like a threat or hate speech to one person, but if you actually understand the people who are engaging in the conversation, it's a joke between them, one of them saying, "I'm gonna kill you if you show up late to my party," that sort of thing. Understanding the offline context that's playing into the conversation happening online is not a luxury that we have. We are often operating with just a very, very small amount of knowledge. So implementing these rules is a very different exercise.

So we take a broad set of values that we develop with stakeholders and turn it into a set of policies. Many of the details we work out by looking at data and running things past external stakeholder groups, and we publish the result on our site; it's called our Community Standards. Then we have a worldwide workforce of content reviewers who speak dozens of languages. If content is reported by somebody using the service, or flagged by a government, or flagged by one of our proactive technical tools looking for violations, it goes to our content reviewers.

Now, sometimes our technical tools can actually make the decisions themselves. For instance, if we have a violent video that violates our policies, let's say a beheading video, it violates our policies no matter how it's shared; even if a news organization shares it to raise awareness, it still violates our policies. So we can use technology to recognize that content and keep it off the site. But for other things, let's say bullying, or harassment, or most types of hate speech, the technology might tell us this is a violation, or we might get a report from a safety group whose word we really trust saying this is a violation. That all still has to go to our content reviewers, who look at the speech involved, apply our policies to it, and decide whether or not it should be removed from the service. That part of the challenge is, in many ways, trickier than writing the initial policies. Working out what you can actually do around the world to implement these policies at scale is something that we are constantly learning from.

A few areas of investment for us recently: one is that in the past year and a half or so, we've launched an appeals process where people can actually appeal our decisions to us, and we will have somebody else look at it and see if we got it wrong. We're finding that we learn a lot from this process. We now also publish reports every six months on how much content we've removed in different areas, what our accuracy is in terms of how often people appeal and how often we actually reinstate the content on the site, and how much of this content we found ourselves before anybody reported it to us. Every time that we've released one of these reports, we've been able to add more data to it, and I would expect that trend to continue; we want more transparency, we want people to be able to see what we're doing, and we expect to get better there.

The proactive tools are also getting better. A few years ago when we started these reports, what we removed for hate speech was pretty much what people reported to us; we were not good at detecting it. Now about 80% of what we remove for violating our hate speech policies is content that our tools flagged before anyone had reported it and then sent over to our content reviewers.
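[To make the two-track enforcement pipeline described above concrete, here is a minimal sketch, not Facebook's actual system: known violating media is matched against a blocklist and can be removed automatically, while classifier-flagged content is routed to a human review queue. The sha256 stand-in, the `moderate` function, and the 0.7 threshold are all hypothetical; production matching systems use perceptual hashes that survive re-encoding, such as Facebook's open-sourced PDQ and TMK+PDQF.]

```python
import hashlib
from dataclasses import dataclass

# Hypothetical blocklist of digests for media already judged to violate
# policy. A cryptographic hash keeps this sketch self-contained; real
# systems use perceptual hashes so re-encoded copies still match.
KNOWN_VIOLATING_HASHES = {
    hashlib.sha256(b"example: bytes of a known violating video").hexdigest(),
}

REVIEW_THRESHOLD = 0.7  # hypothetical classifier-confidence cutoff


@dataclass
class Decision:
    action: str  # "remove", "enqueue_for_review", or "allow"
    reason: str


def moderate(media_bytes: bytes, classifier_score: float) -> Decision:
    """Route one piece of content through the two-stage pipeline."""
    # Stage 1: known-content matching. A match violates policy no matter
    # how it is shared, so it can be removed without human review.
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest in KNOWN_VIOLATING_HASHES:
        return Decision("remove", "matched known violating media")

    # Stage 2: proactive detection. A model can flag a likely violation
    # (e.g. hate speech), but context matters, so flagged items are sent
    # to a human reviewer instead of being removed automatically.
    if classifier_score >= REVIEW_THRESHOLD:
        return Decision("enqueue_for_review",
                        f"classifier score {classifier_score:.2f}")

    return Decision("allow", "no signal above threshold")


# An exact re-upload is removed automatically; a borderline new post with
# a high classifier score goes to the human review queue instead.
print(moderate(b"example: bytes of a known violating video", 0.1).action)
print(moderate(b"a new post", 0.92).action)
```

[The threshold is the lever behind the proactive-detection figure quoted above: lowering it sends more content to reviewers before anyone reports it, at the cost of more reviewer load and more false flags.]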
It's far from perfect, but we are making progress in the way that we detect violations.

Now, one of the questions that I still hear from people when I'm talking to governments, civil society, and people who use the services is: what gives you the right? Why do you get to set these rules? And it's a legitimate question. We're talking about fundamental values: freedom of expression, dignity, privacy, safety. These are important and often competing values, and we don't think that we should be making all of these decisions by ourselves. Now, it's true, we do this very consultatively. We meet with literally hundreds of experts and organizations around the world to get input into our policies, but that's not quite the same thing as making sure that we have a consultative approach with governments as well.

We have a mechanism right now for removing content that is illegal when governments bring it to our attention. That's quite different from our own rules; it's based on a government telling us something is illegal and giving us a court order, and we'll remove it in that country. But the white paper we released yesterday is about something a little bit different. It's about how we as a society, governments, the private sector, and civil society start to build a regulatory framework for how companies should engage in content moderation in the first place. For those of you who have read the white paper, I hope this was evident; for those of you who have not, I'll tell you this going in: the white paper is not a proposal. I'm not a regulator, and this is not a regulatory proposal. What it is is a response to questions and ideas that we've heard, saying we recognize that there are different approaches that regulators could take, there are trade-offs inherent in any of those, and we wanna be clear what those trade-offs are from our perspective. We're not perfect at this, but we've been writing content policies and enforcing them for many years now, and we've learned some lessons along the way. So we'd like to bring that experience to the table and hope that it helps to inform the dialogue.

Now, I'm happy to kick it over to questions whenever you like, or I can go on a little bit more. Certainly if there was another five minutes that you wanted to avail of before we went to questions, that's fine as well. Well, I don't know, is it better to have more Q and A? Okay, I'll just say a little bit.

One of the things that we've been trying to do when it comes to regulation is build relationships with governments now, so that the regulation that comes from these governments is positioned to actually create the right incentives for companies. What I mean by that is that some rules might sound simple: we wanna tackle hate speech, and so we're gonna put this sort of rule in place. But one of the things that we've learned from doing content moderation over time is that it's not about solving one problem. When you talk about content moderation, you're talking about hundreds of problems that are actually linked together. You push in one area and you may be pushing problems into another area. And that tension takes several shapes. For instance, let's think about a specific policy: a policy about suicide, self-harm, or eating disorders.
If you put the line too far in one direction, people who are viewing the content might be triggered and do something that is not safe for them. If you put the line too far in the other direction, the person who's posting the content is silenced and doesn't get the help that he or she needs. So there can actually be safety ramifications on both sides, and it's important to understand both.

Another example: companies make decisions about where to invest their engineering resources in detecting content violations. If a regulation requires a certain type of investment, that could be good. It could also mean that you are taking resources away from solving other issues that are also very important. Especially for growing and developing companies, this is a really big deal. We wanna make sure that we're taking a holistic approach.

And finally, we sometimes even see that different regulatory authorities have conflicting agendas or conflicting interests. For instance, you might have one regulator that says you must create proactive detection mechanisms in this area, and another regulator that says, for privacy reasons, you must never use proactive detection in these ways, and the two things will come up against one another.

So in the white paper, we tried to point out some of the different incentives that might be created by any of these regulatory approaches. And we're hopeful that this will be the start of many conversations with governments around the world that will lead us to a better regulatory landscape, one that will help all of the people who use these services. Thanks.