Again, thank you all for joining EFF's Fall Member Speakeasy. It's great to see so many people in the audience on a Tuesday morning to learn about digital freedoms. Love to see it. If you're feeling up to it, let's warm up the chat box: feel free to write where you're from in there and let everyone know. It's great to see how many supporters are here from around the world, so that's really cool. About twice a year, we like to do these member meetups for everyone to learn about digital freedoms, meet some EFF staff, and meet some other like-minded folks. So it's great to see you all here. As a quick reminder, the reason we can do this work is the support from people like you. We're able to do this because you donate, talk to your senators, that kind of thing, to help us out. So it's really great to see the people who make our work possible here. Thank you. With that said, we're going to talk about some of the issues EFF has been working on, in both the U.S. and around the world, surrounding governments justifying increased surveillance and censorship as a way to, quote unquote, protect the kids. Today we're going to invite a couple of staff members to expand on some of the bills we're fighting, what you can do to push back, and how we can get things right.

First up, I'm going to introduce EFF staff attorney Mario Trujillo. At EFF, Mario focuses on Fourth Amendment and privacy rights. He is also part of the Coders' Rights Project. Prior to joining EFF, Mario was an attorney at the privacy law firm ZwillGen, clerked for a federal magistrate judge on the southern border, and worked as a technology policy reporter at The Hill newspaper. Hello, Mario.

Hey, Christian. It's great that everyone's here in the chat. I'm going to be talking about some constitutional problems with a few state child safety laws that have been struck down as unconstitutional. Let me bring up my slides. Like Christian said, my name's Mario Trujillo. I'm an EFF staff attorney, mostly focusing on privacy. So we're going to talk about a few of these state child safety laws. The basic framework is that these child safety laws have two features: the first is age verification, and the second is content blocking. It's important to understand how these interact. Most of these laws require a company to verify the age of its users. If a user is determined to be a minor, usually a person under 17, the laws require the service provider to block certain content for that minor, either by restricting that minor from accessing the product at all, or by giving that minor a diminished product, say a social media platform with some harmful content removed. So one way to look at it is that age verification is the mechanism that enables the platform to censor or block content. We'll talk about each one of those.

So what are the problems with age verification? First, it's important to understand what type of age verification we're talking about, because it ranges from least invasive to most invasive. At the least invasive end, you've probably seen a button that requires you to attest that you're over the age of 13 or over the age of 17. More invasive than that is a form that requires you to enter your birth date. More invasive than that is a requirement that a platform verify your age through government ID, either a passport or a driver's license.
And even more invasive than that is a recording or screen capture of your face, run through a biometric algorithm that estimates a person's age from their facial geometry. Each of those has its own inherent problems, but they all share these four problems.

The first is privacy and security. By implementing an age gate, especially an age verification system that requires government ID, you're eliminating one of the original bargains of the internet: that you can browse it anonymously. The second problem is that age verification systems require data, and that's data a technology company can reuse, resell, or repurpose for some other use. It's important to remember that a lot of these laws do have restrictions on what the tech company or platform can do with that age verification data. But you're still putting extra data in the hands of a technology company and relying on them to do what's right. So one, there are bad actors who might misuse that data. Two, there are threats of data breaches, and no law can really protect you from a data breach. Sometimes the best use restriction is just a collection restriction.

Next, there are speech concerns, and the speech concerns overlap with the privacy concerns. Under the US First Amendment, there is a right to access and distribute information anonymously. Age verification systems would obviously hinder that, especially systems that require government IDs. These systems would also deter both adults and children from accessing certain content. For content that is embarrassing or sensitive, a person might not want their name associated with that search term or that query, and that's going to deter both adults and children from accessing it. There's a second deterrence as well: some privacy-conscious people won't want to make the bargain of "I want to read this news article, but I don't want to give up my personal data to do it." So that's also going to keep people from accessing certain content.

Moving down the list to discrimination: this is mostly in the context of age verification systems that require a government ID. First off, many children don't have a government ID. I grew up in a small town and I didn't travel a lot, so I didn't get a passport until I was 18, and I didn't get a driver's license until I was 15 or 16. So that excludes a swath of children who don't have a government ID. Beyond that, there are segments of the adult population that don't have a government ID, concentrated in low-income areas and in the undocumented community. So these requirements would either shut those people out of platforms entirely or create extra barriers for them to access the platform.

And finally, there are accuracy issues, specifically with age estimation through biometric collection. This is a new technology that's not perfected, and it's questionable whether it could ever be perfected. At its current state, age estimation can be off by a year or two. So when you're trying to distinguish a 16-year-old from an 18-year-old, you're going to have a lot of mismatches, and the system is going to be either over-inclusive or under-inclusive.
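[An editorial aside for the technically minded: the over- and under-inclusiveness point is easy to see with a toy simulation. This is an illustrative sketch, not anything presented in the talk; it assumes a hypothetical estimator whose output is the true age plus up to two years of uniform error, matching the rough accuracy figures mentioned above, and it measures how often users near an 18-plus threshold land on the wrong side of the gate.

```python
import random

def estimate_age(true_age: float, max_error: float = 2.0) -> float:
    # Hypothetical biometric estimator: true age plus up to
    # +/- 2 years of uniform error, per the figures above.
    return true_age + random.uniform(-max_error, max_error)

def misclassification_rate(true_age: float, threshold: float = 18.0,
                           trials: int = 100_000) -> float:
    # Fraction of trials in which this user lands on the wrong
    # side of the age gate.
    is_adult = true_age >= threshold
    wrong = sum(
        (estimate_age(true_age) >= threshold) != is_adult
        for _ in range(trials)
    )
    return wrong / trials

for age in (16, 17, 18, 19, 20):
    print(f"true age {age}: misclassified "
          f"~{misclassification_rate(age):.0%} of the time")
```

Under these assumptions, a 17-year-old slips past the gate roughly a quarter of the time and a 19-year-old is wrongly blocked about as often, which is exactly the over- and under-inclusiveness described above.]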
So those are the problems with age verification. The second question is: what's the problem with content blocking? I won't spend a lot of time on this, because the problems seem pretty self-evident. If you enact a law that requires platforms to block certain content deemed harmful to children and you don't put up an age gate, that's going to diminish the platform for adults and children alike. If you put up an age gate and serve children a diminished product with the quote unquote harmful content removed, that's going to sweep up a lot of protected content that children have a First Amendment right to access. Children may have diminished First Amendment rights, but they do have First Amendment rights to access content. A lot of these laws are written in vague terms about blocking or censoring harmful content, and sometimes that term "harmful" can mean anything. Sometimes they're written to block pornography, but drafted in a way that blocks all sexual content. So that's going to keep children from accessing certain protected content.

And then finally, children aren't a monolith. A lot of these laws are written to stave off harms to minors, and that's vague; a minor is usually anyone below the age of 17. What's theoretically harmful to an eight-year-old might not be harmful to a 13-year-old, and might not be harmful to a 16-year-old. So one of the key problems with these laws is that they treat all children, from zero to 16, as one unit with the same sensibilities.

So how has this played out on the ground? Three courts have already issued injunctions, which are just temporary blocks, against three of these child safety laws: one in Arkansas, one in Texas, and one in California. The Arkansas law requires age verification, usually through government ID, and if a person is determined to be a minor, they are blocked from the social media platform except with parental consent. In Texas, the age verification requirement applied to online platforms with a certain amount of sexual content, and if the platform determines a person is a minor, that person is completely blocked from the website. But the term "sexual content" is very vague and over-inclusive; it can range from obscenity, which is unprotected, to pornography, which is protected, to maybe just a risqué photo. Those two laws were struck down mostly because of the age verification. The courts said that, one, it's going to prevent adults from accessing protected speech, because they're going to be deterred from entering those platforms; and two, even when the age verification works and children are blocked from certain content, they're going to be blocked from content they have a constitutional right to access.

The California law is a little different in that it strongly encourages age estimation, which would likely mean a biometric-based scan. That age estimation does two things: one, it requires companies to block certain content; and two, it would actually give children privacy protections. It would be the lever for both.
At EFF, we believe that age gate would be unconstitutional, and the age gate is the way the law implements both the content blocking and the privacy provisions. We actually like some of the privacy provisions: if they weren't tangled up in this age verification and content blocking system, they're things we would want in a privacy bill. But when you use an age verification method like age estimation to implement them, we don't think that's going to withstand First Amendment scrutiny.

So those are three laws that are blocked right now. There are two other laws, one in Texas and a pair in Utah, which will likely suffer the same fate in the next couple of months. And to the extent more states pass these laws, or Congress passes them at the federal level, they're likely going to suffer the same fate of being struck down. The Arkansas, Texas, and California laws are up on appeal, but as they stand now, they're on hold.

So what's the solution? Here at EFF, we think strong data privacy legislation can be a solution, and it does two things these child safety laws can't do. The first is that data privacy legislation has a strong track record of being upheld by the courts as constitutional. To take two examples: the federal wiretapping laws have been around for about a hundred years, and the Supreme Court has called HIPAA, the data privacy law that regulates health data, a smart law. Other data privacy laws have also been upheld as constitutional. Data privacy legislation is simply better equipped to survive these court challenges. The second big thing is that data privacy legislation actually gets at one of the root causes of what people perceive as ills online: the surveillance apparatus built to serve and deliver targeted ads. So one of our key priorities is to ban behavioral advertising, add data minimization, and add strong enforcement mechanisms. We think that gets at a lot of the problems these child safety laws are trying to address through a censorship regime, and it gets at them in a more straightforward manner.

And so finally, I'm going to pivot. I've been talking about state laws: laws that were enacted by states, were about to go into effect, and got struck down by the courts. But there are also federal proposals, bills being debated in the House and the Senate, that have many of the same problems, though not identical ones. One of these bills is the Kids Online Safety Act, or KOSA. The Senate this week is trying to use a procedural move to pass KOSA by unanimous consent, and we've put up an action alert for our members to call and voice their concerns to their senators. We have an action page at EFF.org; if you just type in KOSA, you'll find the action alert. We urge you all to spend five minutes today and call your senators, if you're in the United States, and ask them not to ram through this dangerous child safety law. And I think that's it for me. I'll hand it back to Christian, and I'm happy to answer questions at the end.

Great, thank you, Mario. That was really great and super interesting to learn about. We'll do Q&A for Mario at the end, but for now we'll transition to EFF Senior Speech and Privacy Activist Paige Collings.
At EFF, Paige focuses on the fulfillment of civil liberties in the face of government and corporate threats to speech and privacy online. Paige has worked with governments and activists across the globe to collaboratively facilitate change. Welcome, Paige. We're excited to hear what you've got to talk about.

Thank you so much, and thank you everyone for joining us today. We're excited to talk about this issue, which transcends boundaries across different countries and where the discourse is so pervasive and hard to argue against. Like Mario, I'll be sharing some slides, and again, if you have any questions, please do put them in the chat and we'll answer them at the end.

So here we have the UK's Online Safety Bill, which is a really big piece of legislation. It's now, unfortunately, the Online Safety Act, so I should really update my slides. I'll talk you through how that came to be and why we're frustrated with it. Just as with the Kids Online Safety Act and all the different US legislation we just heard about, when we think about these topics, you might wonder: why do we care? Why is there such a big privacy issue? I think it essentially comes down to one thing, which is that at our core, we all have the right to private conversation and to determine when we want to share information with our loved ones, our family members, and our friends; when that happens; who hears it; and the moment and mechanism by which we communicate it. A bunch of these pieces of legislation really erode that right. In the human rights framework, that's protected under the right to privacy in lots of national and international human rights instruments. But at its core, it's really about choosing those moments, and these bills take that away.

And unfortunately, we're seeing this in many places. Mario discussed the different pieces of legislation in the US, but we also have the Online Safety Bill, now the Online Safety Act, in the United Kingdom, and we have the Child Sexual Abuse Regulation in the EU. I know there was already some conversation in the chat about that, and there's good news, which I'll touch on a bit later in the presentation. Around the world we're really seeing these things. If we look to Australia, there have been some positive developments there too with protecting encryption. But unfortunately, with the Online Safety Act, we don't have that kind of trajectory.

So it's been a long process with the Online Safety Act. It first emerged in 2017 as the Internet Safety Strategy green paper. That green paper emerged, unfortunately, on the back of the case of a young girl in the UK who was exposed to thousands of pieces of self-harm content in the weeks leading up to her ending her life. When that happened, the government decided to take action to make the UK, as you can see here, the safest place in the world to go online, and to help shape an internet that is open and vibrant but also protects its users from harms. A big claim, you could say, but that was essentially the goal. So in 2022, the Online Safety Bill was introduced following, as you can see here, many years of consultations. Under its fourth prime minister, the Online Safety Act passed just a few weeks ago, at the end of October.
It's changed a lot since the Internet Safety Strategy green paper, which was pretty targeted as a green paper: it had very specific goals and asks pertaining to children's rights online, emerging, as I mentioned, from the case of the young girl who took her life in 2017. The bill we ended up with is certainly very different from that. I don't know where that scribble came from.

So when we look at what we've been working on with EFF's actions on the Online Safety Bill, there were two big focus areas that we oriented our work around. The first was a provision called "legal but harmful" content. The Online Safety Bill was 260-plus pages; it sought to cover everything from pornographic content, to doxxing, to end-to-end encryption, to mandating client-side scanning. So it was extremely broad, and there were lots of different issues in it. But the first was the legal but harmful provision, which essentially sought to criminalize content that is perfectly legal but deemed harmful, so content that might be insulting or inflammatory. That provision runs against UK, European, and international law: you can communicate content that is shocking, offensive, or insulting, and that actually comes from a court case in the UK, Handyside v. United Kingdom. So that was one of the big core issues, and we were working with coalitions; we submitted a briefing to the consultation, and last year that provision was removed from the Online Safety Bill.

So of course you might be thinking: great, this is fantastic, the legal but harmful provision has been removed, things are looking up from here, and soon we'll see the end of the bill. But unfortunately, once the government removed that provision, they committed fully to eroding the right to end-to-end encryption, and that's where our lobbying focus turned. There were many clauses, specifically clause 110, which was subsequently renumbered several times, that sought to mandate client-side scanning with "accredited technology." That technology would be accredited by the government and Ofcom, the UK's communications regulator. So all of this is very insular. And as we probably all know on this call, you cannot have a technology that scans only for child sexual abuse material or harmful content; a back door for one is a back door for all. It is quite simply, fundamentally impossible to have a piece of technology that can scan for one thing without being able to scan for everything (there's a simplified sketch of this point below).

So we were trying to make this argument, and as Mario was saying, this is such a polarizing issue. We were talking a lot with politicians, we held briefings, and we were communicating with them about why this was such a problematic piece of legislation, specifically this clause monitoring private messages. Many of them were saying: maybe I agree, I definitely agree, but I don't want to take that position publicly, because I have to be seen as protecting children online, and if I don't support this bill, it looks like I'm not protecting children online. We were able to get to a stage towards the end of the bill where there was a massive coalition of security researchers, cybersecurity experts, and politicians.
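[Since this argument comes up throughout the discussion, here is a deliberately simplified sketch of why scanning is content-agnostic. This is an editorial illustration with hypothetical names, not the "accredited technology" itself, and it uses an exact cryptographic hash where real proposals use perceptual hashes or machine-learning classifiers. The point is that the mechanism is just matching against a supplied watchlist; nothing in the technology constrains what that list contains.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    # Real systems use perceptual hashes or ML classifiers; an
    # exact SHA-256 hash stands in for the idea here.
    return hashlib.sha256(content).hexdigest()

class ClientSideScanner:
    """Hypothetical scanner running on the user's device, before
    the message is end-to-end encrypted and sent."""

    def __init__(self, accredited_watchlist: set[str]):
        # The scanner is parameterized by whatever fingerprint
        # list the accrediting authority supplies.
        self.watchlist = accredited_watchlist

    def flag_before_sending(self, plaintext: bytes) -> bool:
        # Matching happens on the plaintext, outside the
        # encryption, which is the core of the objection.
        return fingerprint(plaintext) in self.watchlist

# Nothing here distinguishes CSAM fingerprints from any other
# target: swap the watchlist and the same machinery flags
# political speech, leaked documents, anything at all.
watchlist = {fingerprint(b"any content an authority lists")}
scanner = ClientSideScanner(watchlist)
print(scanner.flag_before_sending(b"any content an authority lists"))  # True
print(scanner.flag_before_sending(b"an ordinary private message"))     # False
```

What gets flagged lives entirely in the watchlist, not in the scanner, which is why "scan for one thing only" is a policy promise rather than a technical property.]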
We had a number of different apps and services, Google, Signal, Meta, Apple, and many of the encrypted apps, saying this piece of legislation is terrible, and most of them saying that if it passes as written, they will remove themselves from the UK market, because they're not prepared to undermine encryption by complying with this legislation. We also held a private briefing in the House of Lords. In the end, though, it seemed like it was one step too far for many of the peers in the House of Lords, and they were not prepared to take this stand. They were very much communicating that they were in favor, but it was too much of a political risk for them to defend encryption, which in the discourse ultimately meant not caring about children's rights. So the bill passed. Here's one of the amendments we were pushing; you can see it says to leave out "privately." We could go through these for a long time. In the end, the bill passed a few weeks ago, as I mentioned. But not all hope is lost.

What makes this bill quite different from many other pieces of legislation, in the UK and internationally really, is that it can't be implemented overnight. The act is contingent on Ofcom creating and then implementing codes of conduct, guidelines, and codes of practice. So the next stage will take years, because, as I mentioned, the bill is 260-plus pages long, and every single provision needs to be converted into something operational that can be introduced. Ofcom last week published their first guidelines, and they're nearly a thousand pages long, so we've got a lot of reading ahead of us working on this bill. But it will go step by step. They've communicated that next year they'll seek to tackle the issue of encryption, and there we've got a lot of capacity for influence. They're reaching out to civil society; we had a private meeting with them last week along with a number of other civil society organizations, and we're in contact with them as this legislation is built out into something that can be operationalized into law. So that's one thing: it's definitely not introduced overnight, and client-side scanning is not happening right now in the UK as part of this law.

Second is litigation. Maybe you're thinking: this can't be legal. Maybe it's not. I think there are lots of litigation options going forward. We've got Article 8, the right to private life, under the European Convention on Human Rights. That's one possible avenue: taking a specific component of this legislation to judicial review in the United Kingdom, or through the European Convention to the European courts, arguing the illegality and illegitimacy of this piece of legislation.

And then there's the campaigning side. One of our colleagues is in London tomorrow for a meeting with Signal and a number of civil society organizations to discuss this legislation and the next steps: what can we do in our coalition, and as EFF, to make sure the bill doesn't get implemented? How can we stop this? How can we get an injunction, or advocate for people to recognize their rights before it even happens? So what's really interesting is that not all hope is lost.
Because the bill cannot be implemented straight away, we've actually got a period of time now where we can really frame the language and discourse around what's going to be introduced. And when we look at other pieces of legislation, for example the CSAR, the regulation against child sexual abuse in the EU, we are seeing wins. So this is not an issue where the discourse is finalized and there's no chance of breaking through. It was mentioned in the chat already, but this regulation coming out of the EU is pretty much the same sort of thing as KOSA and the Online Safety Bill: it attempts to stop the distribution of known content and take action against future content, it has detection orders, and of course it introduces client-side scanning as the preferred method.

Today, and this is new information as of today, we reached a compromise deal. Following the work of more than 70 organizations across Europe on this file, a compromise was reached in the European Parliament. Here are a few of the big wins. First, no mass scanning, which is a huge win: scanning must be targeted, based on specific suspicion, with judicial oversight. So unlike the UK's Online Safety Act, which has no judicial oversight or parameters on the use of accredited technology, this is very specifically targeted. Grooming detection is removed from the scope of detection orders; what was really concerning about this regulation is that the EU wanted to introduce AI to detect harmful content in text messages and emails, so removing that and having judicial oversight is really beneficial. Another is protection of end-to-end encryption, which we've been screaming at the European Union to protect for a long time: private messaging apps cannot be subjected to any scanning technology. On age verification, which Mario discussed at length earlier, there's no mandatory age verification for private messaging or app stores, as well as safeguards on its use. There are still concerns here about web crawling by Europol, and also about blocking orders, which, even now that they're restricted, remain problematic and remain possible on hosting services. But we definitely reached a beneficial compromise, based on months and months of advocacy by EFF and a wider coalition.

So what's next? This will now go to the Council, where they will come to their own position, which isn't necessarily known at this point. And there's an election in the European Union next year, so if no compromise can be reached between the institutions, the file could get stuck. So we're going to keep the pressure on for a good final deal for people in the European Union. I think we can take this win as an indication and a recognition that whilst fighting these bills comes with challenges, it can be done. Here you can see some of the actions; we're part of these campaigns raising awareness. Here is one at a football match, Stop Chat Control, which the European Digital Rights network has been coordinating, and it's a really fantastic initiative. And here is a picture where, since we're talking today about US bills and the UK and Europe, it all kind of aligns in one image: we've got people like Ashton Kutcher, who has of course since stepped back, coming to the European Union and talking in different chambers and institutions about this bill to protect the children.
But our big emphasis, to conclude this presentation, is this: we know that these bills, by introducing client-side scanning, eroding the right to end-to-end encryption, and undermining technology, are not only not protecting children, but are indeed putting them at further risk of harm, especially those who rely on encrypted communications and private channels the most. So we'll continue this work: in the UK, where there's a lot of potential to shape the implementation of the Online Safety Act, and in the EU, to push for a final resolution that backs up today's win. Thank you so much.

Cool. Thank you so much, Paige. That was great. And we've got Mario back up too, so now we've got some time to answer some of the audience questions that came up in the chat. I think we'll just get started. This first one came from Cadigan, who said: would it make sense if we somehow all agreed on using a separate trusted party that would only provide the sort of ruling required, a simple "meets age 16: true/false"? Normally I'd oppose it, but I doubt this issue will go away. Did either of y'all have thoughts?

Yeah, so I think that question comes up in the context of age verification, and the question is: if we move age verification to a trusted third party, rather than the tech company, does that make it safer? I think that just moves the liability without eliminating the risk. Instead of Meta having that information, it's some third party that is unknown and maybe less trusted, and that could pose even more concerns if we don't understand or trust that company. It also creates a large honeypot for data breaches. So rather than fixing the problem, I think it just moves the problem, and all the concerns still remain.

Thanks for that. Next up, this one comes from Augusto: I was just wondering, are laws assigning unique identifier numbers to babies at birth a common thing around the world, or is it just Brazil? That surely makes it easier for companies to track children's data and build dossiers of their behaviors since birth. Nowadays, you can't do anything in Brazil without a kid's CPF number. Have either of you heard about that?

I can talk about that from a US perspective. Everyone in the United States has a social security number, and the social security number was originally set up to give benefits to people in their retirement. I think it's a good example of an identifier and data collection regime that was meant for a very narrow set of circumstances but has really exploded: not quite a universal identifier in the United States, but something that has definitely taken on a lot more prominence than it was designed for. And because of that, the social security number is very insecure, and it has led to a lot of financial fraud. So at least in the United States, that's a thing. I haven't heard of proposals to use a social security number or card as the form of age verification, but if an age verification system is asking for a government ID, I guess that could be one of the methods used. I don't think that's something lawmakers are proposing, though.

Cool. I think this next one will be for you, Paige.
With the recent passing of the Online Safety Bill, do we expect to see messaging apps and services withdraw from the UK? And what would that look like?

That's a great question, thank you for asking it, specifically because in the weeks leading up to the final vote on the Online Safety Bill in the House of Lords, a number of messaging apps and services publicly said that they would leave the UK market if the bill passed, which indeed it now has. The rationale of those proclamations is that they don't want to undermine end-to-end encryption on their services and platforms, and therefore the cost of compliance is too high. And this point is interesting because the Online Safety Act introduces criminal liability for non-compliance with its provisions: senior managers at Meta, for example, or at Signal can be imprisoned if their companies don't comply. Companies can also be fined extensively for non-compliance, I think it's 10 percent of annual turnover, so a very, very high amount of money. And if you're one of these services, and we're talking about the tier one services here, the big ones, you've got the European Union, where you have legislation and files to comply with, and then you have the UK Online Safety Act, which almost contradicts many of those. If you're working across all these countries, it's perhaps nonsensical to uphold the Online Safety Act when you've got 27 other countries in the European Union and a stronger risk from non-compliance with those pieces of legislation.

So I think it's left to be determined. In private conversations, some of these services have told me: we're still very committed to upholding end-to-end encryption; we don't want to leave; it's certainly the very last option, but we still will if we can't reach a compromise on how the act is implemented, specifically on the technology that is supposed to be accredited to mandate client-side scanning. In the week before the Online Safety Bill passed, we got a declaration from the government recognizing that right now the technology doesn't exist to scan for child sexual abuse material or harmful content without scanning everything else, which was a win, because they'd refused to acknowledge that since 2017. But it's left to be seen. I hope these corporations, indeed all of them, hold to their word and do leave the UK market. Of course, some 60 million people would then be losing out, and it's not just people in the UK, it's also people communicating with those in the UK. Some of these encrypted apps are used by people seeking asylum, who are in dire situations and need to communicate for safe passage into the UK, or by human rights defenders sharing information from an at-risk country with somebody in the UK. So it's not just people in the UK who would miss out, and I really hope we can find a solution before these services part ways with the United Kingdom.

Thanks for that. This next one's really interesting, and something I've thought about before, from Grady: CSAM is such an emotional issue that it skews the entire framing. The issue is put at the forefront to get people to accept surveillance that they otherwise wouldn't. How can this framing be challenged? Surely over-blocking is also harmful to children.
Did either of you have thoughts on challenging the framing around CSAM?

Yeah, I don't know if I'm able to talk, sorry. We actually found this with Chat Control: late in the game, Europol quietly inserted their own phrasing, a paragraph or so, where they basically said that any police force can use this data. So you should probably look for something like that, because I would bet it's in there as well. Okay, that's it.

Paige, did you want to say something?

Yeah, thank you so much, Andrew, for sharing that. And indeed, I think this is probably the biggest issue: tackling this narrative. The discourse is so pervasive, so paramount. All of us want children to be protected, but eroding their rights online puts them at more risk. I think it has to come from a number of different angles, and I've seen in the chat some conversations about how we can fight against the discourse of "I've got nothing to lose, so I don't mind an invasion of my privacy rights." I think it's about reorienting that framework to recognize that we do have these rights; it's not about what there is to lose, it's that I have these rights and something to gain. When we think about face surveillance, for example: face recognition is maybe one of the most lovely things ever when you see somebody you know on the street after a long time, or when you're going home for the holidays and you see your family members and friends. That's what face recognition should be for. It shouldn't be used to scan our faces for surveillance without our consent. And I think we can take that same approach to messaging services. We have these rights to privacy so we can communicate with our loved ones the information we want to share, when we want to share it, knowing who's going to hear it, and not have it taken from us for the profits of big corporations. So it's about restructuring that and tackling it from a different angle, because it can get really tricky; we've seen many times that going down the line of "but we have rights to privacy" just gets met with "I don't have anything to lose." Children need protections; they have rights too; they need anonymous channels. And what's been super frustrating about these bills, especially in the UK, is that a lot of the advocates for the bill also run anonymous channels where children can report abuse or bullying. So they recognize that children do need anonymous channels to report the harms against them, but apparently not online. So there's that. And finally, there's a much bigger issue, which is that just because a child experiences harm online doesn't mean they're not experiencing harm offline; whatever happens online does not happen in a vacuum. So really building out frameworks for children to feel safe both online and offline, in a broader, holistic community framework, is also one approach to tackling that.

I was muted, sorry. Did you have anything to add, Mario, or did Paige cover it all for you?

Well, the bills that I outlined are less about filtering and blocking CSAM, but there are proposals in the United States where this filtering and blocking would create, or could create, liability for end-to-end encrypted apps. I'm thinking of proposals like the EARN IT Act or the STOP CSAM Act. And I think one of the answers there is not to reframe the issue; CSAM is despicable and it should be stopped.
It's that law enforcement, at least in the United States, has tools right now to address it. There are laws that create liability if tech companies don't report CSAM when they have actual knowledge of it; to my knowledge, that law has never been enforced. There are also other laws on the books that outlaw the promotion of that kind of material, and those are also not enforced. So I think one answer is to ask why these lawmakers are pushing new proposals when they already have enforcement tools. And one of the answers is for them to strengthen enforcement of existing laws, rather than creating new laws that create a lot of different problems and a lot of uncertainty for end-to-end encrypted apps, which, like Paige said, offer a lot of value and protection in and of themselves.

Cool, thank you. This next one comes from Ben: I'd be curious to get y'all's perspective on the practical enforceability of potential client-side scanning slash end-to-end provisions. Thinking less about the mega tech companies and more about every project uploaded to GitHub that provides encryption and whatnot: is the implication that every OSS (I think that means open source software) project is going to pull from some mandatory repository? Do y'all have thoughts on that question? Maybe starting from the UK side, or do you want to take it, Mario?

No, no, you go ahead.

Okay. Taking the first part of the question, about the enforceability of client-side scanning: something we've tried to emphasize in the UK and across the EU with Stop Chat Control is that it's not possible to protect rights and implement client-side scanning at the same time; implementing it erodes those rights. Privacy and erosions of end-to-end encryption are just fundamentally incompatible, so it's not enforceable in a rights-respecting way. And that's the thing we've tried to emphasize in the UK specifically: the government has consistently said over the last five or six years that a technology exists, or is possible, that can scan for CSAM and other harmful content, which is itself a subjectively defined and changeable set of topics, and not scan for anything else. What we've tried to emphasize at each stage is that it's not possible to enforce that without opening it up to everything else. Once you have a back door for the government with their accredited technology, you're opening it up for hackers, rogue states, and harmful actors to exploit and take advantage of, and in doing so it renders privacy rights unenforceable. But I don't know, Mario, if you have any different elucidations on that.

No, I'll leave it there.

I think we have time for a few more questions. This is one that's been on my mind that I think you could answer, Mario. You talked a bit about consumer privacy laws as a way to fix some of these issues. A lot of states, including California, already have privacy laws. Would a federal law replace these state laws, or how would that work?

No. So what EFF has advocated for a long time, one of the key pillars of a federal data protection law, is that it should not override state laws. That, along with the consumer enforcement provisions, has been a large sticking point, but we think any federal privacy law should be the floor of privacy in the United States and not the ceiling.
So if California, which has been a leader in privacy legislation in the United States, wanted to increase those privacy protections, we think that's a good idea. That goes back to the purpose of the state and federal governments: states have always been seen as laboratories for different and improved laws, which then filter up to the federal level. So for any privacy law that gets enacted at the federal level, we would push very hard to make sure it's the floor, and not the absolute ceiling, of what privacy protections in the United States look like.

Cool, thank you. And for Paige: how could the Online Safety Bill affect other social media and child safety efforts in Europe?

So the Online Safety Act has age verification provisions, much like Mario explained, so I won't duplicate that; it's almost exactly the same. There are provisions within this act that try to prevent children from seeing content that is, for example, pornographic, but again "harmful," which is pretty subjective, and it requires blocking children from viewing these websites. So not only is it trying to scan content, it's blocking children, or those underage, from using certain sites. And children are very innovative; just having these barriers is not going to prevent them from viewing these platforms. It's the wrong solution to the problem, as we've been trying to communicate for a long time now. What happens in the UK, as we've mentioned many times, is probably going to be a blueprint for similar bills around the world, given that it's nearly the first of its kind to pass with such erosions of encryption through client-side scanning. We'll probably see duplications, or at least a commitment from other countries to echo it; I've heard as much whilst speaking with governments around the world now that this has passed in the UK. So it's emboldening similar legislation. But in that sense, we just continue to fight back, and as with the win today, a broad coalition of organizations in the European Union has been really fighting on the CSA Regulation. So not all hope is lost. I think we can try to replicate those strategies elsewhere and really make sure that children who have devices to access the online world are safe, but in the ways we know work best, not by preventing them from accessing certain websites and apps, or by eroding their right to safe and secure channels of communication.

Okay, I'll just quickly ask, since I've lost track of where we are: I can tell you, in a very short five minutes, how we went to the EU Parliament with EDRi, how it worked, and maybe why we won. It might be something you could re-implement or use in your environment, if you want. I don't know if this runs until eight, or how much time do we have?

I think we're close to wrapping up, but I don't know if Paige and Mario wanted to hear it, or you could shoot us a note at info@eff.org. Everyone here can shoot us a note there, and we go through all the emails and check those too.

And you can also go to the EDRi website, European Digital Rights, as was just mentioned, and find the campaign there called Stop Scanning Me.
Given the news and the win today in the European Union, you'll probably see lots on the EFF website and our different social media channels in the coming days. Please do follow up on that, because it's a really good win in a landscape of pretty dire legislation. We want to amplify it and share that this legislation, and these types of legislation, can be challenged, and we can win, and we want to replicate that in as many places as possible.

Cool. And with that, I think it's time to close out. Thank you all for joining again, and thank you, Paige and Mario, for presenting and answering questions; it was really great to hear from y'all. And just another reminder: thank you all for joining and for being EFF supporters. Like I said at the start, the reason we're able to continue this work is your continued support. If you're here and you haven't donated yet this year, we'd love to have you stay on as an EFF member; you can donate at EFF.org/join. And if you have any questions for us, please send a note afterward to info@eff.org. Until then, thank you all so much, and we'll see you at the next one.