Good afternoon, ladies and gentlemen. Thank you very much for joining us for what promises to be a meaningful conversation on what we can do to protect those who are vulnerable online: what the role of regulators will be, what the role of governments will be, what the role of the private sector and, of course, civil society will be, and whether we can bring in much more collaboration and partnership in order to protect the vulnerable online. We live in an era where, if you walk down the Promenade, everyone is talking about AI, everyone is talking about digitization, and as we bring more and more technology into our lives, just about every one of us finds ourselves vulnerable in some way or another. There is, of course, also the balance between freedom of speech, privacy and what is acceptable behavior online, and these pose challenges. The world over, different jurisdictions are looking at this differently at this point in time, and regulators are coming at it from different perspectives. There is perhaps also a need for more harmonization of rules and regulations, which is what the private sector is hoping for, but we also need to look at the local context of what is going on, and that is what we intend to do with our panel this afternoon. Just a quick take on risk: the World Economic Forum puts out a Global Risks Report, and it highlights that disinformation and misinformation are likely to be the number one global risk over the two-year horizon starting now. With the advent of AI, we're talking about deepfakes and so on, and this is something we need to be very, very watchful and mindful of.

Let me introduce our panel here this afternoon: Julie Inman Grant, eSafety Commissioner at the Office of the eSafety Commissioner of Australia; Maurice Lévy, Chairman of the Supervisory Board of Publicis Groupe, from our Community of Chairpersons; Helena Leurent, Director-General of Consumers International, from Switzerland; and Meredith Whittaker, President of Signal, who joins us from America. Thank you very much for joining us here today. Let me start with my very first question. Vulnerabilities come in different kinds: across demographics, across race, across jurisdictions. We need to look at the local context, but we also need to look at harmonization of rules and regulations. As a regulator, what is the biggest challenge that you face today?

Well, it is a challenge, because the internet is global and laws tend to be national and local, and you have to understand your local context. We're moving a little bit away from referring to "vulnerabilities" online, but we can very much see marginalized communities that are particularly targeted online, and what's really important to understand is the way that the harm or the abuse manifests differently. I'd also like to start by saying that, as a regulator, I see my job as also helping to harness the benefits of the technology, and making sure that our citizens are armed to protect themselves but also to reap those benefits. But just to give you an example: Indigenous youth in Australia are much more likely to be creative, to connect, and to be engaged in current affairs online, but they're three times more likely to receive online hate. If you are a young person in Australia who identifies as LGBTQI, you're twice as likely to receive online hate.
And again, of those with a disability, one in four have actually experienced violence online for having a disability. Yet at the same time, they say they feel like they can be more themselves online. So again, how do you get that balance right, and make sure that you're looking out for those who are the most vulnerable to attack, and that you're working with the platforms so that they understand those nuances? Because a content moderator in the United States is not going to understand the cultural context that's particular to Australia.

A very important point that you make about the local context versus the global context. But what are the interventions that you've made as a regulator to try to ensure that you make online spaces safer for those who find themselves most vulnerable?

Well, we talk about the three Ps. We start with prevention, education and research, so that we understand who's being harmed and how the harms manifest; we use the trends data from our complaint schemes, which I'll talk about. Then, in protection, we have a broad set of regulatory powers, including complaint schemes where, for instance, if a child is seriously threatened, harassed, humiliated or intimidated online, and they report to a social media site and the content doesn't come down, we serve as that safety net and they can report to us. That triggers an investigation; we need to make sure it reaches a threshold. But this is where the collaboration with the platforms happens, because we can say to Instagram or Snap: this meets our threshold, but it also violates your terms of service; do you want to take it down voluntarily? In 90% of cases they do, and in most cases young people just want the content taken down. And we know that the more quickly we do that, the more we reduce the mental distress. We have a scheme for image-based abuse, which also covers sexual extortion and deepfakes, and we have remedial powers, so we can target both platforms and perpetrators, because we have to remember it's the bad actors who are weaponizing this technology against people. We also cover child sexual abuse material, terrorist content and adult cyber abuse, and then there are some systemic powers. And I'll finish by saying we also have to look forward and place responsibility back on the platforms themselves through safety by design. Just as we expect cars to have seat belts, airbags and anti-lock brakes embedded, we need virtual seat belts and digital guard rails erected at the beginning of the process, not after the harm has been done.

That's a nice analogy. Meredith, let me come to you with that. Technology presents us with the problem, but technology will perhaps present us with the solution as well. As you look at some of the issues that have been raised here this afternoon, the role that platforms and tech companies will play in collaboration with regulators, what's the path forward? Because clearly self-regulation is not enough any longer.

Well, you know, that's a tricky question, in part because I've watched the lobbying dollars flow into DC and Brussels, and it's not clear that what we're talking about is collaboration so much as influence. So when I look at a lot of the accountability regimes being put forward, oftentimes what I see is regulators left to clean up the collateral consequences of a very toxic business model. I have been online since I was a young person. I'm not that young anymore.
And I remember back in the day, you had Usenet message groups, these sort of nerdy online communities. They were self-monitored, self-moderated. You had Google Groups for different interests; there were blog comment sections that I was active in. And all of those always needed some community moderation, right? I've never seen a platform that doesn't need some standards, or some way of ensuring that the guy who posts 50 paragraphs about his pet project doesn't get to interrupt everyone every time, or that the person who is harassing or horrible is not permitted free rein. However, the issue that I see is that we have a surveillance-driven business model, based on surveillance advertising, that has grown these little autonomous groups into a global, one-size-fits-all set of platforms that are trying to boil the ocean around standards that cannot be generalized. And so what you get then is, what I would say is, a political issue, where different opposing sides use issues of moderation, issues of online harm, basically to try to push different agendas. I'm from the United States, so this is not theoretical for me. Right now we have attorneys general in Florida deciding what harmful content is and is not. They're including gender-affirming care. They're including access to LGBTQ resources. So we're in a scenario where that one-size-fits-all model, in a politically volatile election year, is something that really frightens me. And again, I wanna turn our gaze back to the platforms and to this business model. So we're looking at surveillance advertising as a practice, we're looking at algorithmic amplification and engagement-driven business models, not simply trying to clean up and taxonomize the content that spills off of these, because I think we need to attack this problem at the root. And I'll just close by saying this is why I'm so proud to work at Signal, because we are a nonprofit. We do not have engagement-driven anything. We're an interpersonal communications app, and we go out of our way to avoid surveillance and to keep vulnerable people, who might be subject to harmful surveillance by, you know, the Iranian regime, you name it, safe, so that they can communicate and live their digital lives without that type of oppression.

Important points that you make. And Helena, let me come to you with this, because these are complex issues, and as Meredith pointed out, there are no easy answers or easy solutions, but it comes back to accountability. Who do you hold accountable? Are the tech platforms to be held accountable, and who will hold them accountable, in what jurisdiction? How are you looking at these issues today?

We're an NGO for consumer rights in 100 countries, which probably tells you how mainstream this is now, right? Many of those consumer rights groups are banding together with digital rights groups, with green groups, with representation for vulnerable communities, because this is now something for us all. What's interesting is that consumer law tends to treat everybody as the average, rational, reasonable person, right? But there's increasingly a wave of saying we are all vulnerable at some point or another. So even the mainstream of consumer protection law is starting to think: actually, this is situational as well as about specific groups.
Now, this may sound difficult, but we cannot hold the individual accountable. We cannot just keep saying: be aware, be aware. Yes, one always has individual rights, but beyond that, we hold businesses and governments, across the piece, accountable, because each one can take action on a different piece of this: from information to individuals, so that we can understand better, so that we can measure, so that we can then put in place the playing field that helps us look at our data and the systems and the flows behind it. Let's take one example. There are discussions going on at the G7 about data flows and how that works around the world, and they do not take redress into account. If you want to build trust, it's not just about transparency at the front; it's about when things go wrong: how do I get redress? So the answer is that there is a broad range of actors, and each one has a role. And let's make this a consumer protection issue broadly, beyond protecting specific groups formally considered vulnerable.

No, I agree with you on that. And I just wanted to understand from you, because you said you're working across 100 different countries at this point in time: are you seeing a sense of urgency to address this issue today?

Absolutely. There is urgency from rights groups that have tried for decades to get this onto the agenda. And for consumers, I think you're seeing an increasing recognition of the space we're in. Here I'm not talking about groups traditionally considered vulnerable; I mean, there are marketing studies that say women feel less attractive on Monday mornings, so that's when you're sent certain ads. I have gambling issues, so I may receive ads for casinos, et cetera, et cetera. The things that helped make sure we were protected in the traditional marketplace are not in place here, and it's invisible; we can't see what is going on. So yes, for consumers, we are seeing increased concern, but also increased anxiety, because it's not clear what we as individuals can do anymore. Hence, back to: let's look at those bigger changes.

Yes, there is an asymmetry of power as well. Maurice Lévy, let me come to you now. You've been listening to many of the specific as well as the general issues that we've brought up. What concerns you the most about the power asymmetry that we're seeing, the influence that Meredith spoke of, and the need to protect not just the vulnerable, but virtually everybody who is online today?

Well, I think that everyone is vulnerable online. It's not one specific group. There are some who are more vulnerable than others, but as soon as you are online, you are immediately at risk. It can be a risk for a corporation, ransomware, a hacker, whatever; it can be a risk for kids, and we have seen how much aggression, harassment and fakery has led to some very dramatic situations. And it can be a risk for average people. So when you are online, it is more complicated than crossing a street in New York or Paris at the worst time of day. At every single moment, you risk being a victim. That has to be noted by everyone: the day you open an account, the day you answer an email, you take on some risk.
Then there is obviously the fact that some corporations, some platforms, are moving in the direction of more moderation, mainly based on algorithms and AI, and sometimes on people who look at the messages. And sometimes what they see is so horrible that, as we know, it has been very difficult for some people in the Philippines and elsewhere, spending eight hours a day looking at violence; they end up resigning because it's really very tough. And there is also something we have to take into account, which is the complexity. We are in a world which is global, and everyone has access to the internet, and the complexity is such that it is extremely difficult to have something that is one-size-fits-all; it's almost impossible. Take one single example: freedom of speech. As we have seen with three fantastic ladies, some of them in charge of the most important thing on earth, which is education, they were incapable of explaining what their attitude should be if somebody was calling for genocide. It was "depending on the context," because freedom of speech has to be above everything. If you go to a country like France, you have boundaries, you have limitations; there are things that you cannot do. The freedom of one person stops where it encroaches on the freedom of another. So hatred, racism, anti-Semitism are forbidden by law, and therefore people have boundaries, and if they are calling for terror or asking for a terrorist act, they can be prosecuted. And that is a huge difference compared to the US, where freedom of speech is so monumental, and I respect that, that it is almost impossible to put limitations in place. So when you have to address the issue, and the platform, which is in the US, and somebody who is calling for an assassination, they say: oh no, this is freedom of speech, and you cannot stop that. So it is a territory which is extremely difficult to moderate, to control.

And I will tell you one anecdote, from 2011. There was a G8 in Deauville, and I was asked to organize what they called the eG8, to prepare some guidelines for the G8. We gathered everyone in Paris, from the people who were clearly against any rules to the leaders of the platforms: Mark Zuckerberg, who was a kid at that time, Eric Schmidt, you name it. Everyone was there. And we decided to present some aspects of regulation, just a framework, and we were in front of Sarkozy, Obama, Merkel, the eight heads of state. When we presented, Sarkozy said: this makes sense; what is your reaction? The first to react was Angela Merkel, and she said: I have 5% of my population which is against everything, but that 5% speaks so loudly that the 95%, the silent majority, stays silent. So I'm not going to take any step in regulation to control what's going on on the internet, because this 5% will speak much more loudly than the 95%. And the discussion on just a small framework of regulation, some limitations, protecting the kids, protecting the people who are vulnerable, was not adopted. And I remember Obama saying: in our country, we don't need any regulation, because we have a process and self-regulation, and people are always limiting the risks themselves; therefore we don't need any regulation. And Europe decided, after that, to work on what became GDPR. So this is just the context. If you want to navigate that world, these are risky waters.
And you have to navigate very carefully. From an advertising standpoint, for example, you have to understand that when we are asking for an audience, when we say, OK, we want 2 million people with that center of interest, that profile, we are not going to pick those 2 million ourselves. We go through the tools of the platforms. And most of the time this is a black box, something that we ourselves have difficulty getting into. Obviously, we have taken a lot of action to find ways forward with the platforms, and we have regular discussions, because we need a safe environment. We need to protect the consumers, we need to protect the people, we need to make sure that we are not doing any harm and that we are in an environment which is absolutely safe. And this is really a mission for us.

I understand that, and there are many things you brought up there that we need to unpack. But, you know, Julie, I want to pick up on the point that was made there: just as in the offline world you have very clear rules and regulations in place, the need now is to do the same for the online world. Who draws those boundaries? Are those boundaries being drawn rigidly enough today? Has the self-regulation bus left the station, and does it need to be reined back in now? As somebody who worked in private tech and is now a regulator, you perhaps understand some of the motivations of tech companies as well. But as a regulator today, do you believe there need to be well-defined, well-articulated boundaries put in place by regulators and governments, keeping their local context in mind?

Well, absolutely. And let's go back to 2014, when I was actually interviewing to open Twitter's office for Australia, New Zealand and Southeast Asia, covering their trust and safety as well as their public policy and philanthropy programs. A public figure had tragically taken her life after being terribly trolled on Twitter, and that resulted in a petition, signed by hundreds of thousands of people, that went to the government and said: enough is enough. We have to draw a line; when online discourse moves into the lane of online harm, government should take action. Our ICT minister at the time, who eventually became our PM, was also a technology entrepreneur and a barrister, and he said: yes, freedom of expression is going to be an issue, but nobody can argue that children are not more vulnerable. We already had the online content scheme that deals with child sexual abuse material; we're the hotline in Australia. So we created this cyberbullying scheme. And when you really break it down, I think this echoes a lot of what everyone else has said: we have a really imperfect system. Right now we have these platforms that set their own rules but don't consistently enforce them. We can report, but the average person doesn't have any form of recourse if action isn't taken, and a lot of this is outsourced; volumes are high. The other recourse people have, in other countries, is to go to the police. Well, there's often not a lot of trust in law enforcement, they're not resourced to deal with this type of thing, and people are often just told to get off the internet. And the third sort of dependency is the companies building the tools, the self-empowerment tools needed to report, or conversation controls. Those aren't perfect either.
So that's why they decided to set up a regulator as that safety net, one that would also coordinate all online safety activity and provide education. That eventually evolved to cover adults as well. And it's a very bipartisan issue. So unlike what we see in Congress, where the discourse is often about censoring certain kinds of speech, the parliament decided that, again, we are going to draw a line. We have guidelines, we have to be transparent, and every decision that I make requires an investigation. It has to stand up in a court of law, and it has to stand up to scrutiny beyond the courts; there's an ombudsman as well. There's always going to be subjectivity, but we felt that we had to do something. And it was a very lonely road, because in 2015 we were the only online harms regulator in the world, kind of like leading the peloton with nobody drafting behind you. 2023 changed all that. Now the UK's Ofcom has online safety powers, Ireland has an online safety commissioner, and we've got the DSA across Europe. We've started something called the Global Online Safety Regulators Network so that we can work together. And while political decisions, cultural context and history will determine what is considered safe or harmful in any given country, we can build a degree of regulatory coherence; nobody wants to see a splinternet of regulations, which would be very challenging for the companies to comply with. So we're starting those efforts now, but we are starting from behind. We as governments have to work together to counter the wealth, the stealth and, frankly, the power of all these technology companies, and to really force them to do better, to prioritize human rights, to prioritize safety, to put humans at the center.

So, Meredith... Yes, yes, on that point, there is something extremely important that happened in Davos some eight or nine years ago. The five top global agencies decided, with some 30 top advertisers, to hold a meeting here in Davos, and we asked Facebook, YouTube, Google to participate. And some rules were laid down extremely firmly by the advertisers and the agencies: we don't want our ads to be next to violence, terror, et cetera. And we, the P&Gs, the L'Oréals of the world, implemented such strong rules that YouTube was forced to create moderation it did not have before, and to clean up the content. And this happened long before Cambridge Analytica and Facebook. So advertisers want a safe environment; they don't want to put their money where it may have negative consequences, and they want to be next to content which is acceptable for all audiences.

No, and we're seeing that play out even currently as far as X is concerned. But Meredith, coming back to you now to talk about the road ahead: what would you like to prioritize in terms of setting boundaries, in terms of holding people to account? And regulation is always behind innovation, and the pace of innovation is getting faster and faster, especially in the world of AI. So what do you believe are the deficits that need to be bridged at this point in time?

Yeah, well, I want to push back on that framing, because I actually don't think innovation is a synonym for decisions made by companies that commercialize network technology, right? Regulation can be innovative. The way we approach these issues can be deeply innovative.
We can innovate a new business model that doesn't rely on surveillance advertising, which is effectively the dynamic you're describing, and, frankly, the reason your industry has so much leverage over the tech companies when very few others have any leverage. So I think we need to think about these paradigms, and I do think we have some traditional tools in the toolbox. Privacy is one. A very moderate reading of GDPR could easily be used to ban surveillance advertising. That's not controversial; what is controversial is enforcing it when you're dealing with the US government and US-based companies that now have a monopoly on this industry. So if we were to ban surveillance advertising, what are the collateral consequences? Well, the engagement-based, virality-conducive news feed, which is what we're really calling a platform here, would no longer be feasible. The desire to suck up all of the data about our lives, our communications, our partners, our transactions, our locations, would no longer have such a strong financial incentive. We would actually be completely shifting this ecosystem, so the rage bait, the find-a-friend algorithms, all of these things that make these platforms so sticky and, frankly, are at the heart of so much of the harm here, would no longer be structurally incentivized. So to me, that should be the target of these very legitimate calls for accountability. We do need big tech accountability, and we've needed it since the 90s, when the Clinton administration threw off the shackles and said surveillance is permissible if it's done by the private sector. In the US, we still don't have a federal privacy law. So I think we need to go back and atone for those sins, but we cannot let calls for accountability that are so legitimate, calls I lost my job at Google over, be hijacked in the name of expanding the surveillance powers of these platforms. Giving government more surveillance power is not necessarily going to address these harms. My big concern, at Signal, is the calls to scan encrypted content, which would effectively nullify the only technological tool we have to preserve the norm of private human communication in a world that is inundated with mass surveillance, generally led by a handful of companies based in the US, followed by China and the startups around them. So I think we need to be really cautious, particularly now. And I speak as someone coming from the US, where we're seeing government powers used over these tech companies, and sometimes in collaboration with these tech companies, to crack down on what was formerly legal behavior. And I want to give one example before I close this comment. Facebook, a famously problematic platform, to be diplomatic about it, turned over messages sent on Facebook Messenger between a mother and daughter based in Nebraska. They were talking to each other over Facebook Messenger about accessing recently criminalized reproductive healthcare in a state that had made it illegal after the Supreme Court ruling. They're now serving two years in prison, and those messages were the evidence presented as the basis for criminalizing them.
I'm slowing down as I talk because, you know, the specter of this one company, holding dossiers beyond the imagination of the most authoritarian intelligence agencies, turning data over to an authoritarian regime, or to attorneys general in Florida who want to ban queer children from accessing peer support and resources, is frankly terrifying to me. So I want those platforms to go away. I want them to be held strictly accountable. I do not want a bait and switch where accountability becomes simply extending their powers to new actors.

Helena, on the issue of privacy, and I think Meredith made a very important point there, we need to ensure that we have privacy and data protection laws in place. In many countries, including where I come from, India, we very recently passed data privacy and protection legislation. Are you seeing active engagement between civil society and governments in the countries you deal with, a move towards ensuring that legislation is in place? Enforcing it is a separate matter, but to have the legislation in place to start with?

We've seen that only 60% of countries have any form of legislation around online safety, and we're certainly seeing greater consumer demand for privacy. Privacy and the free flow of data is actually a consumer need recognized at the United Nations; it should be part of every single country's consumer policy as a baseline. And the Norwegian Consumer Council has done great work looking at how you can make this work without surveillance advertising. There's even work now looking at personalized pricing: taking the consumer side of this, we've seen how you can be charged four to six times more for the same thing. There are many, many examples that are very difficult for us to uncover because those rules are not in place. So I think you're seeing growing concern from consumers, certainly from our consumer advocates in the network, though not necessarily action as fast as you would like, hence the need to band together even further. So I think you're just going to see greater unease.

Greater unease. And I would expect that that is going to be the situation we find ourselves in. What are the priorities for you now, on the back of what you've already been able to do? What has worked, and what will the priorities be now?

Right. Well, we're working in multiple areas, looking at different harm types which are proliferating. Sexual extortion is one we've seen triple. We deal with that through our image-based abuse scheme, where we remove intimate images and content from the web at the request of the Australian who is the subject of the image or video. This also covers deepfakes, and we've just taken action against an Australian who was making deepfake intimate images of prominent women, mostly women in parliament. An injunction was placed against him; he broke it, and he's now in jail. That's definitely an area of focus. Sexual extortion is really being driven by organized crime, but we've done a lot of work and we've seen how it's manifesting: it's generally targeting young people, young men between the ages of 18 and 24, on platforms like Snap and Instagram. We've got a very clear idea of offender methodologies.
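One of those methodologies, as described next, is reusing the same images across thousands of fake accounts, a pattern platforms can flag mechanically. As a purely illustrative aside, here is a minimal sketch of perceptual hashing, one common duplicate-detection technique, in Python with Pillow. The function names and the known_abuse_hashes set are hypothetical, and production systems use far more robust schemes (PhotoDNA and the like) plus human review:

```python
from PIL import Image

def average_hash(path: str, size: int = 8) -> int:
    """Downscale to an 8x8 grayscale grid; each bit marks a pixel brighter than the mean."""
    pixels = list(Image.open(path).convert("L").resize((size, size)).getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Count of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_image(upload_path: str, known_abuse_hashes: set, max_distance: int = 5) -> bool:
    """Flag an upload whose hash sits within a few bits of a previously reported image,
    so the same picture still matches after resizing or recompression."""
    h = average_hash(upload_path)
    return any(hamming(h, k) <= max_distance for k in known_abuse_hashes)
```

Matching near-duplicates like this is what lets a platform act on an imposter-account campaign at scale rather than one report at a time, which is the safety-by-design point that follows.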
We know they use fake and imposter accounts, and you can find thousands of the same images. This is where safety by design comes in. We've talked to these platforms about what they need to do to harden their systems so that they're not enabling these organized criminals to target young people. We need to get them to start taking action; otherwise we're playing a big game of whack-a-mole. We're also very focused on transparency, and we've used our transparency powers to really find out what's happening under the hood: what technologies are you using, and what are you doing or not doing to scan for child sexual abuse material or terrorist content? We've just issued a notice to X Corp around online hate, where we were able to get a real sense of the extent of the cuts: safety engineers by 80%, content moderators by 30%, public policy people by 70%, and then previously suspended users were reinstated. It's like Volvo firing its designers and engineers, then keeping the traffic enforcement people and the ambulances out while putting all these dangerous drivers back on the road: you're creating a perfect storm for online hate. So there are lots of different tools in the toolbox that we'll be using in different ways, but ultimately the aim is transparency to achieve accountability, and to get companies to raise their safety standards so people can have safer, more positive experiences online.

I think that is the bottom line. I'm going to open it up for questions; if there are any, raise your hand and we'll get a microphone across to you and have the panel address them. Anyone here with questions? Well, the panel has done a great job answering many of those questions already, but we've got about five minutes on the clock before we close. Meredith, I'm going to start by asking you: what is the thing that you fear the most as we look at where we find ourselves today? The need for accountability, the need for collaboration, the need for regulation: what do you fear the most in this situation?

Well, I fear malformed regulation. I was very concerned about some clauses in the Online Safety Bill, now Act, in the UK, which has some very good things: researcher access, great. But it threatened, and this is still being worked through in the consultation phase, to give their regulator a power that could have been used, and could still be used, to mandate mass scanning of end-to-end encrypted messaging. And so, of course, what does Signal do? Signal is the world's most widely used private messaging service. We use end-to-end encryption, which is a set of technologies that is very binary, right? It either works or it doesn't. And we only have a handful of very robust encryption systems that actually work to mathematically keep data private online. So either it works, or it's undermined: if you bolt a scanning system on in front, that creates a significant vulnerability and access to data that people mean to keep private; or you adulterate the encryption itself, creating a backdoor of some kind. There is a longstanding expert consensus that you simply cannot do that without fundamentally and disastrously undermining the one technology we have to ensure meaningful privacy. Meaning Signal can't see your messages. Hackers can't see your messages. Hostile nation-states can't see your messages. No one can see your messages, if it's really private.
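To make that works-or-it-doesn't property concrete, here is a minimal sketch using PyNaCl's authenticated public-key encryption. It is an illustration only, not Signal's actual protocol, which layers the double ratchet, forward secrecy and much more on top: with the right keys the message decrypts; without them, or with a single tampered bit, decryption fails outright.

```python
from nacl.public import PrivateKey, Box
from nacl.exceptions import CryptoError

alice, bob, eve = PrivateKey.generate(), PrivateKey.generate(), PrivateKey.generate()

# Alice encrypts to Bob: her secret key plus his public key.
ciphertext = Box(alice, bob.public_key).encrypt(b"meet at noon")

# Bob decrypts with his secret key plus Alice's public key; this is the only
# combination that works. "No one can see your messages" in code form.
assert Box(bob, alice.public_key).decrypt(ciphertext) == b"meet at noon"

# An eavesdropper without the right keys gets nothing.
try:
    Box(eve, alice.public_key).decrypt(ciphertext)
except CryptoError:
    print("wrong keys: decryption fails outright")

# Tampering with even one bit is rejected whole, never partially decrypted.
flipped = bytes(ciphertext[:-1]) + bytes([ciphertext[-1] ^ 1])
try:
    Box(bob, alice.public_key).decrypt(flipped)
except CryptoError:
    print("modified ciphertext: decryption fails outright")
```

A client-side scanner has to read the plaintext before encryption or after decryption, outside this guarantee entirely, which is why, as the next comment argues, bolting one on eviscerates the privacy rather than coexisting with it.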
If you do bolt on scanning, if you do bolt on a backdoor, if you use one of these sorts of mechanisms, you've actually eviscerated privacy and given all of those actors access. This is one of those cases where there's no magic wand for the good guys that we can keep secret from the bad guys, and it's really important that we recognize this, because if we undermine these technologies, we have basically ceded the ground to the surveillance business model, and to the companies, in partnership with whatever regimes they decide to partner with, around the use of and capability for mass surveillance, particularly of interpersonal communications.

Well, I think the concern Meredith is raising there is that perhaps one of the unintended consequences of taking power away from the tech companies will be more power given to governments, and then there is the question of how governments choose to use that power. But again, the citizen isn't at the heart of these conversations, and that seems to be the problem: we're not able to address that issue realistically in any fashion at this point in time. It's about regulating tech, or it's about giving power to the government, but the citizen is not at the heart of any of these conversations. Helena, final comments from you.

I think you can put the citizen into those conversations. A lot of what we would talk about is representation, and that can be done in a variety of different ways: involving and allying consumer organizations with, for example, enforcement. We're doing some work on scams, which now run to something like one trillion dollars in value. That's about 1% of global GDP, and it's not talked about in Davos. Some 56% of people don't bother reporting because they don't think anything will come of it. I mean, it's crazy that we've let ourselves get to that point. There are groups that can bring the citizen in, but it is beyond one individual; you need the enforcement agencies acting on this at this point. The other way you can bring people in, though, is around this point about innovation. Look at the sectors that have to take what we're talking about here and make it work for people as we move to a greener, more sustainable future; think about this applied also to digital finance, to energy. We need the business models that are going to work in the marketplace of the future. So it's not just this conversation; this plays out across every single sector. So individuals, yes, we can be aware, we can learn, of course we're going to learn, but you need that representation, and it matters for our future and for trust in general, the entire piece.

Maurice Lévy, the final comment from you.

I think that the digital platforms, the internet, did not come from Mars. They were created by human beings, and human beings are what they are. There are the good ones and the bad ones; those who take maximum advantage of the system and those who protect people; those who want to interfere in elections and those who want to fight against that. So what we need to do is use all the possible powers, the economic power, which is probably the most important one, and the regulators, and try to bring people, as much as we can, into a channel which is a safe one.

Julie? Oh, I thought Maurice had the last word. No. It said zero. Oh, it doesn't say zero. Very quickly then, 10 seconds. And the question was? The road ahead.
The road ahead is getting companies to think about safety by design, as a regulatory enablement strategy, so we don't have to keep playing a game of whack-a-mole. We apply those types of standards to food safety and to consumer products; why should there be technological exceptionalism when the internet has become an essential utility and people deserve safe, positive, private experiences online?

Thank you very much, and thank you all for joining us here this afternoon. There are many important questions that will play themselves out across different geographies, and I know this is something that tech companies are dealing with, and that governments and regulators are dealing with. But at the heart of it, as we've all concluded, the citizen must be at the heart of the conversation and at the heart of the solution, and having privacy regulations in place that actually work for the benefit of the citizen is something we need to be mindful of. Many, many thanks for joining us this afternoon; we appreciate your time. Thank you very much, ladies and gentlemen. Thank you.