It's great to have you here at Slush today. So you and I talked a few months ago, and since the last time we talked, I mean, and I'm sorry to kind of start with Facebook, but I think it is a really interesting conversation to have because of the reporting the Wall Street Journal has done recently, and the release of the information that showed that Instagram was well aware of the damage it was doing to teenagers' mental health. So we're having congressional hearings. We have strong words from politicians in the EU. We have promises from Facebook and others that they will change, but very little has actually changed ever since we've been having this conversation. So I guess my initial question for you is, why is it so hard for lawmakers to get their arms around this really important challenge of our times?

Right, so there's a lot there. And first of all, thanks for having me. It's great to be at Slush for the first time. In some ways, we talk about Facebook too much, I think, because there's a systemic problem with a lack of access to information about what happens inside of companies that could be harmful, and we must understand it as societies, not just as CEOs and engineers. But what we've learned now has created new momentum, particularly in the United States. And I think if we compare the United States Congress and the European lawmakers, there's a world of difference. So if you ask why it is so hard to come to grips with this: the EU is actually plowing ahead, proposing proactive rules on content moderation responsibilities for platforms, more clarity about their role as gatekeepers, so clarifications of competition and antitrust rules, rules around political advertising, artificial intelligence, data governance. So actually a lot is in the pipeline there. I think the big question is, can the US move, particularly when it is as deeply polarized as it is?
And what would be your view on that? Is it able to actually make real progress in this area, or is this fairly low priority for the administration?

Well, I think there's a political side and then there's the legal side. When you look at protecting children, at lying to advertisers, at human trafficking, at all kinds of misleading information, or at a company making promises and not keeping them, I'm pretty sure that the courts could investigate some of the behavior. The SEC, for example, could do an investigation. But then there are also areas where clearly there need to be guidelines. And I think particularly that question of access to information, so that we don't have to rely on whistleblowers, on leaks, on incident reporting, but can get a more systematic sense of what is working, what is not working, when there are harmful consequences, whether data is processed with respect for rights protections, and so on and so forth. It shouldn't be scandal to scandal to scandal, hearing to hearing to hearing. There needs to be a systematic solution, which should also lead to more predictability for the companies. And so clearly the era of self-regulation is over.

Do you have a view on the Facebook oversight board? Has it been effective in any way? Or is it just window dressing that gives a kind of heat shield to the executives at the top of the company?

Well, if we conclude that self-regulation hasn't worked, and I think that's a safe conclusion to draw by now, then you have to wonder what a Facebook oversight board is going to add. Facebook funded, Facebook selected, but more importantly, with a Facebook-controlled pipeline of cases that it can deal with and other issues that it cannot deal with. So I have a lot of respect for the people who are on the board. A lot of them are excellent intellectuals and civil society leaders.
But I think as a governance construction, it's a distraction from what Congress needs to do, from the kind of truly independent oversight that is needed. And so whatever you may think of the oversight board, I'm on a sort of counter-board, which we call the Real Facebook Oversight Board, which is a kind of pun to say that there should be actual independence. But whatever you may think of the oversight board, at least make sure it's not a distraction from the democratic legislation and oversight that is desperately needed, not just for Facebook. I mean, YouTube gets away with so much harm and damage without ever being in the headlines. I don't think the CEO of YouTube has ever testified before Congress. So I would much prefer a broader view than the flavor of the day right now, which is to go after Facebook too.

So are you confident that we're now moving into a time when it's not the boardrooms that decide what the limits of free speech are, but actually legislators, lawmakers?

So speech is one aspect, and it's immensely important for freedoms, for the rule of law, for all kinds of other rights protections. But it's certainly not the only problem or right that is at stake when we look at the online environment. What about freedom from discrimination? What about competition? What about public health? When does the amount and the effect of disinformation bleed into a public effect to the extent that people are dying because they believe that the vaccine is the poison instead of the solution? And that kind of trajectory, when it scales, when it goes from a small group of people to a significant percentage of the population, means you cannot focus only on the need to protect speech. You cannot speak only of speech when data is harvested, when there's micro-targeting of ads, when data is brokered, you know, collected and sold and assembled and repurposed.
So I think we've had a little bit of a speech dogmatism, particularly in the United States because of the First Amendment of its Constitution, a very strong tradition, but a lot of the issues and harms with social media are not just speech issues. There are rights issues, there are data governance issues, there are oversight issues, and all of those deserve the same attention.

So if we determine that this commercially amplified content is causing damage to society, to individuals, how do we set about drawing the line? Like, at what point do we say, actually, this is damaging society in ways that are not tolerable and are undermining democratic rule of law and social order?

Well, I think there need to be a couple of baseline provisions that also have to do with manipulation that can be gleaned from the platform. So for example, if you have an account uploading an allegation of a crime every three seconds for 72 hours, you can be pretty sure it's not a person. Right. And so besides looking at what is posted, you can look at who is posting it and at the kind of behavior. So for example, pattern recognition, or, you know, the creation of, I don't know, 5,000 accounts within 48 hours from a specific district must be suspicious. So I think there can be enforcement of platform-made rules about what authenticity of accounts might mean and what kind of behavior is tolerated. But then we also truly need to understand better what the business models do as an effect, what machine learning may cause as intended and unintended consequences. Because, you know, what strikes me is that we often hear these promises by the platform companies. For example, after 14 years, Facebook said we're not going to allow Holocaust denial anymore. Long overdue, but at last it happened. Even after that commitment was made, Holocaust denial content would pop up over and over and over again. On Amazon two weeks ago, the top recommendation was an anti-vaxxer book.
How is this possible in the middle of a global pandemic? I can't answer that. You know, I don't have the full view. My colleagues at Stanford don't have the full view. But the fact is that we need independent research to understand what is happening, whether promises are kept, and what the unintended consequences could be. And only then, I think, can we make better decisions about what needs to happen next.

So that's a great question. There's still kind of, you know, the black box. We don't know what's inside it. Like, how do we get to a point where civil society, where journalists, watchdogs, researchers, academics get access to this? Because we hear from people in technology companies constantly, oh, politicians and these kinds of people, they don't understand technology, they don't understand what we do, whilst at the same time not giving access. Clearly there's a tension there. But I guess the question is, how do we ensure that actors on behalf of civil society understand what is actually going on with some of these algorithms?

Yeah, so I think we need provisions, mandates for access to information, that may differ. So, you know, the non-discrimination watchdog, the antitrust regulator, they need a different kind of level of access to information. Already the competition watchdog can probe information, emails going years back. You know, they have a mandate to ask for that. If you knock on the door as a journalist, forget about it. I mean, they would never hand it over voluntarily. For members of parliament, I think it would be the same, unless executives are forced to testify under oath. But, you know, this kind of access to information for regulators is unique, even if journalists and civil society and citizens also deserve to be able to scrutinize the kinds of services that are impacting society, impacting their lives, the lives of their children, and so on.
So I think it will be a matter of access-to-information guarantees, you know, with specific levels and purposes. But certainly that's necessary. But I also don't want to give the impression that there's nothing that can be done now. You know, the number of antitrust investigations shows that there are real concerns about lying, about mergers and acquisitions and bundling data, about other kinds of promises made and practices engaged in when it comes to company behavior. So not only will we see a wave of regulatory initiatives, we will also see a wave of court cases playing out.

I wanted to ask you about antitrust. Do you think that the laws we have in place right now are fit for purpose? Do they really work when it comes to digital economies?

I think they need a new interpretation and, ideally, a speedier process. Because, you know, for good reasons, companies can also appeal rulings. You know, companies have rights protections too. But these processes can be dragged out for so long that the effect of a sanction or a ruling can almost be moot. You know, if there is an allegation that impacts the market today, but we will only find out in seven years whether there is going to be a punishment, then the company that was hurt will have been bankrupt, you know, for decades. The founders will have already moved on and done new projects. So it needs to be faster. And there's obviously a challenge with which kinds of sanctions bite. I mean, a couple of billion dollars is an astonishing amount, but not when you actually make many times that. So sanctions need to be proportionate so that they actually have a deterring effect. And then there are questions about how to measure harm. So traditionally in antitrust, one of the big yardsticks was price. Right. So does a consumer pay too much? And now some of the services are free, but you pay with data. So how do you take account of that sort of cost or payment?
Those, I think, are areas where there needs to be clarification. There are other areas too. If you look at Lina Khan, the chair of the Federal Trade Commission in the United States, she is actually arguing that excessive market power doesn't only have economic effects, but also societal effects. So I think there will be a big discussion and question of whether antitrust rules can be stretched to cover all those goals, or whether it's also legitimate to say, look, if democracy is under pressure, we're going to go for measures that target those harms specifically. We're not going to hope that market instruments like antitrust, which serve the very important purpose of keeping competition fair, will as a side effect produce a better democracy.

Sure. So I'd like to move on from Big Tech just for a moment, because I read a piece you wrote in the FT, I think last month or the month before, about spyware. So we've seen the EU tighten the rules around export of surveillance technology, following the revelations about the NSO Group's Pegasus software. Do you think that we now need some kind of agreement between democratic states that goes further and actually brings into play surveillance technology, facial recognition technology, almost like a global agreement?

Well, absolutely. So the rules around spyware are a starting point, but they're actually not comprehensive enough. For example, EU member states can still use spyware that they import, and there are no rules about that, even though you really have to ask yourself, when Viktor Orbán is spying on journalists with Israeli-made software, shouldn't there be consequences? And so I think there are a number of areas, you mentioned them, facial recognition, some other technologies, that basically have disproportionate harms on human rights.
I mean, spyware is sold as a counter-terrorism tool, but it's used much more widely, and I think the question is, is it proportionate to the stated goal? Is the proliferation, the spreading of these systems across the world, not too much of a danger to legitimize their use in the United States, or maybe even Finland, who knows, right? And so I think it is time for democratic countries to come together and say, this is where we draw a line, this is where we think technology harms democracy, fundamental values and human rights, and we are going to create a critical mass together and have rules around these systems. That will certainly not ban them everywhere, but it will begin to put more credibility on the side of democratic governments to say, we are not going to engage in mass surveillance, we are not going to have social credit scoring systems, we are not going to hack the phones of our journalists, because we actually stand for press freedom, for freedom of expression, for the right to privacy, whether it's online or offline.

Another technology, and we're here at Slush where a lot of entrepreneurs are working in this space at the moment, is crypto. We've seen governments act way too late on the harms of social media companies, on the way social media and misinformation have played out. How do you think we can avoid making that mistake with digital currencies, with crypto? Like, how can we put in place rules that prevent harms from occurring?

Well, I'm smiling because I so vividly remember the days when I was still in the European Parliament and the concerns about money laundering through cryptocurrencies were first surfacing. And of course the question came, should we regulate? And then so many investors, founders, tech experts said, no, no, no, no, no, if you regulate now, you will stifle innovation, which is always the magic sentence to oppose any regulatory intervention.
And so actually, the EU deliberately decided to hold off on regulation, and to make sure that efforts to prevent money laundering, which I think are legitimate and needed, do not impede other uses of blockchain technologies that have nothing to do with financial services or money laundering or whatever, so those can still grow and be experimented with. You know, fast forward to where we are today: there will be a sort of showdown between regulators, central banks and crypto companies. I have no doubt about it, because, for example, I learned this week that 25% of young people in the Netherlands invest in crypto, you know, invest slash speculate. It's a huge number, a huge risk. And so I think states will be more and more concerned with keeping a grip on their monetary policy, with being able to manage risk, because, you know, if it's a bubble that ends up bursting, the cost will spread across society. People will lose their savings. There will possibly be panic. There will be ripple effects. So I think it's to be expected that there will be guardrails put in place. Yeah. If people complain it's too late now, they can learn the lessons of having pushed against regulation when it was still early.

Sure. Well, I mean, obviously the Netherlands being the place where the original speculative bubble happened, with the tulip bulbs. Right. So just thinking about this, and thinking about the way that crypto could potentially move markets, it could have really, really significant effects, whether on currencies or whatever. And technology companies are now global actors. You know, we have these kinds of situations with various EU countries, the US, the UK not allowing Chinese companies to be part of their critical infrastructure. How do you think technology is going to play a role in geopolitics moving forward?
Like, how important do you think it is that we establish very clear norms and guidelines, international standards that everyone can follow, so that we know how to protect especially democratic institutions and democratic norms?

Well, I think the moment where technology companies play a geopolitical role is already upon us. It is not the future, it is the present. Because we've seen a huge amount of outsourcing to companies, not only to build critical digital infrastructure but to defend it. And now part of defence is also offence, as especially the United States has, you know, openly declared. And so you can see that a number of tasks and responsibilities and roles that were historically squarely in the hands of the democratic state are now entirely or partially outsourced to companies. And I think concerns about that balance will grow, and that there will be a rebalancing. But there's also a question mark. For example, if you look at the relationship between the state, meaning the Communist Party, and companies in China, there's the question of how much autonomy big tech companies will have and take in either adhering to rules or trying to circumvent them. So for example, a lot of US tech companies are on the one hand vying for government contracts, right? Very lucrative contracts, from the Department of Defence or otherwise. But on the other hand, they have market interests in China, or they have supply chains in China. And so will there be a moment where a choice has to be made, where the two cannot go hand in hand, you know? And will companies decide to stick with the democratic government of their home turf, or will they seek to maximize profit like the spyware industry has, completely devoid of any declarations of values, norms and principles?
And so I think the power of some of these geopolitically relevant tech companies is already such that they may opt to sort of go it alone and drift further apart from the states where they were once founded.

And of course the other area that is much discussed at the moment is how we implement ethical artificial intelligence. How do you think about that? I mean, we've got three minutes left. Okay, great. I'm not expecting you to solve it, but what are the top-line thoughts on how we approach that?

So ethics is such an interesting phenomenon, because it sounds so fantastic. And I think everyone in every process can use some ethics. You know, it's good to think about ethics at school, ethics in healthcare, ethics around how we deal with each other now in the COVID pandemic. But at the end of the day, ethics is a soft, debatable, philosophical concept. I would encourage everybody to read the AI ethics guidelines of China. Just, you know, read them, and they will sound very accessible, almost similar to what may come out of ethics guidelines of the EU or a civil society consortium or what have you. But the question is how they can be enforced and what real change they will ensure. And so I think ethics in that sense will not be enough, because often these seem to be declarations of good intentions. Sometimes they are used as a PR exercise. And so I think you need hard backstops and more serious and enforceable oversight over AI-enabled processes, just as we would have with many other technologies used to achieve something. There seems to be, and this is a broader comment than just AI, a sort of expectation that the use of technologies should somehow be free of oversight and regulation.
And there has been a window where there's been relatively little, but that window will close. And just like in many other industries, transportation, automotive, chemicals, you know, there are risks and there are opportunities, and it's important that there's also safeguarding of the public interest, of people's rights, of public safety. And we've reached that point now with technology. And I think with AI particularly, there are new challenges: how to understand the processing of data, how to assess the risk coming from unintended and unforeseen consequences, and where that risk should play out. Do you just see what happens in society, or should there be more research in a sort of guarded environment, drawing lessons first and only then deciding how to move forward? So AI presents new questions already, new harms, right, of bias, of unintended consequences compared to what it was designed for. But oversight will come, there's no question.

Well, that's probably a great place to end on. Oversight will come, no question. What's going to be interesting is, you know, since the last time we talked so much has happened, you know, in Australia and elsewhere. So I'm fascinated to see what the next six months will bring in terms of regulation and the way that we really start addressing some of these systemic challenges that you outlined at the beginning. So Marietje, thank you so much for your time. It's been great to have you here at Slush.

Thank you so much.