shop 2022. I have the distinct pleasure of introducing our keynote speaker, Larry Lessig, who's a professor of law and leadership at Harvard Law School. Among legal scholars, Larry is known for much of his work over the past few decades, including Creative Commons, as well as advocating for the public domain, copyright reform, and campaign finance reform. And of course, many of you know that Larry is no stranger to the Wikimedia world. In fact, the Wikimedia projects have benefited substantially from a lot of the scholarly and foundational work that he has done over the past two decades. In 1999 and 2000, Larry wrote two papers, "The Law of the Horse: What Cyberlaw Might Teach" and "Code Is Law." And these papers really revolutionized the way we approach policy questions related to software and code and, more broadly, the web. He taught us that with each line of code that we write, we make a decision about not only the code, but also fundamental issues such as freedom and privacy. And he anticipated some of the debates that we are still seeing today about regulating cyberspace when code is treated as law. In 2001, Larry founded Creative Commons, a non-profit organization devoted to expanding the breadth of creative work that is available to all of us across the world to build upon and legally share. Those of you who have published a paper in The Web Conference have benefited from the CC BY license, which is one of the licenses that Creative Commons has put out there. Today, of course, Creative Commons licenses are core to Wikipedia's ability to surface text and media to people across the globe. Larry, thank you so much for accepting our invitation and being at Wiki Workshop. And please take it from here. Great. Thank you so much. So I'm actually in the bowels of New York right now planning the revolution. And it was not going to be a great space. I'm actually in like a painting room or something.
It wasn't going to be a great space to try to coordinate the technology of making a presentation properly. So I got up at the crack of dawn yesterday morning and I recorded what I want to present to you today. And so I want to present it. It's about a half an hour long. And then I'm eager to turn that off and come back for questions. So if it's okay, I'll just launch right into the recording and I hope that this is going to work. I'm going to turn my video off and then I'm going to start with the recording. So in 1999, I published the book Code and Other Laws of Cyberspace. And that book was quickly rendered as a meme: code is law. But actually the point of that book was to tell a story about regulation generally. There was, I said, at the core of any regulation, the pathetic red dot, us. And we were regulated by many modalities of regulation. Law regulates us when the law says do not speed. Norms regulate us when we say, for example, we declare our pronouns. Markets regulate us when they say, we'll pay you $15 an hour rather than 10. And most interestingly, architecture regulates us, the world as we find it. My favorite MIT t-shirt: 186,000 miles per second. It's not just a good idea. It's the law. This idea, that the way we find the world gives us affordances and restrictions, is itself a kind of regulator. And as applied to cyberspace, that regulator of architecture is code. But even more fundamental to the argument I was trying to make 23 years ago was that at the top of this structure of regulation was the law. Because the law, uniquely among these regulators, could reach out and affect the other regulators. The law, of course, regulated directly. But the law could be used to change the norms. Think about norms around smoking, to make it harder to engage in smoking. The law could regulate the market for cigarettes by, for example, taxing cigarettes.
Of course, we subsidize tobacco, but put that aside, to make it harder to consume cigarettes. And the law could, as the Clinton administration considered, regulate the architecture of cigarettes themselves, reducing the nicotine in cigarettes to make the devices less addictive. The point then was that we need to recognize the mix of regulation, and that the mix can change. And that there was an incentive to change that mix, at least as it applied to cyberspace. The architecture of freedom that we celebrated so much at the birth of the Internet, an architecture that gave us free speech and privacy, and an open, neutral platform for innovation, would become, I argued, an architecture of control. So we needed to defend the code to sustain our values. Now, that point seemed to me obvious back then. Turned out it wasn't. The first time I presented this argument to a group of technologists, a woman came up to me literally crying, saying, I don't do politics. She wanted not to do politics. I had to insist: her code was politics. Or when my book was reviewed by the New York Times, David Pogue wrote, Lessig plays digital Cassandra and predicts that the Internet will become a monster that tracks our every move, but that no one will heed his warning. He went on, these discussions are thoughtful and measured, but the premise that frames them all is shaky. Lessig doesn't offer much proof that a Soviet-style loss of privacy and freedom is on its way. And unlike actual law, Internet software has no capacity to punish. It doesn't affect people who aren't online, and only a tiny minority of the world population is. And if you don't like the Internet system, you can always flip off the modem. Well, that's true. You could always flip off the modem. But the point was that I thought Pogue was missing something back then. And unfortunately, I think 20 years later, most people agree he was. Okay, that was the argument then. But the point now is different.
I think we need to recognize that I, with my lawyer-centric view, was wrong about the dominant force in this architecture of regulation. It's not the law that's at the top. It's the market that's at the top, the market. Because the market too can affect these other modalities of regulation. The market regulates directly, of course, when the market says, for example, the cost of living is high in one place and not as high in another place, this cartoon that I saw when I was in Taiwan. Or the market could regulate the architecture of cyberspace, or of the world, to make it so that the infrastructure is more addictive. Or the market can regulate the norms that define how we exist and get along with each other, to make it so we are more eager to spend our time online rather than with each other, as Sherry Turkle puts it so nicely in her book Alone Together. Or the market can regulate the law itself, by driving, through campaign contributions or whatever other influences it might have, the law to change to better serve the market. That was the example we saw with copyright, where section 512 of the Digital Millennium Copyright Act, setting up a notice-and-takedown provision, made it possible for platforms such as YouTube and many others to exist and to flourish. So the point, if you think of all of these together, is that markets are regulating the regulators today: not democracy and its law, but regulation through economic power. Now focus on one critical example of this regulation by the market in our world. The internet has a business model today, not everywhere, but it is dominant. That business model is advertising. Advertising has needs: it has needs for engagement and data. It drives engagement to gather data. So how does it drive engagement? It drives engagement by hacking us. That might be an unfamiliar idea, but I think it's familiar if you think about the idea of body hacking.
So think about food, not food like this, but food like this, or this, or this, or most importantly something like this. I'm talking about processed food, food that is architected and designed, and from that perspective behold the brilliance of this, the buffalo wings, which are the perfect mix of salt, sugar, and fat to make that food addictive. It's a miracle in the sense that Tide is a miracle, because it's just a choice, a design choice by food scientists, by food architects, people who work to engage in a process of science to engineer food to overcome a natural resistance that we might otherwise have to this food, so that we just can't stop consuming it. These scientists are exploiting evolution with the aim, the aim obviously, to make money. Now for some people this is perfectly harmless, but for other people it is not, and they know, the companies know, that we can't resist what they offer us. That's what their scientists tell us. But the key is to recognize that they too, the companies, can't resist providing what they provide. Michael Moss, in his fantastic book Salt Sugar Fat, tells the story of executives in the processed food industry who come to know, come to recognize, the harm they are doing to the consumers who consume their food. And they choose to try to make their food more healthy, and then of course people don't want to eat the healthy version of their processed food, and so the market for those products declines, and then the executives are quickly kicked out, and the companies go back to their old ways, claiming the market made us do it. Okay, so that's body hacking. There's an equivalent in the internet context we should recognize as brain hacking. Tristan Harris, former Google engineer, now founder of the Center for Humane Technology, describes a similar science. It's the science of Silicon Valley. It's the science to engineer attention, to overcome resistance not of bodies but of brains. The means are the same: exploiting evolution.
Turns out we are just wildly responsive to random rewards, or turns out we just can't resist bottomless pits of content. And so, by overcoming resistance, they are addicting us to engage on their platform. And then they collect the digital exhaust, the mouse droppings, to better understand what we want. Not just by watching us, as they might sit by on the side of a road and count cars that go by, but by being active, and poking, and tweaking, and asking us, rendering us vulnerable as users of this platform. Reaching down, as Harris puts it, the brain stack to leverage our insecurity so that we reveal more. This is the business model of Facebook and Instagram: so that we do more, so that we reveal more, so that they see more, so that they can sell ads better. This is the world of surveillance capitalism, as Shoshana Zuboff describes it in her magisterial work with that title. And all this makes, they tell us, the internet possible. It gives us the internet, and it gives them a business. It's win-win, from the perspective of the internet makers. Except that it's not. Because the unintended consequences, and let's really hope they are unintended, the unintended consequences of this surveillance are devastating. They're devastating for individuals, as we see an extraordinary rise in teen suicide, especially among younger people. But they're devastating for society, and particularly for democracy. Because as the AI that is embedded in these platforms selectively amplifies and suppresses content for the purpose of driving our engagement, as it elevates the crazy and suppresses the balanced, because it just turns out we are so susceptible to spreading and engaging with the crazy and not so much the sensible, its choices, the choices of these AIs, have effects. And one effect in the context of democracy is to drive and spread hate. Because it just turns out, too bad for us, but it just turns out, that the best strategy for internet platforms to drive engagement in the context of democracy is to play out the politics of
hate. If they can render us polarized and ignorant, then we engage more. If they can make us hate the other side, not just as different but as enemies, we engage more. The politics of hate is the most profitable for them, even if it is wildly unprofitable for us. As Zeynep Tufekci puts it, companies are in the business of monetizing attention, and not necessarily in ways that are conducive to health or success of social movements or the public sphere. Now, we can see this everywhere, but never so clearly as we saw in the extraordinary events of January 6th. In those events, where people were rallied from around the country to travel to Washington, D.C. to stop the steal, we saw the manifestation of this infrastructure, not just the internet, but this infrastructure plus cable television, to make people believe what makes the platforms the most money. Now, the point is, these people on January 6th were not all crazy people. There were some, many, who had mental issues, but not all. Many of them were people who had just been led to believe, to believe that the world had conspired against them, because the platforms had convinced them as much. Now, many people think it's just, you know, the uneducated. But as the Washington Post reported immediately after the January 6th events, over 70 percent of Republicans said they agreed with President Trump's contention that he received more votes than Joe Biden. Nor was this belief limited to those with lower levels of education: a majority of Republicans with college degrees in our sample said they believed the election results were fraudulent. As they go on, we asked voters whether they thought that, quote, millions of fraudulent mail and absentee ballots were cast, and whether, quote, voting machines were manipulated to add tens of thousands of votes for Joe Biden. Finally, we asked respondents' reactions to the statement that, quote, thousands of votes were recorded for dead people. For each of these false statements, because of course those three statements are absolutely
without any basis in fact or evidence. So for these three false statements, more than 50 percent of Republican respondents said that it was very accurate. Over 75 percent of Republican voters said that each one was very accurate or somewhat accurate. Only about three percent of Democrats assessed these conspiratorial statements as very accurate. This is the product of a media infrastructure that places us into bubbles and speaks to us differently, acoustically separated bubbles, speaks to us in ways that feed what we want to believe rather than what's true. And that feeding has an effect, as we see so many believe what we all can see just is not true. The consequence is a business model that profits from harming us, from harming our democracy. Because what pays them weakens us, and it even weakens them. The extraordinary Facebook Files, revealed last year by Frances Haugen, the Facebook whistleblower, told a story of an extraordinary company, Facebook, a company that had fundamental values that were at the center of its business, values of privacy and integrity and safety. And more importantly, it had an extraordinary range of engineers, decent, honest people committed to these values, engineers who would do everything they could to get the business to support and strengthen and defend these values. And they had plans, they had ideas, they had ways to make sure that the platform would be conducive to a healthy democracy, not constructive of the craziness that we saw. But what those engineers saw, in the conflict between their recommendations to support privacy and integrity and safety, was a business model that eroded those values, because it was inconsistent with the fundamental commitment of that platform to drive engagement, to drive profit. Now, this is inevitable, we need to understand, especially in the age of AI. Think about an idea of instrumental rationality: the extent to which some entity or group is capable of being rational to some end, some particular end. And if you think across time and across
the range of instrumental rationality, we can recognize that we humans have had a pretty good run for a long time. We're better at instrumental rationality than cows, for example. I mean, maybe not ants and bees, but cows, or sheep, or, well, not rats, but, you know, you pick the animals that you think we actually are better at this than they are. But we need to recognize that corporations are better instrumentally rational actors than we are. They can focus on their objective and achieve it more consistently, more reliably, than we humans can. And even better than the corporations are the artificial intelligence infrastructures that get built into the corporations and into our lives. Now, each of these instrumentally rational actors purports, or tries, to control the one that's above them. We try, through democracy, to control corporations. We believe that our government has the right and the capacity to control the corporations. And the corporations believe that, through their management, they have the capacity to control AI. They think that they can set objectives, objectives of maximizing income subject to the need to be safe, or healthy, or to maintain the integrity of the information that they spread. But what we're seeing increasingly is a very different reality of control. Increasingly, it's AI that is controlling the corporations, as we saw with Facebook, or with a particular example in 2017 from Facebook, where it was revealed that Facebook's ads were offering a category of "Jew haters," because the AI had developed "Jew haters" as a potentially profitable category. Now, of course, there was no human at Facebook that picked the category "Jew haters." No human would. But the AI did, because that was just what was maximizing its value. And the corporation, which had set up the management infrastructure to direct the AI, failed to control this particular innovation of the AI. Or corporations, of course, increasingly control us humans. This is a picture of super PAC spending, which has
exploded since Citizens United and, more importantly, the D.C. Circuit case of SpeechNow, making it possible for unbelievable amounts of money to be concentrated in the hands of very few. In the last election cycle, the 10 largest super PACs spent more than half of the super PAC money that was spent, and super PAC money has become the dominant form of spending in the context of political elections. This is a way for those with that money, primarily those with corporate interests, to leverage that power to control us. So in these three models of instrumental rationality, increasingly we should recognize it's the AI that will dominate all of them, and that in this dance for the corporations to regulate, or control, or condition the AI, we should see that the AI's signal, that this is the way to maximize what you told me to maximize, will increasingly dominate. So if in 2000 the lesson was code is law, I think in 2022 we should recognize business model is law. This is the slogan that should be our focus. Okay, so what's the relevance of this to Wikipedia? I think it's critically relevant, because of course Wikipedia has a business model. But Wikipedia's business model is different from Facebook's, because of core choices by the founders. Wikipedia was grounded in an anti-advertising context. The commitment not to sell ads was fundamental and incredibly important. Because what this means is that the core norms of Wikipedia, that it be free, that it be neutral, that it be well sourced, would not be resisted or overridden by the consequences of an AI-driven focus on maximizing advertising revenues, but could be supported, because the infrastructure of that business model would support these values. There was no systematic AI-driven force against these values, because you can afford to speak the truth in the context of the Wikipedia ecology, because falsehood is not more profitable. This is a business model. It's a business model that protects norms. And those norms we protect because we need to protect them to
derive the most extraordinary innovation on the internet, in my view, which is the ecology of Wikipedia. Now, this example, I think, is critically important, and not remarked upon enough. What could it teach, for example, the news? If we think back to the time when the news served a uniting, integrating, and informative role in American society, what people call the broadcast age of American democracy, when people were focused on the same story through a very small number of media, the business model of that news was not to maximize on the basis of driving engagement. The business model was simply to inform. But when the business model changed, even in cable television we could see it change. What those platforms became was something fundamentally different. This is, I think, the scariest graph in American politics. This is a picture of the ideological content of the cable news networks. And what this shows is that circa 2000, Fox and MSNBC and CNN were basically ideologically the same. But over the period since 2000, we've seen a radical divergence, as those companies learned that the business model of driving efficient engagement by their users is served best by polarizing and separating their base, so that their base becomes committed and driven by the worldview they offer, rather than committed or driven by the truth. Or what could the lessons of Wikipedia teach us for democracy? I mean, obviously the business model for democracy is inherently for candidates to win. But the question we should ask is, at any cost? With micro-targeting, an unnecessary means? Because of course we would have winners in elections without AI-driven micro-targeting, or maybe more accurately AI-driven nano-targeting, that can be targeting even the different moods you, an individual, have, let alone people within your household, or people within your neighborhood, or people within your city, or state, or demographic. We would have winners in democracy without the democracy-destroying culture that this platform is creating. I
think the lesson here should be that we need to choose the business model based upon the AI-maximizing effect that we can see that business model will produce, and avoid the business models with inevitable externalities, with unavoidable externalities, that we can see will weaken or harm our society. But so far we haven't done that. So far we've allowed these infrastructures to develop without any reckoning of what they will do for us and for our nation. I think our challenge is to do something now, before it is too late. Because one might question whether it is too late, and at least I increasingly believe it just might be too late. Okay, so let me try to draw the arguments of this talk together, then. Back in 2000, when I allowed this book to be summarized with the slogan code is law, what I meant to be saying is that we needed to choose our code, to defend our code, to defend the infrastructure that the internet would be, to protect the values we thought important. We needed to choose our code to defend free speech, and privacy, and the free opportunity for innovation that end-to-end, or network neutrality, established. And we needed to defend those values, maybe by regulating code, but certainly by regulating to assure that the environment of the internet would defend those values. That was the focus then, because the most significant influence that would determine whether those values survived was the influence of code. Now the story is different. Now our focus should be the business model. Now we should say the business model is law, or maybe we should say business model eats law. And like with code, what that means is that we need to choose the business model, recognizing its AI-driven dynamic, based on the values it supports and the society that it builds. That must be our commitment, at least if we still can. Thank you for the chance to present this to you, and I'm eager to see the questions, or the objections, or the resistance, or the ideas it might inspire. Thanks very much. Thank you so
much, Larry, for the presentation and sharing this with us. I believe, Isaac, you are moderating the Q&A, so if you would like, take it from here, please. Sure. I'm just, I'm going to actually pass the mic to Jimmy, who asked for a moment. Yeah, just for a second. Thanks so much, Larry. It's great hearing from you, and all this, obviously, I agree completely with everything you said. And it's interesting to me to think about how much of my thinking in the early days was influenced by your thinking, and how some of the decisions we made were consciously, if a bit vaguely, aware of the issues you're talking about, and others, I think, were just dumb luck, that we went down one path instead of another, not realizing the depth of the implications of where we'd end up. And just the one thing I wanted to note, and it's just a comment, really, on some of the values that we strongly represent as Wikipedia and the Wikimedia movement: some of them are very much intimately tied to that choice of business model. So we've always been very, very principled about freedom of expression. We don't bow to government demands to warp the truth that's in Wikipedia. And one of the reasons is we don't have these sort of very hard internal conversations that I assume people like Twitter or Facebook must have, if they want to sort of support freedom of expression, but they say, well, we're going to get blocked in Turkey if this happens. We don't think, oh, we might get blocked in Turkey, so we better modify Wikipedia to suit the government there. We actually think, if we stand up for it, our donors are going to be really happy with us, our community is going to be really happy with us. Our business model actually incentivizes us to take a principled approach to these things, rather than a page-view-based approach. So anyway, fantastic talk. I'm going to let others ask questions. Thank you, Jim. Yeah, so the first question is, what do you think about the Data for Good initiative at Facebook,
and I guess I would make that a little bit broader: the larger data-for-good movement. Well, I mean, what I found most surprising in the Facebook Files, I had the privilege of representing Frances Haugen at the very beginning of that fight that she was having to defend the ability to release that information, was the depth of goodness inside of the company itself. The engineers, and the character of the engineers, and the number of exchanges that those Facebook Files reveal, where engineers are pushing hard to do the right thing and then get overridden by management, basically Zuckerberg or other similar sorts, was really inspiring. So I don't doubt the good faith or the good intent of movements like that. What I am skeptical of is that they can survive unless they've been insulated from the dynamics of the business model. So I want to hear that before I say whether I think it's going to be effective or not. I want to know exactly how they are insulated, what is the way in which they're not going to worry. And it's not like they couldn't do it, but until that's clear, I think we should remain skeptical. Thanks. The next question is from Srijan. Srijan, would you like to ask it yourself, or shall I go? Sure, I can ask. Thanks. Thanks so much, Larry, for your talk. I have a couple of questions. I'll ask one and then give a chance to others. What are your thoughts on the arms race that's there between the malicious actors on social platforms, who are trying to spread hate, propaganda, and all the bad things that you talked about, so these bad actors, and the detection systems that these platforms have that are trying to catch them? There's this arms race, and I wanted to hear your thoughts on that, and the role and responsibility that you feel these platforms should have, or have, in terms of adapting to these scenarios or catching them. Yeah, I, again, I think the Facebook Files, and Jonathan Haidt has summarized this really powerfully in an article he
published in The Atlantic, they reveal that the whole effort to suppress bad content is hopeless. It only ever suppresses a tiny, tiny fraction of the content that we should be worried about, and only in nations which have relatively well-known languages. So, you know, a huge amount of content on Facebook, you know, 90 percent of Facebook's audience is outside the United States, and a huge amount of that content, in the context of developing nations, is in contexts where the platform can't even understand the content, can't even understand the language. And so the reposting in those contexts is a source of extraordinary hate, and, you know, obviously allegations of genocide have been pretty well established in some contexts. So I think that we have to focus not so much at the level of how do we identify the bad content and figure out what to do with it, but how do we undermine the incentive to be spreading the bad content, or the structures that make it much easier. So Facebook discovered that if they just turned off the ability to repost automatically after two reposts, so you can't just simply click a button and repost content if it's been reposted twice before, you'd have to copy the URL and post it directly, then it could eliminate an extraordinary amount of misinformation that was on their platform, simply by slowing the platform down to kind of human speed. You could bring humans into the process of making a better evaluation of whether they should be sharing that content. I think that's the more effective way to think about addressing these problems, rather than imagining some super-AI content moderator who's going to be able to figure out the subtle meaning of hate in these different environments. Thank you. Thanks, Larry. Leo, you're up next, would you like to ask? Sure. So, Larry, you talked about business model is law, and I'm thinking about the situation of the governments now, who still believe and see profit in falsehood, and how is this focus on
business model is law going to play at the same time that these seemingly very strong governments are also involved in investing more and more in falsehood? Yeah, I mean, so when I put this slogan out, it's going to cause, I'm sure, as much confusion as code is law caused. So, you know, I don't take responsibility for the confusion, maybe I should, but when I put that out, I don't mean to say that this is the only problem we should be focusing on. It's a particular problem that we should be focusing on right now, in a world where antitrust has not yet addressed these extraordinarily powerful, dominant platforms, and it's not clear it will in the next 10 years. It's still amazing to recognize that the last major antitrust case involving the internet, I was involved with it, was the Microsoft case, where for a brief moment, a nanosecond, I was a special master in that case. But the idea that that was the last time we had serious antitrust review of these platforms is astonishing. But in addition to the problem caused by the dominant power of the platforms, we have to worry about governments, and especially, again, governments in contexts where violence can be even more effectively delivered. And of course, the thing to recognize is the conjunction of interests between governments and these platforms. I mean, if the governments are using the platforms to spread their hate, or to spread their authoritarian falsehoods, the platforms profit from that too, to the extent that the platforms at least are being compensated to do that. So I think the thing we need to encourage is locations on the internet that we can reliably understand as locations of truth. Again, you know, Wikipedia is kind of unique in that. It is astonishing the number of my colleagues who, 10 years ago, would have been so skeptical of Wikipedia as a source of anything, but who now are open about the value of Wikipedia in establishing a kind of foundation for understanding and truth. And, you know, not that I am eager, you know, I'm
not trying to break your monopoly on truth, but I would like a thousand Wikipedia-like locations to be out there, where people can believe that they're stepping on something that has a foundation in reality, as opposed to something with a foundation in the interest of either a corporation or the political interests of a government. All right, I'll pass it on to Bob now, if you'd like to ask your question. Yeah, thank you. Thanks, Isaac, and thanks, Larry. I was wondering, I wanted to pick your brain about the concept of recommender systems, because that's at the core of a lot of this, right? They nano-optimize to people, and to the moods of people, by the hour and minute and so on. But they also seem a necessity to even deal with all the content that's created all the time, right? So we can't really compare it to the cable news era, because there wasn't this amount of content to be curated. Now it's just too much. So I'm wondering, to what degree, what are the options we even have? Because sure, we could try to turn off the recommender systems, but then we couldn't cope anymore. And to what degree are these just baked-in consequences that don't need any malice even, they just come with the technical implementation of these systems? And do you think that the challenge needs to shift to actively working to keep the recommender systems usable? But where's the line, how can we say this is now bad versus good recommending? Yeah, it's a great question. I mean, I think the first thing to recognize is, you know, for most of the history of humanity, we didn't actually have to deal with all the information that was out there. Most of the information that was out there, the comments of people, the fights of your neighbors, all of that was invisible to you. And we got along pretty well. I mean, we could run democracies, you know, in Europe, in Japan, in Korea, and the United States, we could run democracies quite effectively without everybody having access to all information. And now
we've turned on the fire hose of all information, and now we're kind of trying to figure out how to deal with it. And the challenge that I'm first identifying is: we turned on the fire hose, but with a very poisonous incentive for the fire hose to be tweaked to drive engagement, against the background of humans who, unfortunately for us, are turned on more by outrage and hate than by poetry and well-thought-out arguments. You remember, well, I don't want to take this too far as a diversion. Okay, so you're right. I think that even in the best world we're going to have structures of recommendation engines, and I think the most we can do is, first, monitor the incentives of the recommenders. Like, are you a recommender? I don't think there's any problem with being a right-wing recommender, or a left-wing recommender, or a recommender for, you know, good food or whatever. But you might be concerned when you understand that the recommender has a system of incentives that is driving them away from recommending the best right-wing content, as opposed to the right-wing content that's paid for by X or Y. We at least need to understand that. So the structure of incentives is number one. And number two, we have to recognize that it's a very complex system that we have implemented, and we need lots of data to understand the sensitivity of any of these architectural choices to the reality that they produce. So transparency about the data, and how it's actually mattering, would be extremely important. Of course, the platforms have so far resisted the call of researchers to have access to their data, to be able to make an evaluation of, like, you know, what is the consequence, really, of turning on the repost all the way up to ten different reposts? What is the real consequence of that? Instead they have to rely on their own researchers, and of course the consequence of the Facebook Files is, I'm sure, that Facebook has been doing a lot
less internal research, because they're afraid of those data being revealed to the public. Well, somebody needs to be doing that research, and I obviously support the idea of making the data accessible to researchers who can be making the evaluation, so that we can learn which architectural choices, against the background of certain business models, are safer than others. Because I don't think we know that.

Thank you. The next question comes from Octi. Octi, I'll give you a chance if you want to ask it yourself; otherwise I'll proceed.

Okay, hi, yes, this is Octi. Thanks for the opportunity, and a great talk. I was wondering, you know, also since Jimmy is here: since I started using Wikipedia, I always wondered why you don't have, like, a tiny box on each Wikipedia article with an ad, in as harmless a way as possible, verified. You know, for example, if you listen to NPR, they have these tiny bits of selected commercials, and also, whenever the news is about those supporters, they declare it. So I feel like, and I'm not an expert in this space, I'm a computer scientist, but I feel like there could be a way, in theory, to make the business model less harmful, and I was wondering what your thoughts would be. And if the answer to this is yes, it's possible, then wouldn't it be possible to have regulations that every single ad-supported social media platform or internet platform should follow, strict regulations, so that they don't end up being so harmful?

Well, I think that your pointing to NPR is a really important recognition, because, in fact, there have been lots of claims that NPR's acceptance of sponsorship by certain entities, like oil companies and the like, has affected their reporting; that the managers, in the context of deciding which things ought to be covered, can't help but worry about whether the things they cover might affect their ability to get
sponsorship in certain areas. So, you know, even though I think our perception of NPR, at least in the United States, at least among us liberals, is that it's a good site, it's also got its integrity or corruption problems. And, you know, I'll let Jimmy speak for Wikipedia, but I would be terrified by the idea of Wikipedia embedding any advertising dimension into what they're doing. It's not because it's disruptive to the reader; I mean, of course you could frame it so it's just a tiny little box to the side. But the question is, like, you know, as the middle managers of Wikipedia are trying to figure out how to maximize revenue from advertising, what decisions do they make? What kind of recommendation decisions do they make? Or when they tell the AI that, you know, you might develop inside of Wikipedia, here's one of our maximizing dimensions, what does the AI do? I mean, I think all of those are very difficult choices to make and to get right.

But then the third point is: let's imagine you came up with the best possible structure of integrity for advertising. Why not just impose it on all other sites? Well, I do think we need to think about regulating targeted advertising in certain contexts on other platforms, but in the United States that's going to be extremely difficult, because the First Amendment has been raised as an absolute bar to this kind of regulation. A decade ago, my friend Eugene Volokh was hired by Google to write a paper that basically said algorithms are speech, and therefore you can't regulate algorithms, you can't muck about with that at all, because that's core political speech. I think the argument's crazy, but if you asked, you know, what's the standard of courts today, most courts would take something closer to Eugene's view than to my view. And so I think in the United States we're going to see a lot of constitutional limitations on the ability of the government to deal with these questions. You know, I've been saying we
ought to be regulating the business model; people say there's a constitutional right to your business model. I think that's crazy, but, you know, I think a lot of things are crazy, and the courts see it differently. But I think there is a chance in Europe in particular, which has taken a lot of aggressive steps, again, I think, pushed by the Facebook Files, to experiment with different kinds of regulations. And I'm hopeful that if Europe hits on the right kind of regulations that minimize the poison, at least in the context of democracy speech, then other countries will find a way to integrate them into their own regulations.

Jimmy, if you're still here, did you want to make a response as well? All right, I will go on to the next question, then. It's coming from Matt, and he asked: are you able to speak to Wikidata's use of CC0 licenses, and the commercial appropriation of CC0-licensed metadata in applications such as the Google Knowledge Graph?

I'm a little bit hesitant, because I don't know the facts well enough. My own view is, you know, data is a complicated licensing space, because of the difference between the Europeans and the Americans about whether the underlying resource can be licensed at all. And my own view is very libertarian about it: we ought to be enabling as much data to be usable as possible, and to the extent there are privacy concerns, we should not be looking to copyright law or licenses to deal with those privacy concerns. I think there is an essential need for privacy regulation, and an essential need for data-misuse regulation, but that, in my view, is just independent of this crude hammer of copyright. So my general bias is in favor of CC0 for as much as possible. And, you know, if it helps commercial entities do commercial things, then I think that's, you know, in a Richard Stallman sense, that's what freedom means.

Thank you.
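The earlier point about recommender incentives, that the same feed serves very different content depending on what objective it is tuned to maximize, can be made concrete with a toy sketch. The Python below is entirely hypothetical: the posts, field names, scores, and weights are invented for illustration, and no real platform's ranking works this simply.

```python
# Toy illustration (not any platform's real algorithm): the same posts,
# ranked under two different objectives. All scores are invented.

def rank(posts, score):
    """Return post ids ordered by a scoring function, highest first."""
    return [p["id"] for p in sorted(posts, key=score, reverse=True)]

posts = [
    {"id": "outrage", "p_click": 0.9, "p_report": 0.4, "source_quality": 0.2},
    {"id": "essay",   "p_click": 0.3, "p_report": 0.0, "source_quality": 0.9},
    {"id": "recipe",  "p_click": 0.5, "p_report": 0.1, "source_quality": 0.6},
]

# Objective 1: pure engagement -- maximize predicted clicks.
def engagement(p):
    return p["p_click"]

# Objective 2: a blended value score that penalizes likely-reported
# content and rewards source quality (the weights are arbitrary).
def blended(p):
    return p["p_click"] - 2.0 * p["p_report"] + 0.5 * p["source_quality"]

print(rank(posts, engagement))  # ['outrage', 'recipe', 'essay']
print(rank(posts, blended))     # ['essay', 'recipe', 'outrage']
```

Under the engagement objective the outrage post wins; under the blended objective it drops to last, even though nothing about the content changed, only the incentive encoded in the scoring function.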
Pablo, you're up next. Would you like to ask your question?

Thanks. As I said, thanks for the talk. So, speaking of this model: could a different form of monetizing the internet, and when I say monetizing I mean quantifying value, that value that will be maximized by artificial intelligence or by recommender systems, a way of quantifying value that is not based on generating clicks but on more sophisticated forms of value, have a positive impact on the dynamics of web platforms?

Well, I think that, you know, there's been a long tradition of trying to think of these alternative structures. Cory Doctorow's Whuffie, early on, was imagining a way of rewarding behavior that wouldn't be linked to something like clicking. And I think what we ought to be committed to is the idea of experimenting with lots of different ways to try to do that, to monetize and to recognize support for this infrastructure. We should also be open to a very traditional answer, which is, you know, public support for infrastructures. The nature of public support for infrastructures, at least ideally, and obviously governments are not always good at this, is to remove the support from the question of the corruption of the integrity of the platform. And so I would be in favor of trying all of these and looking for all of these, but in a world where we have transparency to understand how, in fact, these different regimes are affecting behavior and the spread of information.

Daniella, you're up next. Would you like to ask?

Oh yes, thank you. Sorry for the noise; there is a storm outside. So I wanted to ask: the market and the law, as you said, are powerful forces that are able to influence the evolution of the internet, sometimes in good ways, for example by financing some innovative technologies or protecting the rights of users, but sometimes in bad ways. Projects like Wikipedia and Wikidata show that it is possible to have self-regulation, where we also have the community as a very powerful force that is able to, in some way, interface with the other forces and stand by itself to drive its own destiny. So I wanted to ask: in your opinion, what is the best way to support and incentivize such a model, and how can we make sure that such a model can survive and thrive in the current environment?

Yeah, I think it's an incredibly important resource to deploy and to rely on, but we need to be constantly skeptical about the content of the, quote, community, because sometimes communities can themselves become corrupted, so that interested parties populate the community with, you know, quote, community members who then advance the interests of those interested parties, as opposed to just the community. So I think that as long as that dimension of awareness is there, I think it's a very valuable thing to add.

So, I apologize. I thought we were going until quarter after, and I've got a cab outside honking; you might have just heard the honking. So I have to flee.

Don't worry. Thank you so much, Larry.

Okay, great. Thank you, guys. Thank you.