Thank you, everybody. Welcome to the Carnegie Endowment for International Peace. My name is Tim Maurer. I co-direct the Cyber Policy Initiative here at the Carnegie Endowment. And together with David Brumley, who's the director of the CyLab Security and Privacy Institute at Carnegie Mellon University, we're delighted to welcome all of you here in person, and those of you joining us on the livestream online. The hashtag for this event is #CarnegieDigital. I now have the pleasure of introducing Ambassador Bill Burns for the welcoming remarks, and I look forward to spending this day with you. Thank you very much.

Good morning, everyone, and welcome again. Let me begin by congratulating Tim and David and their Carnegie Endowment and Carnegie Mellon colleagues for putting together this extraordinary colloquium. Also delighted to launch today's event is Subra Suresh, whose remarkable leadership of Carnegie Mellon reminds me of how fortunate I am to be a part of the extended Carnegie family. As president of the Carnegie Endowment for nearly the past two years, and as a diplomat for 33 years before that, I've had the privilege of welcoming heads of state, military generals, foreign ministers, university presidents, and distinguished thinkers and doers of all stripes. But I've never had the privilege of introducing a robot, let alone several. So it's a great pleasure to welcome Snakebot and friends to today's event. Like all of you, I look forward to getting a glimpse of our robotic future later in today's program. Robots are not today's only first. Today is also the first of two events we're holding with Carnegie Mellon University, one of the world's premier universities and a fellow member of the impressive group of institutions founded by Andrew Carnegie more than a century ago. Andrew Carnegie created these institutions at a critical historical juncture. The foundations of the international order that had prevailed for most of the 19th century were beginning to crack. Catastrophic war and disorder loomed. And the last great surge of the industrial revolution was transforming the global economy. The Carnegie Endowment, together with a number of its sister organizations, sought to help establish and reinforce the new system of order that emerged out of the two world wars, a system that produced more peace and prosperity in the second half of the 20th century than Andrew Carnegie could ever have imagined. It's hard to escape the feeling that the world is once again at a transformative moment. Profound forces are shaking the underpinnings of international order: the return of great power rivalry and the rise of conflict after many years of decline; the growing use of new information technologies, both as drivers of human advancement and as levers of disruption and division within and among countries; the shift of economic dynamism from West to East and growing pressures of economic dislocation and stagnation; and the rejection by societies in many regions of Western-led globalization and the embrace of an angry, fortress-like nationalism. Here at the Carnegie Endowment, we're trying to meet these challenges head on across our programs and our six global centers. We focus this colloquium and our partnership with Carnegie Mellon on one of the most significant of these challenges: the intersection of emerging technologies, innovation, and international affairs.
Technology's capacity, as all of you know very well, to simultaneously advance and challenge global peace and security is increasingly apparent. In too many areas, the scale and scope of technological innovation is outpacing the development of rules and norms intended to maximize its benefits while minimizing its risks. In today's world, no single country will be able to dictate these rules and norms. As a global institution with deep expertise, decades of experience in nuclear policy, and significant reach into some of the most technologically capable governments and societies, the Carnegie Endowment is well positioned to identify and to help bridge these gaps. Earlier this year, we launched a Cyber Policy Initiative to do just that, working quietly with government officials, experts, and businesses in key countries. Our team is developing norms and measures to manage the cyber threats of greatest strategic significance. These include threats to the integrity of financial data, unresolved tensions between governments and private actors regarding how to actively defend against cyber attack, systemic corruption of the information and communication technology supply chain, and attacks on the command and control of strategic weapons systems. Our partnership with Carnegie Mellon seeks to deepen the exchange of ideas among our scholars and the global community of technical experts and practitioners wrestling with the whole range of digital governance and security issues. Today's event will focus on artificial intelligence and its implications in the civilian and military domains. Tim and David have curated an exceptional set of panels with diverse international and professional perspectives. On December the 2nd, we will reconvene in Pittsburgh for an equally exciting conversation on internet governance and cybersecurity norms. Our hope is that this conversation will be the beginning of a sustained collaboration between our two institutions and with all of you. There is simply too much at stake for all of us to tackle this problem separately. We can and indeed we must tackle it together if we hope to sustain Andrew Carnegie's legacy. I'd like to conclude by thanking Vartan Gregorian and the Carnegie Corporation of New York for making this colloquium possible and for everything they've done and continue to do to contribute to a more peaceful world. And let me now thank and welcome to the stage Subra Suresh, an extraordinary leader of an extraordinary institution and a terrific co-conspirator in this endeavor. Thank you all very much.

Thank you, Bill. I also want to thank Tim and David for all their efforts. Welcome to the inaugural Carnegie Colloquium, part of an initiative to inform and shape global norms and modes of cooperation in artificial intelligence, machine learning, and cybersecurity. First and foremost, I would like to thank Ambassador Bill Burns for hosting this event today. As two organizations that reflect the strong legacy of Andrew Carnegie, Carnegie Mellon University and the Carnegie Endowment for International Peace have formed a powerful partnership to examine technology and diplomacy across a set of emerging areas critical to our collective future. It's my sincere hope that this event, as well as the follow-up colloquium, which will take place at Carnegie Mellon University on December the 2nd, form the basis of an even broader and closer relationship between our two institutions. Let me also add my thanks to Dr.
Vartan Gregorian, President of the Carnegie Corporation of New York, who provided support for both of these events. In fact, this grew out of a conversation that Ambassador Burns and I had a few months ago, and Dr. Gregorian was very enthusiastic and supportive of the effort. To understand Carnegie Mellon University's importance to artificial intelligence, machine learning, and cybersecurity, we must first recognize CMU as a place where pioneering work in computer science and artificial intelligence took place decades ago. Ever since Herbert Simon and Allen Newell created the ingredients of artificial intelligence in the 1950s, before the terminology was even broadly recognized, CMU has remained at the cutting edge of this field. Carnegie Mellon took the bold step a generation later to create its Software Engineering Institute, which has served the nation through the Department of Defense and served industry by acquiring, developing, operating, and sustaining innovative software systems that are affordable, enduring, and trustworthy. Designing safe software systems and attempting to recreate the learning abilities of the human brain were natural progressions toward two of the modern world's most pressing concerns: cybersecurity and privacy. To meet this challenge, Carnegie Mellon's cybersecurity and privacy research is multidisciplinary, encompassing a broad range of disparate disciplines. It incorporates faculty from across the university with strengths in areas such as policy development, risk management, and modeling. Our aim is to build a new generation of technologies that deliver quantifiable computer security and sustainable communication systems, and the policy guidelines to maximize their effectiveness. CMU's premier research center on the subject is CyLab, a visionary public-private partnership that has become a world leader in technological research, education, and security awareness among cyber citizens of all ages. By drawing on the expertise of more than 100 CMU professors from various disciplines, CyLab is a world leader in the technological development of artificial intelligence and cyber offense and defense, and is a pipeline for public and private sector leadership in organizations as varied as the NSA and Google. The work of CyLab Professor Marios Savvides, for example, was featured in a NOVA program, in a 60 Minutes report on machine learning, and in many other venues. In particular, Professor Savvides' facial recognition program helped match a very blurry surveillance photo of the Boston Marathon bomber against a database of one million faces. You will have an opportunity to see Professor Savvides' work in action today during the lunchtime demonstrations downstairs. Today you will also hear from CyLab's director, David Brumley, who just a couple of months ago led the CMU team that won this year's Super Bowl of hacking, DARPA's $2 million Cyber Grand Challenge. Congratulations, David. Just a week after that, David took a team of CMU students to DEF CON in Las Vegas, where they won again in another hacking competition. Finally, you will hear from Andrew Moore, the dean of our School of Computer Science, who was also recently featured in a 60 Minutes report on artificial intelligence. I would also like to acknowledge Dr. Jim Garrett, the dean of the College of Engineering at Carnegie Mellon University, who joins us, along with Rick Siger, who played an important role in helping put together this event between Carnegie Mellon and the Carnegie Endowment.
CMU's advancements in artificial intelligence and cybersecurity will be highlighted in the colloquium today, which is an outgrowth of the partnership between our two organizations. You will learn more about this in the two panel discussions today. We hope that these discussions on the future of consumer privacy and autonomy in military operations will lay a strong foundation for future colloquia and will better inform ongoing thinking on technology and diplomacy in these critical areas. I would like to welcome you to the colloquium today, and I would also like to close by thanking again Ambassador Burns. Thank you.

So we will now get started with the first panel discussion. Before we start, let me briefly outline the two key ideas that have been driving this event. When David and I started the planning for this, the first idea was essentially to bring together the technical experts of Carnegie Mellon University and the policy experts from the Carnegie Endowment. That is why each panel is preceded by a setting-the-stage presentation by one of the technical experts from Carnegie Mellon University, followed by the panel discussion. The second idea was to draw on the Carnegie Endowment's global network to bring in people from around the world for the panel discussion. So I'm particularly pleased not only to welcome our partners from Pittsburgh, but also to welcome, for example, Yuet, who has come all the way from Hong Kong. Speaking of Pittsburgh, if you're interested in joining the event on December 2nd, please make sure to drop your business card off outside or send us an email. I would now like to introduce Andrew Moore, who's the Dean of the School of Computer Science at Carnegie Mellon University. The School of Computer Science at Carnegie Mellon has repeatedly been ranked the number one graduate program by U.S. News in the past few years. Prior to becoming Dean a few years ago, Andrew was Vice President of Engineering at Google Commerce; he has been on the faculty of CMU since 1993 and was named a fellow of the Association for the Advancement of Artificial Intelligence in 2005. Keeping with the global theme of this event, he originally hails from Bournemouth in the United Kingdom. It's a pleasure to have you, Andrew.

Thank you very much, Tim. So this is a really interesting and exciting time in the world of artificial intelligence for many people. For regular consumers, it's got great promise. For companies, it is an absolutely critical differentiator. And for societies in general, we do have options here to make the world a much better place through careful application of technology. What I'd like to do to set the stage is talk about two things which at first sight sound like clear goods: personalization, and I'll explain what that means, and privacy, two extremely important issues. Then I'm gonna run through a series of cases where these two great principles start to bump into each other, and they will get increasingly sophisticated. And by the end of this stage setting, I hope to have everyone kind of squirming in their seats, because it's so annoying that two wonderful and important things, privacy and personalization, which seem like clear goods, lead us to very difficult societal and technical challenges. So that's what I'm gonna try to do in the next couple of minutes. All right, so let's begin with privacy.
It's a clear right, and almost all of us would agree that anyone who intentionally violates privacy by revealing information which they gained in confidence is doing something bad. And there are laws in our international legal system and in all our domestic legal systems which deal with that issue. So that's important. Personalization, now, is probably one of the most critical features of a world based on artificial intelligence and machine learning. And I'll explain some places where it's obviously good. Many great institutions, including Carnegie Mellon under Dr. Suresh's leadership, are pushing very hard to understand how we can help children learn more effectively. If it turns out that I as a child have problems understanding when to use the letters CK while I'm writing, it makes a lot of sense for an automated tutor to personalize its instruction to me so that it can really practice that particular issue with me. No doubt about it, that seems like a sensible thing. If I'm a patient in a hospital and it becomes pretty clear that, unlike most patients, I cannot tolerate more than a certain amount of ibuprofen within 20 minutes of a meal, then as we learn that, of course it makes sense to personalize my treatment. So that is good. And at the moment, there's no difficulty involved. Here's where it gets interesting. Some aspects of personalization, like, for instance, how I'm likely to react to some liver cancer medications, can't be personalized by just looking at what's happened to me over my lifetime, because probably this is the very first time I've ever had those medications. When you're an artificial intelligence engineer building a personalization system, the way you power it is to find out about me and then ask the question: to make things good for Andrew, what should I do, and what can I learn from other people like Andrew? And that's suddenly where you begin to see this big conflict. "Other people like Andrew" is something which can help me a lot, because if it turns out that everyone who's over six foot three with a British accent is virulently opposed to, for example, the Electric Light Orchestra, it's an extremely useful thing to know so that I can make sure that's never recommended to me. So it makes sense to use other people's information in aggregate to help personalize things for me. And in many examples, that can really make things better. Recommendation of movies is an obvious one. And then when you start to think of information on the web: for example, if I like to browse news every day, we might notice that I'm typical of people who in the mornings are very interested in policy-related news, but in the evenings, when relaxing, tend to like technology-related news. That's useful information to help make sure that I'm a happier person when I'm reading the news. So this is the upside of personalization. Personalization uses machine learning. Machine learning is exactly the technology which looks at data and figures out the patterns, to usefully say what other people like Andrew would want, and what it even means for someone to be similar to me or dissimilar from me. It's the thing which powers ads in Gmail. It's the thing which powers movie recommendations, and it's the thing which helps the personalized medicine initiative figure out how to treat you, who probably need different treatment from someone else.
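To make the "other people like Andrew" mechanism concrete, here is a minimal sketch of user-based collaborative filtering, the simplest form of the technique Moore is describing. All users, items, and like/dislike scores below are invented for illustration.

```python
# Minimal sketch of "people like Andrew" personalization: user-based
# collaborative filtering over toy like (+1) / dislike (-1) data.
# Every name, item, and score here is invented for illustration.
from math import sqrt

ratings = {
    "andrew": {"policy_news": 1, "tech_news": 1, "elo_album": -1},
    "beth":   {"policy_news": 1, "tech_news": 1, "robotics_talk": 1},
    "carol":  {"policy_news": -1, "elo_album": 1, "gossip_feed": 1},
}

def similarity(u, v):
    """Cosine similarity over the items both users have rated."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    return dot / (sqrt(sum(u[i] ** 2 for i in shared)) *
                  sqrt(sum(v[i] ** 2 for i in shared)))

def recommend(user):
    """Rank unseen items by how much similar users liked them."""
    mine = ratings[user]
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = similarity(mine, theirs)
        for item, liked in theirs.items():
            if item not in mine:
                scores[item] = scores.get(item, 0.0) + sim * liked
    return sorted(scores, key=scores.get, reverse=True)

print(recommend("andrew"))
# -> ['robotics_talk', 'gossip_feed']: beth agrees with andrew, so her likes
#    rank high; carol disagrees with him, so her likes are pushed down.
```

The privacy tension Moore raises is visible even in this toy: the recommendation for one user is computed directly from other people's data.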
And now I'm gonna go through four examples of increasing squirminess showing why this stuff is hard, why privacy and personalization actually start to conflict with each other. The first case is one where I don't think we actually have any trouble with policy. It's a simple case of what we'd like to think society is going to do. If someone publishes unauthorized data about me, they are breaking the law, and that should be remedied. That is the simplest case, and the responsibility there, in a good company or a well-functioning government, is that you actually have the legislation in place, you have clear rules, and if somebody does, for example, look up the bank account of a famous celebrity just so they can blog about it, that person's gonna get fired. And in some cases, if the consequences are serious, there's a more significant penalty. Now, cases two, three, and four are ones where it starts to get a little fuzzier. Case two: someone uses your data in a way that you didn't expect, but it turns out you kind of agreed to it. A famous example is a firefighter in Everett, Washington, who was suspected of actually starting fires, and one of the ways in which the police really came to understand that this was a serious risk was that they went to his grocery coupon supplier and looked at the things that this particular person had purchased in the last couple of months, and they found a huge number of fire-starting kits. In another case, someone was suing a supermarket over a slip-and-fall accident, and part of the supermarket's defense was to produce sales records for that person showing that they were buying excessive, in the supermarket's eyes, amounts of alcohol. Now, neither of those was actually illegal; both were covered under the terms of service and the laws of the land regarding law enforcement use of data. But that's difficult. At that point, we've already hit something where the general public is going to be very uncomfortable, and it's the thing which makes us all feel uneasy when we sign these terms of service. Those are difficult ones, but now I'm gonna get to the ninja-difficult ones, which are just beginning to emerge and make things very interesting for artificial intelligence engineers who are trying to do good but can quite easily accidentally do bad. This next example is where we're using machine learning to really help people, but inadvertently, accidentally, the machine learning system starts to look like a bigot, or make decisions which most of us would think a reasonable human would not make. A good example of this comes from a member of Jim Garrett's faculty in the College of Engineering at Carnegie Mellon University, Anupam Datta, who ran a little experiment with Google's advertising system. He looked at the ads which were shown in response to a query about job searches, and he used Google's personalization system to give exactly the same queries to Google when the revealed identity of the user was male and when it was female. And horribly, it turned out that the ads shown when the person was revealed to be female were for jobs with lower pay. You look at that, and anyone would think that if that machine learning algorithm were a person, they'd be both a jerk and, in fact, doing something illegal. Just this morning there was an example with Facebook, whose introduction of an ethnic-affinity field in its advertising system has fallen afoul of a very similar issue. Now why would a machine learning system do this?
None of the engineers, at least as far as I know, and I very much assume, had any intent of causing harm. The reason was that the machine learning system had simply observed in the prior data that, all else being equal, which is a very dangerous phrase to use, the women who were clicking on ads tended to click on ads for lower-paying jobs than the men. So this machine learning algorithm which we humans built has got a kind of defense. It can really say: I am just showing people what they're most likely to click on. It's not my fault if society is set up in such a way that my data shows that women are clicking on lower-paid ads. Now, this is complicated, and I don't have the right answer for you. If it helps, I should note that this experiment is particularly unrealistic in the sense that it's very rare that a machine learning system sees only an identified gender. Usually the machine learning system sees many other things about a person: the past history of the kinds of things that person wants to do, other interests of that person. And so you actually find that there are other features of that person much more important than gender or race for showing their particular interests. It still makes us feel uncomfortable. So that is what I would regard as the most difficult part of machine learning and personalization at the moment. It is very hard, and I do not know of a piece of research that I fully trust to prevent these things from being, if you like, bigots. Finally, I'm gonna mention the ninja-hard case. And this is pretty simple. It is the case that if you really want to preserve privacy, you can cost other people their lives. There are examples of this in many law enforcement situations, but another simple one is in medicine. Suppose you're involved in a drug trial, and you had 20 hospitals all trying out some new drug treatment on 20 different patients each. Then it is definitely in the interests of those patients for the hospitals to pool their data, to actually share data with each other, so that one central body can do the machine learning with a large N for statistical significance, to find out if the treatment is working or not. Now, if you decide not to do that, because you're so worried about privacy that you're not going to let the hospitals reveal details about the patients to each other, then you can still actually get some statistically significant results as to whether the medication is effective or not. It's just going to take you considerably longer; many more patients will have to be in the trial, and you'll have to wait longer before you get the answers. And Matt Fredrikson, a computer science faculty member at Carnegie Mellon, has shown some very clear cases of the actual analysis of privacy levels versus lives saved, or years of life saved. And unfortunately, and it's exactly what this room doesn't want to hear, it's a trade-off; there's a trade-off curve there. It's almost certain in my mind that we don't want to be on either extreme end of that trade-off curve, but we do have to decide where we are within the center of it. So hopefully we're squirming. I've tried to show you that no extreme position is good, neither "personalization is good, screw privacy" nor "privacy is good, screw personalization." Neither of those extreme positions is useful. We have to use our technological smarts and our policy smarts to try to find the right place in the middle. And that's the setup for this panel discussion. Thank you.
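Moore's pooling-versus-privacy trade-off can be illustrated with a small simulation. The sketch below is not Fredrikson's actual analysis; it is a generic example, with invented numbers, of a standard differential-privacy mechanism (Laplace noise on counts) widening the uncertainty of a pooled estimate, so that more patients are needed for the same confidence.

```python
# Sketch of the privacy-versus-statistical-power trade-off. Twenty
# hospitals each report how many of their patients recovered under a new
# treatment. In the "private" variant, each hospital adds Laplace noise to
# its count before sharing it. All numbers are invented for illustration.
import random

random.seed(0)
HOSPITALS, PATIENTS_EACH, TRUE_RATE = 20, 50, 0.6

def pooled_estimate(epsilon=None):
    """Pooled recovery-rate estimate; epsilon=None means share raw counts."""
    total = 0.0
    for _ in range(HOSPITALS):
        count = sum(random.random() < TRUE_RATE for _ in range(PATIENTS_EACH))
        if epsilon is not None:
            # Laplace(scale=1/epsilon) noise, sampled as the difference of
            # two exponentials; a patient count has sensitivity 1.
            count += random.expovariate(epsilon) - random.expovariate(epsilon)
        total += count
    return total / (HOSPITALS * PATIENTS_EACH)

def spread(epsilon, runs=2000):
    """Empirical standard deviation of the estimate over many trials."""
    ests = [pooled_estimate(epsilon) for _ in range(runs)]
    mean = sum(ests) / runs
    return (sum((e - mean) ** 2 for e in ests) / runs) ** 0.5

print("spread of estimate, raw pooled counts:  %.4f" % spread(None))
print("spread of estimate, noisy, epsilon=0.2: %.4f" % spread(0.2))
# The noisy estimate comes out roughly twice as spread out, so the trial
# needs noticeably more patients for the same confidence.
```

Turning the privacy knob (epsilon) down makes the estimate noisier and the required trial larger; turning it up sharpens the estimate at the cost of individual privacy, which is exactly the trade-off curve with no comfortable extreme.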
At this point, I would like to introduce our panelists. Yuet Ming Tham, from the law firm of Sidley Austin, is an expert on cross-border compliance and international agreements regarding data use; if you want to come up to the chair. Paul Timmers, the director of the Sustainable and Secure Society Directorate at the European Commission, has previously been head of its ICT for Inclusion and eGovernment units. So we have experts here from Asia and from Europe who are helping us discuss this issue. Next, I'm pleased to introduce Ed Felten, a hero to all computer scientists because he's a computer scientist, and the deputy director of the White House Office of Science and Technology Policy, who has been leading a great deal of intense strategic thinking about artificial intelligence and what it means for the next few years. And then I would like to introduce our moderator, Ben Scott from New America, who is a senior advisor to the Open Technology Institute and also a non-resident fellow at the Center for Internet and Society at Stanford. Good to meet you. Thanks.

Thank you very much, Andrew, for that introduction. We're gonna jump right into a discussion with our expert panelists who, as you see, strategically represent different regions of the world and so can offer perspectives on these questions from across the globe. If I may quickly summarize the policy conundrum that sits beneath the cases that Andrew laid out, it is this. Machine learning and AI benefit from the personalization of data used in learning algorithms. Personalization requires large data sets to compare individual cases to lots of other cases. It requires the collection and processing of data at a large scale. That raises two key questions: what are the rules governing the collection and processing of data for commercial uses of AI, and what are the rules for the collection and processing of data for government uses of AI? Underneath that sits the basic question of algorithmic accountability. If you decide that it is unacceptable to have an algorithm that reflects gender bias in employment practices, how do you regulate that? And if you decide you regulate that at the national level, how do you coordinate that at the international level when data markets are global? These are the problems that we are all facing in government. And I think it's fair to say that the technological reach of machine learning and artificial intelligence has exceeded the grasp of our policy frameworks to contain and shape these new forms of digital power in the public interest. So what I'd like to do is start by setting a baseline of where different parts of the world are coming down on these issues, and what the building blocks look like at the regional level. There have been lots of efforts in the US to address these questions. There have been lots of debates in the European Union to address these questions. I would say less so in Asia; I'll be interested to hear more from you about what's happening in Asian markets. But I wanna first begin by allowing all of our panelists to speak from their own perspectives about what's happening in this field in their region: what is the approach to regulating or establishing a policy framework for these most difficult questions, big data collection and the application of artificial intelligence? Maybe I'll begin with you, Ed.

Okay, well, first I should start with the disclaimer that I'm not a lawyer, so do not treat me as an authority on US law on this issue.
But I can talk about the policy approach that has been taken in the United States, which is rooted in the longer-term policy approach that the US has taken with respect to privacy. That involves, generally, regulation of certain sectors where privacy is particularly salient, whether that involves things like healthcare or practices related to credit and employment and so on. And it also involves a broader consumer protection framework around privacy that is rooted in notions of notice and consent. So we have a framework for privacy which the US has used and is continuing to use, and that involves both laws and the enforcement of those laws. When it comes to the particular issues that are raised by AI and machine learning, there are a bunch of things that have been done. I'd point in particular to the work that the administration has done over the last few years on big data, and then more recently on artificial intelligence. In both of those areas, and I think they're tightly intertwined, the administration has engaged in a series of public outreach activities and then published reports. The idea being to try to drive a public conversation about these policy challenges: both to move the debate about making rules and making policy in a fundamentally positive way, and also to heighten attention to, and interest in, these issues. We've tried to drive a public debate because I believe strongly that the institutions, the companies, that are collecting data and using it in this way almost universally want to collect it, use it, and engage in AI activities in a way that is responsible and positive and sustainable, because I think people recognize that if you push the envelope too much, the public will not allow that to stand. And so we've really tried to drive a public discussion, we've tried to raise the level of dialogue, and that's been fundamentally one of the areas in which the administration has worked. We also recognize the ways in which these issues operate across borders, and the need to work with international partners to make sure that, as data flows across borders and as citizens everywhere encounter the companies and institutions of other nations, we can work together reasonably and we have an international system for dealing with these things.

Thanks. Paul, what's the view from Brussels?

The view from Brussels. Perhaps I should put in another kind of disclaimer, in a certain sense, which is this: if you look at what is happening in policy development, whether that is engagement with stakeholders and public debates like this one, or whether you go in the direction of official public policy or law and regulation, you have to set it against the reality of what is happening around technology and around the use of technology. So I think the examples that Andrew gave are really interesting and challenging. His fourth example, the case where machine learning doesn't have access to your personal data even if access would be good for other people, is a very interesting case, because you have to ask: how could you apply today's frameworks, including law, to that? To a degree, law is pretty strong in the European Union, based upon fundamental rights, and we would look at fundamental rights. But fundamental rights are not absolute.
So public health is one of those reasons for which you can actually start using someone's personal data, also individual data, but with appropriate safeguards, and that may mean that you put a challenge to technology. Can you anonymize, pseudonymize? Can you encrypt sufficiently? Can you use new technologies like blockchain so that you have accountability after the fact, after the data has been used? So it is, I think, that dialogue that we are also very much looking for on the European scene. It must be said, fundamental rights are very, very important in the European setting. So if we say privacy, privacy is a fundamental right. As a matter of fact, we even split it: privacy from the perspective of the protection of your private life and the protection of your communication, versus the protection of personal data. So there are differences; there's more than one fundamental right at play there. Based upon that, we have law, but we also have policy development, and it's a very actively moving field. For example, at the moment we are working on a policy initiative around the free flow of data and around platforms, and precisely those are being put to the test by machine learning and AI, precisely by the questions that we have here on the table.

Yuet, how does it look in Asia?

Okay, so I'm not a computer scientist. I'm a lawyer. So I'm going to approach this from a regulatory perspective. And I think one of the challenges with Asia is that it's not even bifurcated just in terms of the laws and the regulations that are coming out of the region. In fact, when we talk about Asia, what do we really mean? Different people have got different views about Asia as well. But when you talk about privacy laws in Asia Pacific, I think the countries that come to mind as being at the forefront of regulation would be Japan and Korea, and then to some extent Australia and New Zealand. Following that would be countries such as Singapore, Hong Kong, Taiwan, and the Philippines, where they've got fairly new laws; some of them were actually put into place in 2012. Singapore, for example, is a country where I used to work, at the Attorney-General's Chambers, and in terms of the laws, they are progressive, but the fact is that they implemented privacy laws for the first time in 2012. So that again gives you some idea as to the importance that they place on privacy. And then in the last category, you've got countries such as Indonesia, Vietnam, and China. These are countries where we call them privacy laws, but they're not really based on privacy, not individual privacy anyway. And I heard today a lot about human rights, how privacy is a human right. For a lot of these countries, these laws emanate not from a motivation to protect human rights; a lot of it is about consumer rights, although I think some people would argue that consumer rights are, to some extent, human rights as well. And with a lot of these laws, in that last category of countries, what is challenging is that they don't have a single data privacy regulation. And I tell my clients, a little facetiously, but it's true to some extent... I do a lot of FCPA corruption investigations, for example, and in the course of that, we take a lot of emails throughout the region, which is why we need familiarity with data privacy rules and regulations.
And I always joke with some of my clients: don't look at the transparency index to see how risky a country is when it comes to corruption. Look at how many laws they have. The more anti-corruption laws they have, the more problematic corruption tends to be in that country. And it is the same for countries such as China and Vietnam and Indonesia. You find little bits and pieces of regulation. They refer to how privacy is a right of all citizens, but they don't really tell you how that's gonna be enforced. That is the kind of regulation you see in China. And so one of the challenges in Asia is just trying to harmonize the regulations for a lot of companies, a lot of our clients, that are trying to operate and transfer data across borders. Japan, for example, has got a new law that's gonna come into force in about two years, and that's probably the first time they actually talk about data anonymization. As for all the other countries, I think the idea of artificial intelligence is not even something the countries have seriously considered. You might see guidelines introduced by some of the regulators, but again, these are just guidelines and there are no teeth to any of them.

Let me pick out a point which I think is implicit in what you said, which is that we've all described the approach of the United States, Europe, and a variety of Asian countries to these questions from a commercial data privacy perspective. We're regulating the market: commercial actors gathering data, applying artificial intelligence algorithms to produce particular outcomes. But I think at the core of this question, from a regulatory and especially from a political perspective, is that when you collect a lot of data and you begin to produce these outcomes, that is of interest to government, and government access to data is inextricably intertwined with commercial data protection regulations. The recent tensions between the United States and Europe over the operation of American technology companies in Europe have to some extent been about commercial data practices, but ultimately they are rooted in US government access to the commercial data that is collected by American companies. So my question is: do you believe that even if we were able to find a harmonization, a standard for commercial data regulations that apply to big data collection and the application of artificial intelligence algorithms and machine learning, is it all undermined at the end of the day by individual countries' national security interests and their unwillingness to give up any kind of access to that data for government, national security, or law enforcement purposes?

I can actually just give a very quick example before we go to Europe and the US. China has got a provision, one of the few examples of data localization, where any information that relates to the medical information or health of its citizens has to be stored on servers in China. And another example is Singapore's data privacy provisions: the Singapore government and all state entities are excluded from those provisions. So that's a very good example of where the state's rights come first.
Yeah, perhaps building on that: with this whole question about national security and sovereignty, perhaps you also have to generalize a little bit to all the interests that are certainly governmental interests, or should be interests for society at large, such as safeguarding democracy. So I think one of the concerns, if you look at Chancellor Merkel's speech last week at the Medientage media conference, where she talked about the transparency of the algorithms of large platforms: this is in order to keep consumers properly informed, but it's also about what kind of bias may creep in through the algorithms in terms of the provision of news. And that's got everything to do with the way you exercise democracy. So there's an underlying debate about avoiding a situation where democracy gets polarized into echo chambers and we don't have a real debate anymore. And that's also a serious interest. There you're talking essentially about norms and values and to what extent they are shared internationally. Now, I think we can be optimistic and pessimistic about that. If we talk about data protection, we have after all been able to make an agreement between Europe and the United States, even if we do not have exactly the same starting point as regards data protection, let alone as regards national security. The Privacy Shield, I know, is going to be put to the test, and that's how it should be. But nevertheless, we got a lot further than we had at the time of Safe Harbor, because we actually started to describe that area of access by government for national security purposes to the data being transferred in the transatlantic context, and the safeguards for that. So it is possible, if you negotiate, to make an agreement on certain types of issues. Whether you can do that for everything and across the world, I think, is very doubtful. There are many places where norms and values don't work. If we bring it to the field of cybersecurity, that's where we clearly see it. We do negotiate internationally about norms and values in relation to cybersecurity, which has everything to do with AI as well. Are we getting very far? Well, only in little steps. So there's not a single answer to this question. There is a degree of progress between, let's say, those that have a degree of like-mindedness, but there are also many, many areas where we should be rather reserved or perhaps even pessimistic.

I think there are plenty of areas in which government access to data for purposes of national security or of law enforcement is relatively uncontroversial. I think we don't want to forget those. And of course, the international discussions around this issue have been going on longer than the conversation about AI. These issues are not simple, but if you look at Privacy Shield, for example, it is an example of the way in which it is possible for us to engage internationally and to get to a point where we can work together. As to these issues about fairness or non-discrimination, I think this is another area in which there is a broad alignment of interests internationally and in which there's a lot of progress we can make by working together.

Let me present a more pessimistic vision and ask for your responses to it, which is this: to me, it stands to reason that as the private sector grows more sophisticated with machine learning technologies, collects more data, and applies more powerful AI algorithms to that data,
it will be irresistible for government to reach into those companies, for legitimate reasons in many cases, but also perhaps for illegitimate ones, to gain access to that power. The example that you raised of the firefighter buying arson kits, I don't know where you buy those or where you get coupons for them, but the idea that law enforcement may not only tap your phone calls or your emails, but also look at your purchasing records or know your health data, put together a portrait of you, compare you against others, and calculate the probability that you may have committed a crime, is an extraordinary development, and one which I think governments in many legitimate cases would want to use. But what that says to me is that ultimately every country is going to want to control that data for itself in its own sovereign territory. So my question is, number one: are we headed for a global data sovereignty movement where everyone tries to have data localization rules, where the power of AI operated by domestic companies is used as a geopolitical asset? And second: if the countermeasure is algorithmic transparency, which I took to be Chancellor Merkel's concept, to what extent does that get you an outcome? Is that an effective solution? If Facebook turned around and said, okay, we'll show you how our algorithm works for news feeds, does that solve the problem? Suppose the answer is: yes, it reflects the actual behaviors of users and reflects back to them the things that they are most likely to click on. Do you then regulate that algorithm, tell Facebook it has to change that algorithm, and then how do you hold them accountable? How do you determine whether they have done so in a way that measures up to a particular standard? So I guess two questions. One is: are we headed towards a hard-power regime of data localization, in your view, at the global level? And two: even if we are able to use transparency as a tool to push back against excesses of AI, does it even work?

Let me start by taking the second part of that, about the value of transparency, which I think really goes to a desire for governance and accountability. One way to try to get there, to increase accountability, would be to say: well, open up everything, tell us everything about what your algorithm is, tell us everything about what your data is. But here I think is a place where we can apply better technical approaches to try to provide accountability, to try to provide evidence of fairness or non-discrimination, or accountability along certain dimensions, without necessarily needing to reveal everything. I think one of the traps we can fall into in thinking about this issue is to think that this is a problem caused by technology which can be addressed only by laws and regulations. But it's important to recognize, as I think the discussion today has, that technology can be an important part of addressing these problems, that there are technologies of accountability, and that we need to think in a creative way about how to put those things together. We also need to think, I think, about the ways in which forces short of legal prohibition can constrain the behavior of companies and authorities when it comes to the use of data. To the extent that what is happening is known to the public, and to the extent that there is an opportunity to provide evidence of fairness, evidence of accountability, that in itself creates a dynamic in which companies and authorities will often voluntarily provide that kind of accountability.
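As one concrete illustration of the "technologies of accountability" Felten is alluding to: an outside auditor can test an ad system's outputs for disparate impact using only logged group/outcome pairs, with no access to the algorithm itself. The toy log below and the 0.8 threshold (borrowed from the four-fifths rule used in US employment-discrimination practice) are illustrative assumptions, not a description of any real system.

```python
# Sketch of an output-only fairness audit: compare the rate at which two
# groups are shown ads for high-paying jobs, without seeing the algorithm.
# The log data and the threshold are invented for illustration.

def disparate_impact(log, group_a, group_b):
    """Ratio of high-pay-ad rates between two groups (1.0 means parity)."""
    def rate(group):
        outcomes = [high_pay for g, high_pay in log if g == group]
        return sum(outcomes) / len(outcomes)
    return rate(group_a) / rate(group_b)

# Hypothetical impression log: (profile_gender, ad_was_for_high_paying_job)
log = ([("f", True)] * 30 + [("f", False)] * 70 +
       [("m", True)] * 55 + [("m", False)] * 45)

ratio = disparate_impact(log, "f", "m")
print("high-pay ad rate, f relative to m: %.2f" % ratio)  # prints 0.55
if ratio < 0.8:
    print("below the four-fifths threshold: flag for review")
```

Audits like this are soft accountability in exactly the sense discussed here: regulators, researchers, or the press can run them on observed behavior alone, creating pressure without requiring the company to open up its code.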
We've seen that to some extent in privacy, where companies would like to be able to make strong promises to consumers for consumer comfort, knowing that they will be held to those promises; you get a dynamic in which companies can compete based on privacy. In the same way, if we have technologies and mechanisms of soft accountability, that can lead, number one, to a competition to provide a service in a way that's more friendly in order to bring people in. And it can also lead to the kind of accountability that occurs when some bad behavior is revealed. So I think there are a lot more opportunities there to use softer forms of governance and to use technology to try to work on that issue around fairness and governance.

Paul, do you think the General Data Protection Regulation is a sufficiently flexible instrument for softer forms?

Absolutely. Well, I find what Ed says really challenging, because I think technology does indeed need to be invited in to make things work really well, in line with the underlying intentions of something like the General Data Protection Regulation. So if you talk about informed consent, even informed consent about automated processing, that's a real challenge for technology. And then you can bounce back and say it's impossible, because with these algorithms we don't even know ourselves what's happening inside. But that's not adequate; that's not sufficient as an answer. Perhaps there are still other approaches, and I think Ed is referring to approaches where you can measure things like fairness, things like whether you actually understood what is happening in the decision-making. And also, I must say, we should move a little bit away from the monolithic assumption that consent, for example, is a one-off notion. No, there is an interaction that you can continue to have, and that's what technology can mediate when you're talking about consent as the use of the data evolves. So I'm kind of optimistic about the opportunities that are there in technology. When you talk about localization, again, a nuanced approach is probably necessary, because there is a real risk, I think you pointed to that, that data localization happens. It's happening already today, and we may not necessarily get an internet by country, but perhaps an internet by region; a kind of balkanization of the internet. At the same time, we have initiatives going on, and I referred earlier to the Privacy Shield. That's a way to avoid data localization, and there we are talking about personal data. We have a free-flow-of-data initiative going on to actually remove any undue restriction on the localization of data. And I think we probably want to differentiate: which domains are we talking about? When we talk about a public health problem, like the rising threat of the Zika virus, we have a non-localized approach to that. We have governance systems like the WHO's, and the professional collaboration in the field of health allows us to do big data, AI-type analysis on the data we are getting on Zika from all over the world, as a matter of fact. So for me, that's a pointer that in this debate we need to involve the governance that already exists. Almost any kind of governance institution that we have in the world that works will be exposed to the question of what you do with data and AI. And why not make use of those institutions too? So that may be done in a more differentiated way.
It will not work in every domain, but there are certainly domains where it will work. Will it work, for example, for the data that we have coming from self-driving cars? I'm not sure; we have not yet developed a regime there. So perhaps a necessarily complex, but therefore differentiated, sector-by-sector approach could work. And moreover, I think you learn from what you do in one sector for others. So it's not necessarily impossible to come up with governance. It must be said that there's a strong plea, I think, in Europe also, to come up with new governance approaches, and also to recognize that not all governance approaches will work. The real-time threat of cyber incidents may not be quite compatible with the type of governance that we have set up between people and organizations, which is relatively slow. So we will also have to review the types of governance that we already have.

And for Asia, I know this sounds a little self-serving, but I think we still need regulations, because so many of the countries still don't have something that would be taken for granted in the rest of the world. For those jurisdictions that have the laws in place, I think the question is how enforcement and policy positions are going to be shaped through the guidelines issued by the regulators. But there are so many other countries in Asia that still don't even have very basic privacy laws. At the end of the day, you still need those to be in place as the framework, at the very least. And a lot of Asia follows the notice and consent principle that is adopted in the rest of the world. In terms of data localization, a lot of it is usually done for various reasons, and usually not because of privacy. For example, Indonesia talked about localizing data, but the reason was the misguided belief that localizing data was going to help improve the economy. What the government didn't realize was that this was going to put off a lot of the multinational corporations from investing in the country, and so they held back from it. Which brings me to my last point: what we have seen is that all the international or multinational companies that have set up operations in Asia bring along with them regulations that they have to follow, because, for example, they are dealing with data from the European Union or the US, and because of that, they tend to follow the highest standard that is set. And so when you have consumers in Asia who see, hey, this is the way my data should be treated, this is the way an international corporation deals with my data and my privacy, then they start expecting that from the other institutions within the country. So there has been a lot of that cascading of privacy standards: even though the regulations aren't in place, you've got that economic pressure, to a large extent.

I wanna put one more provocation to the panel. But before I do that, I wanna invite you all to start thinking about questions that you may have for the panelists. We're going to reserve the last section of this panel for audience questions, so start thinking about that while I put this question to the panel, which is about notice and consent, which all three of you have now raised.
It is the basis of privacy law across the world at the moment, and yet, even before AI, it was already under fire, already under attack over whether it would ever be sufficient, for various reasons. There is an argument that notice and consent is a sham, because you're presenting a consumer with a 15-page document in nine-point type for a service that they want to buy, and no one ever reads it. They have no idea what they've consented to, even though they've been given notice. And once you click that box and hit "I agree," all the rights that you had up to that point are gone. Not all, but many. Second, as we collect more and more data and companies become diversified horizontally across multiple product platforms, a company may not know exactly what it's going to do with your data. And it may not know to give you notice at that point; and at what point do you build in multiple notification points? I've recently had occasion to talk to a number of founders of new startups, not just in Silicon Valley, I should say, but also in Europe, where I spent the last several years, in Berlin, and data is the new valuable property. People are building companies based purely on the idea that they're collecting lots and lots of data. What they will do with that data, how they will monetize it, how they will pool that resource with other resources, how they will be acquired and integrated into the data properties of another, larger enterprise: big question mark, but undeniably not a deterrent for venture capital flowing into those companies. Once again, this draws into question the basic notion of notice and consent. If we come into a world where data is pooled intentionally in a fashion to maximize the utility of personalization, it might not even be reasonable to ask a company to predict in advance all the ways in which that data may be used, and they may not be the only ones who gain access to that data and use it for purposes that may benefit or harm the user. So my question to the panel is: if we root the idea of an international standard on privacy policy, as it applies to big data and algorithmic accountability, in an old framework of notice and consent, are we setting ourselves up for failure from the beginning? Okay, by all means.

So I think notice and consent is not a sham at all. What we are challenged to do is to make sure that consent is informed, meaningful, freely given, and that there is a choice; and that's often simply not implemented. The 40-page contract is not meaningful. So how do you translate that into something which is meaningful? Besides, it must be said that in order to process your personal data, consent is not the only lawful ground; as the General Data Protection Regulation says, at least in Europe, there are also other legitimate grounds. Public health, for example, is one of those, and so issues around public safety, security, et cetera, may be grounds to process personal data without consent. Even the legitimate interest in direct marketing purposes may, as the law says, constitute a legitimate ground to process personal data. Now, the question is how you interact, because in all those cases, you will still have to interact with the data subject, the one that is providing the data. So how do you do that in a meaningful way? And I must say I'm still a bit puzzled as to why interaction with the user is a problem.
From a company point of view, you would often probably say: I'd rather interact more with the user than less, because each point of interaction is another opportunity to engage in the discovery of value and the delivery of value, to differentiate.

May I ask a follow-up? What if the user is dead? We're soon coming to a moment in our history when there are terabytes of data out there about people who are no longer living. And yet that data will undoubtedly have value to the companies that own it and to the governments that may gain access to it. How do you deal with that?

Yeah, so you are probably thinking of a case where you would like to invoke a public interest, for example, public health. And again, certainly European law says that if it is about such a public interest, we can actually use that as another legitimate ground to start processing data. So there are possibilities, but there are really big technical difficulties, I think, or actually underlying difficulties that have to do with algorithmic accountability, which are not resolved. Actually, we have a broad debate with the health community in Europe. The radiologists are saying: what do I do with all those data that I have, which I now have to put again under data protection? And how do I make sure that, for example, the right to be forgotten can be applied to that? So there are really serious implementation challenges, I think, and they will not always have the most ideal answer. But in a certain sense, we are looking at a legacy, a legacy that we can improve as the interpretation of the law evolves. So all of the communities that are involved in personal data in Europe, for sure, are also really called upon to look at what the technology and the law make possible and provide their interpretation of that, a common interpretation rather than a fragmented one. And that's clearly a challenge that still needs to happen, from public administration to radiologists.

Ben, an answer to your second question, actually: there are a number of laws in Asia where, if the person, the data subject, is dead, the concept of privacy doesn't apply anymore and the law doesn't protect that data. Open season on that data. Yes, unfortunately. And just in terms of the other question that you raised about notice and consent: I say this a little facetiously again, but we used to joke that we can draft these consent agreements and put in as much as we like, and no one is going to disagree. Everybody will just click "agree." And I read this book called Future Crimes. I have to say, after reading that book, I refuse to load apps on my iPhone to the extent that I can. It is very difficult to live without apps, but I probably have one of the lowest numbers of apps in the whole of Asia on my phone after reading that book. But I remember some of the statistics in that book: I think the privacy policy for Facebook is double the length of the US Constitution. And I think it was either PayPal or eBay, I can't remember which company, whose privacy policy is longer than Hamlet. And I've given a lot of presentations in Asia about privacy and data security, and I've always asked this question: how many times have you actually said "I don't agree" when the privacy policy pops up? In all the presentations I've given, only one person put up their hand, and that was a lecturer from one of the universities. And that's just like an academic, isn't it? To some extent.
But I think most people don't; they'll just click yes, because they don't have much of a choice. It's not because they don't think it's important, and it's not because people in Asia don't value privacy, but the difficulty is that there aren't many avenues for them to seek redress. And because you don't have the concept of class litigation, and it's not a litigious society in general in Asia, it's very difficult for individuals or consumers to get together and change the laws and the policies.

So this is a fundamentally difficult issue, right? The uses to which some data may be put may be extremely complex, and the implications of those uses for a particular user may be even more complex. So if we were to start with the principle that something should not be collected if its use would not have been acceptable to the user, it's not at all clear how you could put that into effect in practice. We know that telling users every last detail of what will happen and every last implication, and asking them to read all that before they disclose anything, is not practical; that's not the way people behave. Now, that said, there are a few strange people, like academics and privacy lawyers, who do read these things, and there are people who have built tools that look for changes in them and analyze them. So if a company does change its very long, longer-than-Hamlet-but-not-as-interesting privacy policy in a way that is relevant, there is some reasonable chance that it will be noticed and will trigger some public debate. So I think there are methods of accountability other than all the users reading all the things, which we know doesn't happen. But it is still a fundamentally difficult question, and if we were to offload that decision to someone else, we wouldn't make it terribly much easier to figure out what the right answer is as to which uses would be acceptable to the user or which uses are socially beneficial.

Maybe the issue is that it's got to be meaningful notice and meaningful consent. And just from a policy perspective, the problem with these consent agreements is that they shift everything onto the individual consumer, who doesn't really have the ability to reject the terms. So I think, when it comes to AI and all the other provisions, it's important for governments to think about shifting a lot of that responsibility back to the corporations, for self-assessment and things like that.

But I'm wondering if you cannot also start splitting it up, in the sense that, especially if it is about automated processing, you have to explain the significance and the envisaged consequences for the user. The point, I think, is that, first of all, it's very difficult for a user to understand and read all about that, and fundamentally it may be very difficult to say it right at the beginning. But that raises the question: why would you assume that it is only at the beginning that you ask for consent? Why not have a repeated approach to interaction with the user, as the system develops and learns and draws the consequences? At the moment the consequences become relevant, you could, in a number of situations, I'm not saying always, ask for consent again.
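[One way to make that repeated, just-in-time approach concrete is the following sketch in Python, with entirely hypothetical names: consent is checked at the moment a genuinely new purpose for the data arises, rather than being bundled into one up-front agreement.]

```python
class ConsentLedger:
    """Records, per user, the specific purposes they have agreed to."""

    def __init__(self):
        self.granted = {}  # user_id -> set of consented purposes

    def request(self, user_id, purpose, ask):
        purposes = self.granted.setdefault(user_id, set())
        if purpose in purposes:
            return True  # already consented to this specific use
        # A genuinely new purpose: ask now, when the concrete
        # consequence is visible, not in the up-front 40-page text.
        if ask(f"May we use your data for: {purpose}?"):
            purposes.add(purpose)
            return True
        return False

ledger = ConsentLedger()
# `ask` stands in for any UI prompt; here it simply declines.
allowed = ledger.request("alice", "traffic-based route suggestions", lambda msg: False)
print(allowed)  # False: the new use is blocked until the user agrees
```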
The impact is then immediate, which may be simpler to understand than a whole long text about anything that could potentially happen. That's a simple answer to that.

Talk to any of my clients, and they will want to make sure that they get all the consent right up front, because they don't want the obligation of having to go back to the consumer or the customers. So usually when we draft these policy provisions or agreements for them, they tell us right up front: can we make it as inclusive as possible? And that is what we do, because if there is nothing to prevent us from doing that, then why not? I think that's a difficulty.

I think what you're suggesting is to treat this more as a matter of user interaction design or user experience design. Rather than asking for everything up front, or trying to get extremely broad consent up front, you might ask for some consent initially and more later. How and when you do that may be difficult depending on the nature of the product and whether there even is a touch point with the user that comes later. But certainly, thinking of these notions of consent in terms of user experience design and user interaction design can be a fruitful way to get closer to a strong notion of consent in a way that is less burdensome on the user.

I'll just go back to that point from the floor. A lot of consumers don't really need to know the algorithm or to understand it; what they want to know is the different ways in which, and purposes for which, their data is going to be used. Not so much the algorithm, because I've heard that excuse before, where some companies say, well, there's no point in explaining the algorithms to the consumers or the customers because they're not going to understand them. But that's not what consumers are asking for; it's the change in use.

A quick follow-up for you: is it possible to have a discussion in the abstract about the notice and consent regime without looking at the market concentration in many markets for digital products and services? Because if you're choosing among two or three mobile phone companies, or two or three search engines, or two or three social media platforms, or mortgage lenders, or hospitals, asking someone to opt out because they disagree with the consent provisions is inviting them not to participate in modern society. I think there is a relationship between market structure and privacy policy, and in many cases it is decisive: you don't really have a realistic alternative other than to consent.

True, true. It must be said, I thought it was interesting what was going around on Twitter: someone attached a statement from the FCC, which had just issued some privacy guidelines for internet access providers, and which actually describes the situation when there is no choice. They say you have to be additionally careful when there is not a real choice, which may be the case there. So I think there's a certain sensitivity around the notion of fairness, which includes the notion of choice.

So let me at this point invite all of you to raise your hands. Tim, are we passing a microphone around in order to get everyone on the recording? I'll just start over here and work my way across the room. Could you give your name and affiliation before you ask your question, so our panelists know who they're talking to? Thank you. My name is Jim Garrett.
I'm the Dean of the College of Engineering at Carnegie Mellon University, and I want to come back to what Andrew started with, this idea of privacy versus personalization in conflict. The last part of this discussion raises the question: why don't we apply personalization to privacy? So far the only choice has been to take one blanket consent form, and it's either that or nothing. Whereas if there were some way for me to fill out a privacy profile describing what I did and didn't want to share, and how I wanted to share my data, could that not then be matched against whatever the company says is its privacy policy? Then I don't have to read every one of them; I simply spend the time saying what I'm about and let the interaction happen. It's personalization applied to privacy.

That seems to connect to the point you brought up about how competition in the private sector can potentially mitigate abuses of privacy policies. Maybe this is a question you can respond to.

Sure. A couple of avenues come to mind here. One is this idea that a user might check some boxes or slide some sliders in the user interface and give some idea of their preferences with respect to privacy, and then there would be either an enforcement of that on the user's behalf, or some kind of automated negotiation between the user's technology, say in their browser or app, and the company's technology, so that things would only happen within the bounds the user had said were acceptable. There have been various attempts to build those sorts of technologies. None of them have taken hold, for reasons that I think are largely contingent; it could easily have turned out that such a thing became popular, but for reasons too complicated to go into here, that has mostly not happened. The other approach takes more of a machine learning tack, where you ask the user a relatively limited number of specific questions about what they want, and then you have a technology that, on the user's behalf, tries to infer what decision they would make in other cases. And I think that idea of a personal privacy assistant that operates on the user's behalf is one of the technological vehicles that could develop. Again, there are contingent questions of technological development that may make that easier or more likely to be deployable. But certainly that is one direction in which users may be able to put technology to work on their behalf to manage this, because the complexity of these choices, if the user has to make every single detailed choice, is just too much.

Just one more piece. I think it's a very interesting idea, and the question is: will it hold against all four of Andrew's cases? It may not hold against the third case, which, I think, is that you can use the personal data but it's bad for society. So personalization of privacy merits discussion, but would it actually eliminate the risk of things being bad for society?

Maybe we take another question. Right in the back; the microphone is coming.

Yes, it's Jose Coulomb from the State Department. I believe you are focusing on only part of the issue of privacy, because there are other means of data collection that don't involve people clicking on the internet.
When you go to a store, for example Target, there are many cameras following you. You pay with a credit card. There's a lot of information they are gathering these days using machine learning, and they are using it for many purposes. We had the case of Samsung's smart TVs listening to people's conversations. So how would you address those? That's probably as big an issue as you clicking "I agree" on the internet.

Any responses? I think this gets to the issue that, if you have a model based on notice and consent, how can you talk about consent in a case where collection of data happens in the environment, such as with cameras or microphones that are out in the world? The cases that occur in a public place are, I think, some of the most difficult here. If there's a product in your home that has a microphone or camera and it's turned on without your consent, that seems not to be a difficult case from a policy standpoint. But in a public place, where there is no interaction with the user in which consent could naturally be sought, this becomes a pretty difficult issue, and I don't think we have all the answers to it by any means.

And for us it's also a real part of the debate, because there are actually two parts to the fundamental rights: the confidentiality of communications and your private life, and the data protection part. What you mentioned touches on both aspects. It is considered a very important right that your confidentiality is protected, that where you go you are not tracked, and even if that doesn't necessarily and immediately involve personal data, it is still a right to be protected. So it's really part of the debate in Europe.

Yes, right in the back. Hi, my name is Andrew Hanna, I'm with Politico. Some of you have talked about shifting responsibility back to corporations in terms of privacy agreements, and others have talked about soft reforms of governance in terms of shaping what data can be used. I was wondering if you could be a little more concrete and talk about tangible initiatives that could be undertaken at a policy level to allow for this to happen.

Let me start. I think this is already happening. If you look at the dynamics that drive the privacy policies of some of the large companies and the ways in which companies use data, there is a competitive dynamic in which companies on the one hand would like to be able to use data to optimize their business goals, but on the other hand would like to be able to promise consumers that the use of data is limited to things consumers would find acceptable. And of course those promises, once made, have legal force. So I think you see this operating already. It's inherent in a model of notice and consent that consumers may either withhold consent or take their business somewhere else if they don't like what's being done in a particular setting. This dynamic is driven both by the enforcement of law, for example by the FTC with respect to companies keeping their privacy promises to consumers, and also by the public dialogue, the public debate, and some of the press coverage of privacy practices. All of those things push companies to try to make stronger promises to consumers, which they then have to keep.

I think there was one right in the middle. Yes, yes ma'am.
Hi, my name is Carrie Ann, from the Organization of American States. My question is tied to the gentleman in the back who asked about other forms of data collection. Most of you will recall when Tay came online in March earlier this year, what happened to her, how she actually collected data, and the result. In terms of privacy, there's so much open data available, in blogs and elsewhere, that is private in nature and has some amount of personal data. On Facebook, you can build algorithms that scrape data from all those open sources. How is that tied back to consumer protection if there's actually no obligation on the person who may be developing these new AIs, which we don't know about, that are actually collecting it? How does privacy really come in if we're pushing our data out there, open to anyone to use? I just want your thoughts on that.

Great question. Yeah, strictly speaking, if you are able to start re-identifying, it becomes personal data and you still fall under data protection law. So you have to look at how far you push the boundary in using open data to re-identify, and the case that you mentioned is real. So that's where people have to take responsibility, or at least in the European situation they would be liable under the law.

Hi, my name's Al Gombas, I work for the State Department. I'm curious: if we were to create a scenario where we can negotiate the privacy restrictions, what might happen, I think, is that companies will incentivize consumers to give more data, offering discounts or something similar when they want more data or consent from the individual. I'm wondering how that might play out: whether you think that's a good idea or a bad idea, and whether we should have a blanket law saying, no, you can't do that, you have to offer the same discounts to everybody regardless of the amount of privacy they require of the company. And how might consumers be taken advantage of? For example, poor consumers may be in a position where they feel they have to give up more data just because they can't afford the service without doing so.

I think there was a study, actually, that showed that consumers prefer giving some information and then having the ability to consent if additional information is collected or different uses are going to be made of the data. And I think the other point that this study showed, I can't remember its name, was that consumers generally are willing to give more information if they get something in return. And there, I think, we go back to the notion of fairness, because one of the problematic areas we have is that the consumer or the customer doesn't know how data is being used, or it's being used in a different way and no notification has been given. And the third thing is that the companies are the ones who benefit: they've been able to monetize the data or use it for marketing, but the consumer hasn't actually benefited from that different use of that additional information. So at the end of the day we go back to notice and consent, not necessarily right at the start of a relationship, but perhaps as that relationship progresses.

Perhaps I can add something to that, because for me there are two dimensions in it. One is indeed: do you provide fairness, in the perception of the user, while the data is being used?
A number of people are saying that's not the case, because you get disproportionately a lot of value out of it and you don't give part of that value back to me. That's one part of the debate. The other part of the debate is: does the consumer actually have a really fair choice right at the beginning? If there is a de facto oligopolistic or monopolistic situation, and look back again at the statement the FCC made last week about access to the internet, you cannot be forced. That is essentially, I think, what they are saying: you cannot be forced to give up your browser data, your browser preference data, or else you don't get access to my service, when there's not much choice in that internet access service. So somewhere there's also the question of whether there is a reasonable balance, the moment an essential service is being provided, versus the use of these personal data. You cannot start excluding people from access to an essential service.

It is not that different from regulations where you need a government to step in to start the ball rolling. I think it's going to be quite difficult in some sectors to wait for the companies to take the initiative to regulate themselves. I think this is one of those issues where you have to have the government step in and just start the ball rolling.

Yes ma'am, right here in the front. My name is Erika Basu, I'm a PhD student at American University. My question is about the notion of democracy in all of this. We are speaking to a room full of people who have a fairly good idea of some of the terms we're using, like notice and consent, terms of service, data privacy, and AI. I'm just wondering what this all means in terms of access to even this information about what these terms are. Is it just a conversation between policymakers and corporations who have access to these definitions? Or is it really a conversation you're having with the users who get affected?

A great question about literacy. In practice, you see a lot of discussion, a lot of chatter, among policy experts, and you see more occasional flare-ups of direct public interest in some of these issues and practices. As is often the case in governance, the elites are sweating the details every day, and there is a corrective when the public notices something that seems quite wrong to them and speaks up loudly. I think that is how these things often do operate, and we certainly do see those flare-ups of direct public interest from time to time.

One of the points in the debate in Europe is also whether machine learning and AI should be made more widely available, in kind of an open AI type of environment, which could actually be quite an interesting point for international cooperation. So that's kind of democratizing the tools themselves.

Yes sir, in front. Thank you. Daniel Reisner from Israel. My question is, Ben, you mentioned old frameworks when we were discussing this issue, and one of my questions relates to one of the oldest frameworks we're using, which is the concept of a state, in the framework of this discussion. Because we all realize that we've globalized every element of the discussion. The data, in spite of localization efforts, is globalized; companies hold data.
The same piece of information is usually split between two or three different locations, and some of my clients split data up over different continents so that you don't actually get the same piece of data in any one location anyway. And the company holding the data is itself multi-structured and sits in 25 different locations as well. So on the one hand, the data is globalized, and the players are globalized. That raises the question: what is the role of the state? I'll give you an example which I faced relatively recently in Israel. Part of the Israeli government, not the whole government, called me up one day and told me they had decided to regulate an international cloud services provider. And I asked them, why do you think you should regulate them? They're not an Israeli company; they're not active in Israel per se, although you can buy the products online, et cetera. And they said, oh, it's very simple: because they offer the services to an Israeli government entity. And I said, but the cloud sits somewhere in Europe, I think, and the company is an American company, et cetera, et cetera. And they said, yes, but the service is being offered in Israel, so it's our job to regulate it. And I pushed back and said, well, if you want to regulate it for that purpose, then 211 other countries in the world could legitimately make the same argument, because it's a global service, right? Do you really think that makes sense? And they said, we never thought of that, we'll take it under advisement, and I haven't heard from them since. Now, the issue I want to raise is: what do you think we should be doing? Governments are still our main tools of policy, but when we all recognize that Facebook has more to say about the privacy of its constituent elements than any government in the world, with apologies to all the governments represented here, are we still having the discussion in the right forum, or should we be thinking of a different mechanism, where we actually have an engaged discussion with the right peers in the right forum? A very simple question.

Well, you see something similar happening in the debate around cybersecurity, which is considered by some to be very much a national issue, but global companies are saying: I want to buy the best cybersecurity in the world. I don't really care where it comes from, but I need to have the best, because I have a global company. Is that necessarily contradictory? I don't think so in all cases. Does it mean that you need to go for some form of global governance? Well, at least a form of international governance, yes, because you need to have an idea of what the quality of cybersecurity is. So I think that demand-side and supply-side cooperation, if I simplify it that coarsely, could be quite fruitful in a case like this. So what are global companies actually asking for when they talk about data protection and privacy and machine learning, and how is that looked at from, let's say, the perhaps more nationally determined cultural values around it? And I think there's also a plea in the community to make sure that ethics, the cultural-values discussion, is really part of the debate around AI and machine learning, not only for academics but also for the institutions that are involved. I don't think you can get very far if you do this only nationally.

A quick follow-up question; I think his question is really an important one.
Do you think there are any global institutions that could channel national interests effectively, at least at a minilateral level, meaning the largest number of states that are willing to meaningfully participate in a single standard?

There isn't really an organization I could name as such, but I pointed earlier to certain sectors in which you can start talking about the governance of data. So you can build upon some of the existing governance that is there and make it more AI- and machine-learning-aware. You do not necessarily need to invent something new, but perhaps we do need to talk about additional, somewhat less formally institutionalized forms of governance that can tackle this. There's an interesting proposal on the table from Nesta, a think tank and funding organization in the UK, that talks about creating a machine intelligence commission that would work more on the basis of notions from common law: you let it evolve as you get exposed to practice, and that would really bring experience together.

Other comments on this point? We have about five minutes left, so I'm going to try to take a few more questions. Yes sir.

Carl Landworth, from the George Washington University and the University of San Francisco Privacy and Research Institute. We rely largely on lawsuits to control corporate behavior in regard to privacy, and in that case people have to be able to identify harms. I'm concerned about that ability in the context of AI and machine learning. Do you think we have that ability?

That's a tough question, and it gets to some deep technical issues, as you know. The question of why an AI system did a particular thing, and what that system might have done had conditions been a bit different, can be difficult to answer. But depending on what kind of decision it is that the system made or assisted in, there are different legal regimes that may operate, at least in the US, and different legal burdens may apply to the company or institution that is making that decision with the help of AI. So I think it's a more detailed question as to what kind of showing, what kind of governance, is needed. But I also think that, to the extent that people are naturally skeptical of whether complex AI-based decisions are being made in a way that is fair and justifiable, using these technologies in a way that is really sustainable in the longer run will require greater effort at being able to explain why a particular decision was made, or to produce evidence to justify the fairness or efficacy of the decision being made. It's not a simple issue, but I do think that the public, in protecting themselves, and government, in protecting the public against the sorts of harms you talked about, are not without either legal or technical capabilities.

Let me ask a question that sums up several I've heard so far. Given the apparent weaknesses of notice and consent, but recognizing that it's the tool we have, and recognizing the challenges of identifying harm in adjudication: is there a combination of tools that might be used that are rooted in transparency? What does this algorithm do, or what is it intended to do? Then we can get a better sense of whether it is producing a harm or may produce a harm.
And that harm, or some approximation of that risk, should be disclosed in the notice regime. What is the combination of tools that might best produce a framework for handling these technologies as we move forward? Do you want to jump in on that?

Yeah, I think that's a very good question, and Ed's comment is just right, but there is something very interesting here. When you're an AI engineer building one of these systems, it's sometimes very hard to diagnose why your system did something, but you always have to write down something called an objective function. For instance, if I decided tomorrow to release a program to help people navigate the streets of Washington in traffic efficiently by tracking everyone, all the cabs and all the other vehicles, and if I write down that my objective is, for each user, to get them to their destination as quickly as possible, then even if I'm using some fancy algorithms that I don't quite understand to accomplish that, I can show that objective to a lawyer or a policymaker: this is why my algorithm is pulling data from many people. On the other hand, if I supplemented it a little bit, because I'm getting paid by a coffee company to route people past their coffee shops, then again, that will be sitting there in the code. So when you think about an AI or machine learning algorithm being written and someone says, well, they're so complicated we can't explain them, that's not a legitimate answer, because when you write an AI algorithm, you always have to write the objective function: what is the thing the AI system is trying to do? And so if you want companies, or governments, to be clear about what their AI is doing, it is legitimate to say: show me the objective function.

Maybe we will leave it there, with Andrew's optimistic vision of a possible way forward. I really appreciate that. Please join me in thanking all of our great panelists for that discussion today. Thank you.
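[As an illustration of that final point: a minimal sketch in Python, with hypothetical names, of the two objective functions described, one that purely minimizes travel time, and one quietly supplemented by a sponsorship term. Whatever opaque optimization machinery minimizes them, the supplement itself is visible in the objective.]

```python
from dataclasses import dataclass

@dataclass
class Segment:
    """One leg of a candidate route (hypothetical model)."""
    expected_minutes: float
    passes_coffee_shop: bool = False

def travel_time(route) -> float:
    """Estimated minutes for the user to reach their destination."""
    return sum(seg.expected_minutes for seg in route)

def honest_objective(route) -> float:
    # Stated goal: get each user to their destination as quickly as possible.
    return travel_time(route)

SPONSOR_BONUS_MINUTES = 5.0  # how much detour the sponsor's payment "buys"

def supplemented_objective(route) -> float:
    # Same goal, quietly discounted for routes passing the sponsor's shops.
    # An auditor asking "show me the objective function" sees this term.
    discount = SPONSOR_BONUS_MINUTES * sum(1 for seg in route if seg.passes_coffee_shop)
    return travel_time(route) - discount

# A sponsored detour can now "beat" a genuinely faster route:
fast = [Segment(10.0)]
detour = [Segment(12.0, passes_coffee_shop=True)]
assert honest_objective(fast) < honest_objective(detour)
assert supplemented_objective(detour) < supplemented_objective(fast)
```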