All right. Hello, everyone, and welcome. How are you? Let's try again. How are you this morning? A very warm welcome to the Berkman Klein Center at Harvard Law School on this sunny but windy and slightly chilly day. We're really delighted to have you here. My name is Urs Gasser. I serve as the Executive Director of the Berkman Klein Center. I'm also on the Harvard Law School faculty, as is a colleague whom I will introduce in just a bit. And of course, I'm very, very pleased to moderate this special conversation about the ethics of digital transformation. And I'm, of course, even more pleased and actually honored to welcome the Federal President of Germany, Frank-Walter Steinmeier. Hello. Thank you, sir. We are joined by a wonderful group of colleagues and experts. And, if you don't mind, I will just briefly introduce you, and we will get to know each other a bit better as we go along and talk about your work, and of course as we engage in an opening conversation. So this should be as interactive as we can make it. Eva Weber-Guskar is an ethicist and philosopher, currently at the Ruhr University Bochum. She's doing amazing work, and I'm already looking forward to learning from you today. Quite often, these debates about ethics happen without having philosophers around us. I'm grateful that you're here. Matthew Liao is at NYU and is a professor of bioethics. He also runs a center on the same topic. And I'm particularly curious to hear some of the lessons learned from past cycles of technological innovation as we now talk about digital things and AI and IoT and the like. Jeanette Hofmann, welcome back. Great to have you here. Jeanette is a professor of internet policy at the Free University in Berlin. She's also a director of the Alexander von Humboldt Institute for Internet and Society in Berlin. We introduced the president already, and he doesn't need an introduction.
So next we have Dean Melissa Nobles, who's the Dean of the School of Humanities, Arts and Social Sciences at MIT. Really great to have you here. As we will hear more about, MIT is building a new school of computing, the Schwarzman College. Lots of interesting things are happening there at the intersection of engineering and ethics, so I'm looking forward to your thoughts in this conversation. Wolfgang Schulz, professor for media and public law at the University of Hamburg. He's also the director of, and now I have to read that because I still cannot remember it, the Leibniz Institute for Media Research, which is known to me as the Hans Bredow Institute, but I learned it is important to emphasize the Leibniz part. Crystal Yang, a really great faculty colleague here, professor of law at Harvard Law School. She does wonderful, important work on criminal justice and the use of algorithms and data in that area. We'll talk more about that. So as you can see, a fantastic lineup, and of course I'm so grateful to you, Mr. President, that you joined this group as a participant. I get a sense already that you're ready to jump in and will take over the moderation function in due course, which is totally fine and will make my job easier. So one or two logistical notes. First, we will end at roughly 11:30. That's the plan. Some segments of the conversation may be in German. You should have a translation. If it's okay with you, I will continue to moderate in English, and I think the reason is straightforward: my Swiss accent is so strong when I speak German that it's easier for the Germans. So with that, Herr Bundespräsident, here's the question for you to start us off. We met the last time in 2012 in Berlin and had a conversation about what it means to make good policies for the internet age. And I actually googled this morning and tried to remember what happened in 2012. It seems like in internet time it's more like 70 years ago than seven years ago.
And when I googled things that happened then in the technology space: the Google Glass project was kicked off, the iPad mini was introduced, Facebook went public, and a bill was signed in California under which self-driving cars are now allowed and regulated. So I'm wondering, that seems to be a very different stage in our digital transformation process if you look back only a few years, and now fast forward to 2019. Where have we arrived? What are you thinking about? What are your concerns? What are your hopes, and how does that connect with the topic of today? Well, thank you very much indeed for these kind words of welcome, ladies and gentlemen, dear students. I think it is fantastic, you know, that although you were given the alternative of enjoying a sunny though somewhat windy late-autumn day, you have nevertheless decided in favor of the alternative of coming here into a closed room to listen to us. Thank you very much for that, and thank you also for reminding me of the year 2012, which I remember well for quite different reasons, and I'll get back to that in a second, you know, my visit to Boston in 2012. But allow me to begin by saying that I am not here for the very first time, and I'm always happy to be back in this academic, scientific center, a center not only with regard to the United States of America but also in a much broader sense, because it is a center that is exemplary in bringing together researchers and academics from all parts of the world, from all countries of the world, to make them work on subjects of common concern. And when I remember that visit in 2012, but earlier visits too, and later visits, you know, no matter whether we talked about foreign policy issues or other issues, either here in the hall or in other places at Harvard, whether we talked about questions to do with climate policy or about the state of affairs of transatlantic relations, rest assured that every time I came here, I returned home having benefited to a large extent from the discussions I had
at Harvard. There was one exception, and that brings me back to 2012. Really, just once did I risk myself, my whole existence, here at Harvard, because I allowed myself to be talked into throwing the first pitch at a baseball match, and I was extremely naive. Never before had I attended a baseball match, held a baseball in my hands, nor been in a baseball stadium. And my then colleague, Secretary of State Condoleezza Rice, we all remember her, was aghast when she heard what I was about to do, and the only comment she gave me was: don't do it. But you know, that is typical of us Germans; I had accepted, and I didn't want to go back on my promise. So once we entered the stadium on the afternoon of that day, you know, I got an inkling of why my colleague Condoleezza Rice was so aghast, because the stadium was filled to the very last seat, 40 or 50,000 people in the audience, and I had a certain feeling that they hadn't come just because of me, but because they wanted to watch the match. And what a match it was, really, here in the United States: the Red Sox were playing the New York Yankees, and I realized all of a sudden that this is not just any match, it's about religious issues, really. Still, it worked out somehow; I survived it, and having survived that experience, I was happy to come back every time I had the chance of returning to Boston. But today it's a different topic, really, that brings me here, different from the topics we focused on in past years. We're no longer on the threshold of digitization, of the digital age, but we have already entered that age. I've come here because the topic we will be talking about directly refers back to topics I'm focusing on in my presidency: the future of liberal democracy. That is, how does the internet, how do Facebook, Twitter, algorithms and anonymity on the internet, how do all these things change the democratic culture of debate, which is of such great importance to us in Germany, just as much as you
do in the United States of America. Despite the daily waves of outrage that you have to live with, how can we make sure that we keep a general overview? How can we distinguish what is important from what is unimportant? And this culture of thinking in simple opposites, yes or no, black or white, harsh approaches: does it take away from us our ability to see the nuances between black and white? Are we capable of doing that? Do we continue to be capable of entering into compromise, which I believe to be vital for any democracy, if we no longer have the time to differentiate, to see things in their nuances, to carefully weigh the pros and cons, because it's no longer popular? We talked about this yesterday in Boston with American and German academics in great depth. Today, though, we're again talking about digital transformation and how it has changed our lives and daily experiences, but as Mr. Gasser kindly indicated, we will be focusing on a different priority. It's not really about the question of whether we need digital technologies; they're there anyway, and no one is denying the fact that they open up enormous opportunities for all of us, when it comes to fighting poverty, for example, when it comes to tackling the impact of climate change, when it comes to combating diseases and their effects. Undoubtedly, Germany is a country that has no resources of its own apart from human resources. We want to be a country that has technology to offer, and we want to participate in the developments they entail. And that as a kind of introductory remark on my part. As regards the topic we intend to talk about today, a code of ethics for the digital transformation, I would like to just briefly focus on why this topic is so important to me. I have actually come from two visits, and I refer back to those visits. I visited Stanford last year, focusing again on the future of digitization. A few days before we traveled there, it was in the papers
that Elon Musk had bought up a company that was engaged in research on brain implants and that was doing very well in that regard, and that this might help tackle diseases like Parkinson's and Alzheimer's. I learned a lot about the imagination of researchers during my encounters there, about how one can influence brain activities with the help of implants and algorithms. This has undeniable and obvious consequences. But at the end of the discussion, it was someone who is very well known in the United States, George Shultz, the former secretary of state under Ronald Reagan, who also is, or was at the time, a member of the board of Stanford University, who said: guys, really, I'm fascinated by the scenarios of the future you have been painting, but let's not forget, we are living in a democracy, and democracy relies on independent, self-determined, confident human beings if it is to survive. And he addressed himself to the researchers and the academics: so when developing these technologies, don't forget to think of the consequences of your inventions and how they fit into democracy and its principles. And my second trip, and I'm going to be brief about this, was my visit to China. There, too, we focused on this topic, and we also talked about social scoring, the opportunities, the perspectives that result for the members of society. The debates we had were not easy, because at the beginning the Chinese didn't understand why we were asking these questions at all, and why we would find some of these things complicated that come up in the context of social scoring, because they said: we have 80 or 90 percent popular support for these topics, why are you against it? You know, we, who live under different political circumstances, are scared and shocked by the idea of having to submit to a total surveillance of all aspects of our lives, by the idea that no matter what we do, this might be linked up to a system that assesses
our performance in a negative or positive way, and that this of course has an effect on the way we develop as human beings, that hopes, wishes and dreams are becoming externalized, that they are stored in software I no longer have any influence over. Our concept of individual responsibility and of personal freedom is being called into question by such an approach. We, however, know that this is not a problem that is exclusive to the Chinese. German companies, American companies that invest in China, that employ people there, will be working under the very same conditions, and thus we have to have an interest in what is happening here. But let me close by saying that the debate in China hasn't come to an end yet; it's still ongoing. We don't know what will be the outcome, the result, of all those tests and experiments that are being carried out in China right now. But the obvious question is on the table: is there something like a minimum of morals for the digital age? Shouldn't we work to have something like that, like a common expression of the limits of the digital future in the decades or centuries to come? Which brings me then to the question of whether we do not really need a much more intensive exchange of thoughts between the tech community, political science and the philosophy of the individual than is happening at this point in time, at least as I see it. Well, if I, you know, if I were to choose, I would very much like to be in a position where I could leave Boston today having received confirmation from all of you that I need not be afraid, that I need not be concerned, that the debate is taking place with the very intensity that I would wish to see attributed to it. But whether that is the case or not, we will now have to hear and see from you. I very much look forward to this debate. Thank you so much for setting the stage so beautifully, and I realized while you were speaking that my American colleagues didn't have
a translation. How did you follow it? Very nicely; I very much appreciated the baseball reference. There were a couple of words in there I kind of got. You really set the stage so well; should I repeat it exactly? But as a summary: I heard Stanford, right, and of course you picked up on that, which is exactly the segue to my question. Sure. So the president was putting the societal change that we're going through, in which technologies of different sorts play such a vital role, in the larger context of the future of democracy and the question of how we want to live our lives, interact with each other and shape our future. And within that, as he picked up on a trip to Stanford and pointed out already, and that's a theme I want to follow up on for a few minutes, there are tremendous opportunities, although currently the focus in public discourse is really on the risks of new technologies, particularly in Europe. But before we go into risk mode and talk about all the pitfalls of these new technologies, I would like to pause and really zoom in a little bit on this question: what can technology do for climate change and the other areas that the president mentioned, to address some of the big challenges of our time? And there is this other place closer to home, MIT, where many of these technologies are developed in the lab, and I was wondering whether you would be willing to share maybe two, three examples, also from your humanities perspective, that give you hope and optimism, maybe. Sure. Good morning, everyone. Well, you know, one of the things about MIT: I kind of hesitate in a certain way to name just two or three, since the institute is so connected to technological innovation. So I think I'd rather say a bit about what has made MIT such a leader in thinking innovatively, and a big part of that has been the commitment to collaboration across all five schools. So it's a recognition that many of the problems that the
world faces, obviously global in nature, require knowledge from all domains; that it isn't just a scientific problem, it isn't just an engineering problem, it isn't just an economic problem or a social problem, it is all of these things together. And part of our strength has been putting together research programs to deal with these. So we have, for example, the MIT Energy Initiative, which brings together professors from engineering, science, humanities and the social sciences, the Sloan School to look at the economics and the business models of what is sustainable and what is not, as well as architecture and planning to look at the ways in which climate change and the way we use energy are changing how we structure cities. So it is the scope of the problems, and a commitment to putting in intellectual energies that are commensurate with them, that I think has set MIT up well for thinking about the future. So I hesitate to name any particular example, except to say that the problems are so massive there is no way that technology cannot be a part of it, right? And the issue is how we think creatively about technology to make sure that's happening. And that's a big part of what education has to do: to connect students, to help them understand that technology is an expression of human endeavor, right? We created technology; technology doesn't create us, and we have to start with some basic commitments. So that's where we are now, and I look forward to saying a bit more later on about the College of Computing. Fabulous, thank you so much, that is very helpful. As an iteration on this theme, and, you know, taking your point where you argue, well, there is no future without getting technology right in a way that helps us to address some of these big challenges we face as a humanity, but also to embrace the opportunities: I was wondering whether you would be willing to share your thinking around this topic. Much of the ethical debate these days is focused on ethics in the sense of telling
us what not to do, right, what lines not to cross, and we will definitely return to that; this will be a key part of the panel. But before we go there, I was wondering: is there some sort of an ethical obligation for the good use of technology, basically a moral imperative that would almost stand in contrast to the precautionary principle that's so popular in Europe these days, and say, no, we have to double down on developing technologies for the social good and in the public interest? How does a philosopher or an ethicist think about that? Yeah, thank you for that question, I'm happy to answer. So I think there are at least two ways to understand your question. First, we may ask if there is a moral obligation to generally use digital technology, now that it has been invented and developed up to a point where so many concrete applications are possible. But my answer to this would be no: there's no general moral obligation to do what can be done, because digitization is just a means, and moral obligations refer to ends, to purposes, not to the way we get there. And so it is an open question whether digitization is the best way to get where we want to go, to our moral purpose, which is, as you already pointed out, a good democracy, human flourishing and so on, and we just have to see exactly where digitization is helpful and where not. But on the other hand, if you ask whether we theorists, like us here on the panel, should point out possible positive uses of technology more often, I would say yes, and that's important too, because otherwise the development of digital technology is mostly driven by interests in financial profit, and this is not the best premise for the best outcome from a moral perspective. So it would surely be good to have more people pointing out the positive uses, but I think there are already quite a few examples of that, too, where reflection and realization go together. For example, at the Weizenbaum Institute in Berlin, where I was a fellow this summer, a
young colleague went and developed an app which enables people from different parts of the political spectrum to chat and discuss with each other online, for example. And yeah, I mean, there are a lot of opportunities and we should point them out. But I also want to add that these projects always have to be chosen carefully, because, as you mentioned already with climate change, we can do good things against climate change with digitization, but on the other hand we also have to be aware of the fact that digital technologies themselves are consuming masses of energy. So it would be best to choose only those projects which have a really urgent reason; there has to be something important at stake for us to invent and apply new technologies. And I remember the British philosopher Derek Parfit saying that all humans with two healthy legs should use the stairs instead of the elevator in order to save energy, because, he said, elevators are just made for people who cannot walk. In a similar way, we should always watch out for the urgent reasons for which we invent digital technologies. And what is urgent, what's important, always depends on the domain; it's different in every domain. In medicine, for example, it is the diminishing of suffering; in law it's justice; in democracy it's participation and the well-founded formation of political opinion. And only then, when we have identified precise moral purposes and we see that we cannot attain them except by digital technologies, only then, I think, might we be seen as obliged to use them. Wonderful, a great segue. You pointed out some of the big questions, but also that these questions can only be answered or worked through in a particular application context. If I may get a little bit more specific and take the conversation from 30,000 feet a bit lower, to 10,000 feet maybe, and take two examples that illustrate some
of the struggle over how we embrace opportunities but also protect against risks. Matthew and Crystal, as I already mentioned in the introduction, you have interesting work that serves as a case study in our context. Matthew, focusing on health and public health and the role of technology, whether it's AI or IoT: how are some of these questions that Eva identified crystallizing, and where do you see things going? What are some of the concerns, what's the state of play? Yeah, so good morning, everybody. So as Professor Gasser has said, I'm a philosopher, and I have a book coming out next March called The Ethics of Artificial Intelligence, and we cover a number of these different issues in the ethics of AI. And one of the applications of the ethics of AI is in the realm of healthcare. There are actually a lot of really exciting opportunities and a lot of development, a lot of things being done in the area of healthcare. So, for example, machine learning is being deployed to screen for cancer cells; it's been found to be almost as effective as radiologists. It's also being used in ophthalmology; it's been used to screen embryos, to figure out whether an embryo is going to be viable or not; natural language processing is being used to figure out whether people are having suicidal thoughts. So there are a lot of really exciting developments currently underway, and what that means for us is that it can really, for example, reduce healthcare costs. In the US I think we spend about three to four trillion dollars on healthcare each year, and so one of the things machine learning can do is reduce administrative costs in healthcare, for example. It can also assist in drug discovery, and finally, another example, it can really realize the vision of precision medicine: for example, Fitbits and wearables to figure out healthy lifestyles, what you should be eating, your calorie intake, and so on and so forth. So all those are really
exciting developments. I'm an ethicist, so I also think about some of the ethical problems, and I just want to very quickly share some of the ethical concerns with you as well. So one of the biggest challenges with machine learning is that it requires a lot of data, and so what that means is someone's got to go out there and collect all these data, and then you get into issues about privacy, especially in healthcare; it's personal data that we're talking about. So, you know, one obvious example is Facebook and Cambridge Analytica collecting a lot of information from Facebook users. Another example is GlaxoSmithKline: they recently bought into this company 23andMe, the ancestry type of service where you upload your information and it gets your genetic information, so now they have the whole database. And so one of the things we really need to worry about is whether they are collecting the data appropriately, whether they are violating rights, and what the implications are for the individuals. Another issue is going to be the garbage-in, garbage-out problem. So, you know, the algorithms that we're using today are only going to be as good as the data themselves, but what we're finding is that sometimes the data sets that we're collecting don't have accurate representations of the subjects. So take, for example, self-driving cars. It turns out that self-driving cars are not so good at detecting people of color, because the training sets, the training data that they use, don't have enough people of color in the data set, and that's a problem when we deploy that algorithm in the wild. And I'll just say one more thing: the biggest concern I have with machine learning right now is something called deep learning, and deep learning is actually a technical term; it just means using a big network to
figure out, you know, how a machine should act, and it's powered a lot of the recent developments since 2012, a lot of the new breakthroughs. But one of the problems with deep learning is that it just doesn't capture the causality, the causal relations; it doesn't really understand what it's doing. It's kind of like linear regression, it's a lot of math. But here's one problem: there's something called an adversarial attack, and one type of attack is something called the single-pixel attack. So machine learning is very good at image classification; it can take images and classify them very accurately. But researchers have found that if you just take an image, say an image of a car, and you take one pixel and change it from black to white, the machine learning will completely screw it up. So, for example, with the image of the car, it'll now classify that image as a dog with 99 percent confidence. And just imagine deploying that type of machine learning in the context of healthcare, where people's lives are at stake, or in the context of self-driving cars, right? And so I think we're going to get into more of these discussions later, but I think that's where we have to be careful about rolling out these technologies. Crystal, does that sound familiar, listening to these stories from health, when you look at your work on the use of algorithms and data in the criminal justice system, or where are the differences?
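The single-pixel attack described above can be sketched in a few lines. This is a toy illustration under invented assumptions: the `predict` function is a deliberately fragile stand-in for a trained image classifier, and the 3x3 black-and-white "image" is made up for the example. Real attacks run the same brute-force search idea against deep networks over many pixels and color values.

```python
def predict(image):
    """A deliberately fragile 'classifier' (hypothetical, not a real model):
    labels the image 'car' if a weighted pixel sum crosses a threshold."""
    weights = [1, 1, 1, 50, 1, 1, 1, 1, 1]  # one pixel dominates the decision
    score = sum(w * p for w, p in zip(weights, image))
    return "car" if score >= 50 else "dog"

def single_pixel_attack(image, model):
    """Brute-force search: flip each pixel (0 <-> 1) in turn and return the
    first perturbed image whose predicted label differs from the original."""
    original = model(image)
    for i in range(len(image)):
        perturbed = list(image)
        perturbed[i] = 1 - perturbed[i]  # flip black <-> white
        if model(perturbed) != original:
            return i, perturbed
    return None  # no single-pixel flip changes the label

image = [0, 1, 0, 1, 0, 1, 0, 0, 1]  # a 3x3 'car' image, flattened
print(predict(image))                 # prints "car"
idx, adversarial = single_pixel_attack(image, predict)
print(idx, predict(adversarial))      # flipping pixel 3 yields "dog"
```

The failure mode is exactly the one Matthew describes: a change invisible to a human (one pixel) flips the label, because the model leans on brittle features rather than an understanding of what a car is.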
Yeah, I think there are a lot of similarities. And I think, as some of the other panelists have pointed out, while algorithms are now used in basically so many parts of society, one of the areas where they've had a very dramatic increase in usage is the United States criminal justice system. And the algorithms here we often call risk assessment instruments, because what the algorithms are trying to do is predict somebody's future criminality. These instruments are now used at various stages of the criminal justice system, from policing to pretrial and bail decisions to sentencing to probation and parole as well. And just for some examples, take predictive policing. One of the common technologies is called PredPol, which is used by the Los Angeles Police Department and over 60 other police departments across the United States. It uses historical data on crime, on crime types and where crimes have happened, to predict the future incidence of different criminal incidents. In sentencing, many states now allow judges to consider risk scores that are meant to predict the future risk of committing new criminal behavior. One common algorithm here, which has been in the news a lot and you may have heard of, is the COMPAS algorithm. It's a proprietary algorithm, so we actually don't know exactly the underlying algorithmic structure, but it classifies individuals on a scale of one to ten in terms of their predicted likelihood of recidivism, using how that person answers questions on a 137-question survey. So these are just some of the examples, and I think they raise a huge host of issues and challenges. One, which I think requires a lot of understanding from philosophers and ethicists, is whether these risk assessment tools have a role to play in the criminal justice system at all. I think some view the endeavor of predicting future risk as wrong-headed, and believe that because of this input-data, garbage-in garbage-out type of problem, using algorithms to predict future risk will only entrench or potentially
exacerbate the inequalities and inequities that we see in society at large. On the other hand, and I place myself more in this camp, while there's acknowledgement that the algorithms are often imperfect, I think it's also important to consider the relevant counterfactual. The counterfactual is not a world free of inequality and inequity; it's a counterfactual in which we have human decision makers, and guess what, there's lots of evidence that human decision makers have a big role to play in perpetuating inequalities through bias and inconsistency. So there's a need, I think, to consider what we are comparing the algorithms to: it's not a perfect world, it's humans. I think another set of design questions, which Matthew has gotten at, is that in the criminal justice system there are lots of open, unresolved questions about how to design an algorithm. If we're going to predict risk, can we consider individual characteristics like somebody's race or ethnicity, what we often call protected characteristics? If you can't, can you consider non-protected characteristics, things like education or where somebody lives, which can effectively proxy for a person's race or ethnicity? There are also complicated questions about how to evaluate whether an algorithm is doing what we want it to do. What does it mean, for instance, for an algorithm to be fair? It turns out the law has not had so much to say, so far, about how to define or measure fairness, and even outside the law there's a very lively computer science debate about algorithmic fairness, with very different definitions of fairness. In many circumstances we could look at one and say that sounds great, or that one sounds great, but it's actually been shown mathematically that in many instances it is impossible to simultaneously satisfy all those notions of algorithmic fairness. And so that requires a normative choice, by us as a society or a legal system, to choose which of those definitions of algorithmic fairness should dominate. So I think those are just a
couple of the key issues and challenges that I see in the criminal justice system. No shortage, no shortage, absolutely, that was clear from both stories. I'd love to build on this and ask Wolfgang: Crystal made the point that part of it is a story about technology, but part of it also seems to be a story about society at large, about the institutions we already have in place; part of it seems to be about human nature, with our own biases. So how do you think about that, as we have this intense debate about AI decision-making versus human decision-making, and, you know, should we replace judges by AIs or not? How much is it really about technology? I think, to respond to that, I have to go to flight level 10,000 again, as I always say, but I'm descending later, probably. In a good way? In a good way, yes, I hope so. When we talk about technology in expert circles, but also in society at large, then we very often have a distinction: here is the society, there is the technology. And that is a dangerous thing, because then we frame technology as a kind of natural disaster that is coming, and we have to build walls to cope with it, and we are not in the mode of creating the technology as a society, together with different disciplines. So I think we have to be very careful about how we talk about these things and where we talk about tensions. And I can, I think, build on what Crystal said, because we are doing some research in the criminal justice system as well, and have done recently, and what I find interesting is that when we talk about technology coming into processes, then we start thinking about what our quality measures as a society are. I had a discussion with German judges a couple of months ago, and we were talking about sentencing and about AI supporting that, and then I raised the question of explainability, which is one of the issues in AI: we say we cannot really see and explain what happens there. And then one of the judges said, wait a moment, ask me: can
I explain what I'm doing when I come to this decision and I'm not sure that I really can do that I can give a reason that is valid in the legal system but I cannot really explain what what what my motives were here and then we had a debate on what are the factors there and then German legal system and the criminal law it's not very well elaborated what the criteria are so it's very very vague and so we had a very fruitful debate on what the values actually are and you can have the same in other fields of of society we have next week or the week after a workshop with computer scientists and people from from communication science and law talking about how to understand diversity in recommender systems for the media and we want to come up with ideas what that actually is and then you have to go back and do you want as a society actually when you talk about diversity so I think that's a good thing that technology forces us to ask this hard questions about societal values and to better understand what makes human decision-making so special we are talking a lot about things like tacit knowledge and tacit norms things that we all understand because we are part of the society and we cannot really explain why we do that this way or that way because it's kind of tacit knowledge or tacit tacit norms and that is something that you cannot really now I would say built into technology that would require technology to be part of society and learn in interaction and I think we are far from from that so far so I believe that this is a twist of the debate that very often we do not really include in our conversation when we talk about this society here technology their aspect if Wolfgang is right and he's most often right and as we know that and and technology is deeply embedded in society and as we heard the president opening in his opening remarks we as societies are in a learning process ourselves how to cope with massive challenges and transformations of all sorts and based on the 
work you've been doing, following early debates around internet regulation and approaches to governance: what is currently happening in this societal learning process as we try to identify, agree on, and regulate good uses versus bad uses across different contexts? We only highlighted two examples and could add many more. What sorts of norms are emerging, and what are the dynamics around these norms as you observe them?

Thank you. I could talk for hours; I really like this. Let me go one step back. (Should I switch off the mic? You're on? Great.) Around the time digital technologies really became more present in our societies, Western societies went through a long period of privatization and liberation from old state monopolies, and we thought of the force of the internet as a form of liberalization. That idea of self-regulation, of letting the markets determine the future, we thought was a very good alternative. And we have driven this to a point where we now regard digital technologies almost as a self-driving, autonomous force; we ascribe a lot of power and agency to digital technologies themselves and to the companies who develop them. I would say that the debate we see now about AI and ethical frameworks is an echo of that: the idea that ethical principles might be good enough to give us an orientation for the future of artificial intelligence. But we need to ask ourselves whether we get enough accountability out of ethical guidelines and frameworks. I just came back from the West Coast, where you really see a change of wind: companies now begin to wonder whether they do not need a legal framework for future development. Such a legal framework could, for example, be anchored in human rights, and legislation could build on fundamental rights; it could set limits on future developments, and also make us see that, finally, it is society that shapes technology. It is not that technology sets its own rules, but we are not really aware of this; I think at the moment we have nearly lost the capability to see and to recognize how we, as societies, change technologies. So we need to turn around a bit, give up this idea of complete self-regulation, and come to new models that sit somewhere in between a market approach and a pure government approach. We need new regulatory frameworks that work across national boundaries, even though I think we cannot hope for multilateral approaches; we need something below that. The GDPR, the General Data Protection Regulation introduced in Europe, is often mentioned as a gold standard for that kind of approach: perhaps some countries can get together, build a legal framework, and export it via trade agreements.

Good advice, thank you. So, a couple of things I would like to follow up on. One is this role of ethics principles you mentioned. There is a flourishing of ethical principles around AI; I think 130 or so are out there. We have tried to map some of them, but it is becoming quite a task. On the other hand, given Wolfgang's remarks and the opening statement by the Bundespräsident, there is value to these ethical debates nonetheless, and you also make the point that we need all the different approaches and tools, probably including law, but also ethics. So if I may ask you: how do you think about these ethical principles? What is the value when these ethical norms crystallize into guidelines and the like, whether for companies, or enacted by international organizations like the OECD, or even by nation states? What is the promise, but also what are the limitations, of ethical approaches of this sort when we deal with these complex, messy problems?

Yes. Ethics and law, of course, have to be distinguished, although they are connected. Ethics is,
I would say, the explicit formulation of the implicit norms that guide, or should guide, our everyday actions and our living together. And law, which is at the core of the organization of a state or a nation, transforms some of these norms into concrete rules, the infringement of which is then bound up with sanctions by the state. So these are different things, and not all moral norms are legal norms, and vice versa, of course. But ethical guidelines for new topics like digitization can, I think, be helpful first steps, showing something that can then be transformed into law too.

Speaking of law: what is your hope, looking at your area of research, for how the law will evolve in this dynamic situation, where maybe ethical principles lead the way? Where do you see the promise of law in these debates, where we are facing this shift from the human towards the machine?

Yes, I think law has a very important role to play here. I share Jeanette's general sense that self-regulation is probably not going to be a sufficient solution, and that there will have to be legal interventions. The law is instrumental in that it will undoubtedly, by deciding what to permit and what to prohibit, shape the behavior of governments and private companies in terms of how they design algorithms and how they implement them on the ground. The law, I think, also has important expressive functions, maybe related to ethics: if the law allows something, then citizens, members of society, will view it as more socially acceptable. So the law has a big role to play. Coming back to the criminal justice system, though, I think there are many ways in which the current law, certainly in the United States, falls short for a lot of the new challenges that might come with algorithms. To give you some examples: many people are troubled by the disparities that can emerge when you use an algorithm to make decisions, which could be because of the data or the structure of the algorithm. Now, it turns out there are probably pretty limited legal remedies for addressing those disparities under current US law. A finding of discrimination under the Equal Protection Clause of the US Constitution would require a showing of discriminatory intent or purpose, and that is hard, because when an algorithmic designer chooses to use a variable or certain types of data, there is probably often no discriminatory intent or purpose. And yet, because so many variables are proxies for things we are troubled by, there is often no direct legal remedy. So this traditional requirement in the US Constitution and case law of showing intent and motive is often ill suited to addressing the new types of problems that algorithms can introduce. Moreover, it has actually been the case in the US that many have interpreted the case law on discrimination as prohibiting the use of characteristics like race or ethnicity: you cannot use them in any way, shape, or form. But the reality is that, because of the complex statistical relationships underlying many variables, I and other computer scientists and economists have written and shown that those proxy effects we may be worried about are often created precisely because of the prohibition on the use of those characteristics, and that once you take the statistics into account, you may actually want to use protected characteristics in certain forms in order to remedy those disparities. So we have this problem right now where, I think, the law is pushing companies and governments to develop versions of algorithms that may actually be counterproductive to our larger societal goals of equality and opportunity. And, to the earlier point about human decision-making, the law often does not consider the counterfactual in a very easy way. It often seems to require perfection from algorithms, explainability, but, as you point out, what is more of a black box than what is in a judge's mind? Perhaps the
judge's mind is more of a black box than a neural network or other forms of machine learning. So I worry that the law, by sometimes requiring perfection and not considering the counterfactual, will chill and deter what may be innovative and good uses of algorithmic decision-making.

So the relationship between technology and law, and between law and ethics, is also very complicated and bi-directional, unintended consequences included.

Still, if I may: some of you have put these on a par in a way that does not convince me, as if any decision taken by an algorithm were plausible. The fact that we don't always understand algorithms, and software that is guided and steered by algorithms, concerns me greatly. The situation in America is slightly different from the situation in Germany: when a German judge passes a sentence or judgment, he or she has to justify that decision. Not every ruling or decision has to be accepted; people may have a different opinion. But as a rule, as far as the tradition in Germany is concerned, you do have a very extensive duty to justify your sentence, your ruling. And that is what is lacking when we talk about algorithms. So one of the questions that we need to discuss, I believe, is this: is it conceivable at all that algorithms, that the control of algorithms, can become or be made more transparent? Of course not towards each and every individual person, but perhaps with regard to those who act as representatives of the government, the body in question responsible for protecting the rights and freedoms of the individual. Can you hear me? And a second remark that I'd like to make against the backdrop of what has just been said. It is good that we have a debate up here on the rostrum, so to speak, about the ethical principles of digital transformation. But what struck me, and I tried to refer to this in my introductory remarks: when I bring together a group of experts in my office briefing me about the technological potential of AI, I have an idea after these talks of what is doable, what is conceivable. But when I hold talks about the ethical limits of digitization, that brings together a wholly different group of people, because as a rule I do not meet IT experts or engineers, but social scientists, philosophers, political scientists. Which is an indication, to some extent, of something that keeps me deeply troubled: this debate takes place within closed circles, closed communities. That is to say, we have a debate about the ethical limits of the digital age, about limits that we should not surpass or overstep, and we have a similar debate about the functioning of democracy, but it is not carried beyond the respective community. Please tell me if I'm wrong; I'm happy for you to point that out to me. But as I see it, the two communities that I have been mentioning, the tech community on the one hand and the more philosophical community, bringing together social scientists and philosophers, on the other: we don't have a discussion that brings both groups together; we haven't been able to link up those discussions. Is that impression of mine correct? Would you agree with it? Is it limited to Germany, or would you say that this is also transferable to the debate in the United States?

Let's hand that right over to MIT. Can you share your thoughts? And then I would like to open it up for a number of questions, so be ready with your questions.

So, the Schwarzman College of Computing, which was announced last year, is intended to get at just this issue. Much of the technology is obviously being created at MIT, and we recognize that there has to be a bridge between technology and the humanities and social sciences, in an intentional, deliberate way. Part of why the college was established is that tons of students are interested in computing; they're doing it, they're coming
in wanting to major in computer science, but many of them don't want to be only computer scientists. They want to apply that knowledge to something else, and they want it to be guided by some domain knowledge outside of computer science. So the goal of the college is, eventually, and we are beginning to see this, joint blended degrees: computer science and economics, computer science and urban studies, computer science and music. Not all students are doing this, but the interest is great, and it is intended to allow for this connection in a more organic way from the beginning, so that students will have the kind of skills that mean we won't be talking about disparate communities. Students will have enough of an openness, at least an exposure, to understand that this is what it means to be a computer scientist, and conversely, for my own discipline of political science, that this is what it means to be a political scientist: to know something about this. So all of us are going to have to learn more, and be open to learning more, if we are going to deal successfully with this issue. One other thing I'd like to say about it: we're starting to have these conversations on campus, and they are not easy conversations to have. As much as we try to be collaborative, we've really had to work at it. We may be using the same terms, but we speak different languages, and it requires patience. So part of what we're doing is also learning some other principles, of generosity and patience, as we deal with one another. Because if we want to solve this problem, to deal with technology in a way that we all want to see, then that's what it's going to require: some other human qualities that we have to bring to bear for this to happen. And this is especially important for our undergraduates, since many of them will be going into leadership positions. They know the technologies; we need them working on congressional staffs. If you all saw the hearings with Mark Zuckerberg: the congresspeople didn't know what Google was, or they didn't know the difference between Samsung and Apple; they didn't know anything about the technologies. If they don't understand how to open their phones, how can you imagine that they can be responsible and entrusted to do the kinds of things that you all are describing? Some of what we also need is for our students to be able to play those kinds of roles, precisely because of that. But we don't want them to do it knowing only the technology; they will also have to understand economics, they will also have to understand political science, and so on. So that is the task of the college, and we're just getting started, so stay tuned.

It's exciting, it's exciting, thank you. Okay, let's open up for a few questions. Becca, our microphone runner, is ready and fast on her legs. Who has a question? And please end with a question mark, that will be good.

First, thank you so much for all the interesting insights you shared today. My question regards the fact that a lot of you mentioned today that there is a sort of urgency to craft tech-specific ethics regulations as soon as possible. So in a way this is really a moral discussion with a deadline. When would you say is this deadline? When do we have to formalize our thoughts and put them into law?

Was the question for me? It is now? Yes, sorry. Of course. I think the deadline is not somewhere far ahead; it is just right now. But there are already a lot of ethics guidelines being written. We have them at the national level, different ones, in Germany for example; then there are international levels, like the High-Level Expert Group that was tasked by the European Commission and wrote something; and then parts of these ethics guidelines have, for example, already been extracted, and the G20 signed on to them. And so there are
already guidelines. But I think the guidelines are only the first step, and then of course the next step is to transform them into law, as we have heard already with the Datenschutz-Grundverordnung, the GDPR. So yes, it is right now, and we should go further ahead, but something is already happening, I think.

Of course. Maybe I can add a legal perspective to that, because one of the problems of the law is that the function of law is to provide stability, and what we need here is some flexibility as well. So what we are struggling with as legal scholars is to find ways to make law more flexible: to have constant evaluation, to have sunset clauses, and things like that, so that we do not have to wait. I think it was Susan Crawford who coined the marvelous sentence that we have to regulate things that we don't understand. We can't wait until all the lawmakers have understood what is actually happening; we have to act before that. But then we need different instruments, especially when you take into account what you said: even when lawmakers really try hard, the problems are really high-tech, only a couple of people really understand what's happening there, and even the best member of parliament cannot be an expert in this field. So we need constant evaluation, and mechanisms to deal with the fact that, even as researchers, every week I gain a new understanding of how algorithms interact with society; every meeting I have with software engineers, I think, okay, it's a little different than I thought before. To create law on this basis is, I would say, a fundamentally new challenge.

But if I may add to what you've just said: the situation is changing. When you take a look at German legal culture, it has always been based on the assumption that a law that is passed is valid until kingdom come, for eternity. When we now look at the area of internet law, something interesting, and quite difficult and complicated, is happening in the relationship between the one making the law and the public; the public comments about the Network Enforcement Act in Germany, for example, are an indication of that. In some areas of legislation we have already reached a point where we can no longer provide an eternal guarantee for the legislation that is being passed. We are taking one hesitant step after another, carefully trying out how the intervention is going to affect reality in the future. This careful, cautious, tentative approach to legislation that is taking place right now is not something that is being greeted with enthusiasm, and I do understand that. But there might be no alternative to that approach: trying, time and again, to refer back from the instruments to the technology and vice versa, and to amend things when necessary.

With such an amazing group of people, we could discuss just one question for twenty minutes. Do you want to jump in quickly with comments? Jeanette?

I thought perhaps one way forward could be to pursue a more procedural approach to these problems. For example, think of ways of holding companies accountable for the kind of technology development they try to bring to the market: introducing auditing requirements for certain types of algorithms, or making it mandatory, in certain areas, to use only machine learning systems that are self-explainable, that explain, in at least basic ways, how they come to certain recommendations and predictions. That seems to be a way forward, rather than relying just on rules.

Maybe I can also jump in here. Professor Gasser mentioned that there are about 130 sets of ethical principles being presented by different companies and so on and so forth. I think what we also really need is a kind of rationale, and this is a plea for philosophers, you
know, a philosophical justification for some of these principles. For example, a lot of these principles say things like: we need explainability. Why do we need explainability? We've heard some of the panelists asking this question, and various others. And so, along that line, I'm very sympathetic to what Professor Hofmann is saying, this idea of a human rights framework, which says that we need to look towards the goal: what are these algorithms for? Fundamentally, they are about promoting human well-being. We want to make sure that we have a harmonious society, one that works for all of us, and a human rights framework, I think, can really move us towards that goal. There is a rich tradition, a rich literature, on the philosophical justifications of the different rights, and they go beyond just discrimination: there are positive rights, rights where it's not just about making sure that you don't discriminate, but about making sure that your technologies also work to help people, and so on and so forth. The other thing about human rights is that they place an obligation on everybody. It's not just an obligation on the engineers, not just on the company, not just on the government; it's an obligation on all of us to collectively make sure that these technologies work well for everybody.

I promise to be very brief; I know there are a lot of hands up. This is such a fascinating question. I wholeheartedly endorse the perspectives others have raised, especially, Mr. President, that the laws must be adaptive, they must be flexible, because we are still learning how the algorithms work. And so we also have to study, when a new regulation goes into effect, what that means for the types of algorithms we see flourish afterwards, and what types of algorithms disappear as a result. To the point about explainability: I think we all have this desire to understand what the algorithm is doing, and so we often shift towards regulation, or principles, requiring that the algorithm be explainable. The complication is that there is emerging work from computer science and economics showing that when you force an algorithm to be explainable, you will generally choose a simpler algorithm, because it has to be easier to understand. But a simpler algorithm, as it turns out, can in some contexts lead to both less efficient and less equitable results. Which again raises a conundrum that I think no field alone can address, but which reveals that there are inherent trade-offs every time we make a choice like explainability, and we have to confront those trade-offs and decide how we weigh competing values, which are inevitably going to be at stake.

Great. So let's collect three questions and then respond. One question here, and then maybe one from over here in this area.

I'd like to add an observation. I think it is also observable on the podium that we are missing economists, and we are missing behavioral scientists, and it seems to me that these two components are crucial to understanding the impact that AI has had, and will have, on our society and on each of us. Why do I say this? Because AI has enormous economic potency; in this country, the majority of productivity gains come from AI. And why is it that Facebook and Google and other companies have been doing, undaunted, what they have been doing? Exactly because of that. So that is one reality we have to face, and this reality is deeply immersed in research as well. Where is most of our funding going? It is going to computer science, to computer engineering, and then we have some alibi, excuse the term, addition of
the social sciences and, if we are lucky, the behavioral sciences. We have to stop this. It is all very well that we've heard a lot from the panel, but, I'm sorry, there is no level playing field between the behavioral sciences, with all the psychological dynamics that are opened up by AI, and computer science and computer engineering, unless we change these funding structures. We've heard a lot about the necessity for regulation of companies, but these funding structures have enormous consequences. President Steinmeier was asking whether there is any example of bringing the disciplines together at the beginning of such an enterprise. I would say yes: at the Ruhr University Bochum there is a competence cluster focusing on cybersecurity which tries to give level-headed, equal importance to the social and behavioral sciences on the one hand, and to economics, computer science, and computer engineering on the other. So I just hope that in the future such discussions move beyond the very important contributions from philosophers, ethicists, and lawyers, to also take a broader view.

Yes, please go ahead. I'll keep myself short. As we are at the German-American Conference right now, I just wanted to ask: how does the transatlantic relationship help us in solving all these challenges? What is needed for an effective transatlantic relationship, especially the German-American one, to solve these challenges together, as one world and not as separate states?

Okay, maybe one or two more. Yes, please. I have a question, talking about social media: do you think the time will come when there will be a reliable algorithm to identify hate speech?

I'm one of those computer scientists writing those messy algorithms, and I know that there are people in my field who think very critically about this, and that there is a lot of discussion. So I can assure you there are people behind the curtain talking about these things. How can we reach out to other people who are thinking about this? That is my short question.

Thank you all, this was delightful. I would like to understand: are we, the human race, being empowered by technology, or are we powering technology as humans?

We have twelve minutes left, and I'm Swiss, as I said, so I want to end on time. What I would suggest is that we do a closing round, and you each pick the question that you would like to address, but put it into the context of your work and what we've discussed here. So we have the question of transatlantic relationships; we have the question around social media and the role of technology in creating a safer environment, using the example of hate speech; and we have this ultimate question: is technology empowering people, or are people somehow here to empower technology? Those are a few of the themes. Perhaps we start with Eva.

Yes, maybe I'll take the two last questions. Of course, I think that technology should empower people, but for that human-machine interaction it is important that we understand each other, as we already discussed. Maybe I'll just hint at two aspects that philosophy can contribute here. One: it is not only about explainability, as you said, and not only about the ethical or moral justification of why it is so important to explain, but also about seeing that morality, for example, is about reason-giving. The whole moral validity, the validity of moral norms, depends, I think, on the fact that they exist between beings that can give reasons and understand reasons. So one question would be: do we want algorithms as judges, which cannot give reasons in an emphatic sense? And another topic would be trust, because the ethics guidelines often hold up trustworthy AI as a claim, and I would be skeptical whether this is the best aim, because trustworthiness presupposes being a moral subject. Because trust means
to believe that someone will hold to his or her commitment to do something, and this is only possible for moral subjects. So AI systems cannot be trustworthy agents or subjects.

Thank you, thank you. So I'll take the question on hate speech. There are some attempts at using machine learning algorithms to detect things like fake news. But I want to give you a rather grim picture; call it Election 2.0, since we are coming up to another election cycle. There is something called a deepfake, which can produce videos that superimpose your face onto another video and then make you appear to talk and do various things. And there is this theory that you tend to vote for people who look like you. What people are finding is that deepfake videos are being created that superimpose your picture onto a candidate's picture, so that the candidate looks like you, and supposedly that is going to influence your voting behavior, because you are more likely to vote for people who look like you. That is going to be very worrying in the future, and the question is whether we will get to a point where it becomes very hard for human eyes to detect those differences.

I'd also like to pick up the question on hate speech. What I find really good about this question is that we have so many examples showing how deeply ambiguous we are about such wording. Facebook once gave me the example of the term "bitch": it can be really dismissive when you call a woman a bitch, but nowadays, in some circles, it can also be appreciative; women might refer to each other as bitches. How is Facebook supposed to regulate wordings that have such different meanings? That, I think, shows the limits of technical filtering of language. Language is changing all the time, and it differs very much across cultures, so there really are limits. Another point, if I may: the question of empowering versus disempowering. I really like this question because it implicitly refers to the autonomy of human beings. I think it's a mistake to think autonomy needs to be defended against technology. Technology in many ways enhances our autonomy: think of flying, think of your watch. We coordinate as societies through these technologies, and at the same time they discipline us. So it is not an either-or: technologies and human beings are not opposites; it is a matter of how we structure and shape the relationship between the two.

Dean Nobles? And is it okay if we end with you, Mr. President?

I'll just take the question from the computer scientist, who said these kinds of conversations are also happening among computer scientists, but that we need better connections with others who are thinking critically about this. Obviously, education plays a hugely important role here: getting students of different disciplines, early on, to work together and learn together in a way that addresses these questions. The challenge of all knowledge is making sure that it doesn't stay siloed, and that we work in a truly collaborative way. It seems to me that's the challenge for the twenty-first century.

I'll pick the comment, if I may, on funding and interdisciplinary research. We talk a lot about interdisciplinary research, and we need it to solve problems, but the academic system is not really designed to cater to that need. We still have problems with it, and I constantly get phone calls from colleagues who want to apply for
project funding next week and they said oh we have just seen we need some ethics in it and we need a lawyer or something like that would you be available and uh normally i say now no because it has to be part of the project question and not just an icing on the cake that has already been baked it's it's um that makes no sense and so i think we have issues here in the academic system and maybe 30 seconds on the transatlantic issue i think it's it's really helpful and and good for this questions that there are really stable research relationships between our american colleagues and and the research in germany it's it's uh really great and that survives even if there is a political winter or autumn uh that we have this relationship and to solve these kind of problems are things that's extremely helpful yeah i'll just follow up also on the the research question to the excellent question over here i failed to mention i am actually an economist as well as a lawyer and i would welcome uh many more economists studying this area and i hope that the funding structures as well as the incentives do promote that greater collaboration i think the computer science community is doing amazing work it's often siloed from what the economics community is thinking about what the legal community is thinking about and so i think initiatives like what d nobles is doing is probably a really great way of bringing people together and to education there is such a need for infusing this type of learning in legal systems um and i don't think that us law schools at least have really been at the forefront of this in fact many of the decisions you read from state supreme court judges who are ruling on the use of algorithms and making important case law have explicit acknowledgments i'm paraphrasing here but i'm not paraphrasing so far of the judges in this case were limited in their decision making because they didn't understand how the algorithm worked well that's a really big problem and so we 
need to train the lawyers who will be deciding these cases working on behalf of clients who are both creators of algorithms and individuals adversely affected by algorithms to understand how algorithms work mr president you have the final way yeah thank you thank you indeed i'm not attempting even attempting to respond to all the questions that were put to us but let me do begin by the following remark the debate that we have just been witnessing with the participation of the audience would undoubtedly be easier in the future if we were to keep it clear for many misunderstandings if i may come back to the question that you put at the beginning why is there no economist amongst the people here you know the economic potential of it and artificial intelligence is being seen sufficiently i believe so if you take a look at the expert together here you will undoubtedly find confirm that everyone is aware of the economic potential everyone is aware of the technological potential everyone is aware of the potential that exists when it comes to fighting poverty fighting disease fighting the impact of climate change if we want to be successful in those areas we need experts at the top level and we in germany intend to participate in that development just as much as you do but that is a kind of advance remark i want to be very clear what i've said doesn't mean that we end up in an age of unbridled regulation crazy approach towards crazy about regulation when you look at the field of tension between new technologies on the one hand and what is the constituent element of our societies and that is democratic decision-making processes in western societies there is a lot of tension shouldn't we make that also topic of the discussion every once in a while and that is why i suggest it to make that the topic of our discussion today so no one should assume or be afraid that this inherently entails a secret wish to in some way influence the development or to slow down the developments 
in the field of AI and technologies of digitization but that's not my intention really but there is this field of tension i mentioned and we have to focus on it we have to deal with it and this is equally true for all those who participate in the process of technological development of these means of communication this should not be left to philosophers or individual groups that has to be viewed as a topic for all of us and if we pursue such an approach we will i believe reach a point and that has become obvious here as a consequence of the discussion where we don't leave it to appealing to the morals and each of every individual and his or her responsibility but we need to have a debate across borders whether there should be limits to technological process that we should not pass because this at the end of the day is what it is all about it's difficulty enough when you look at germany in the united states of america but it would become even more difficult when you think about those countries that have a completely different social system approach but we need to have that debate we need to have it with a country like china and in saying that i'm not cherishing any illusion about us having in 10 or 5 years kind of you and charter on artificial intelligence we won't get that but nevertheless we should engage in that kind of a debate just as much as we have a debate with china although we have different views on the issues of bioethics and genetic engineering we are not in agreement on these issues but nevertheless we have succeeded in defining some limits or ceilings or restrictions thus i am not discouraged you know in any way when i look to the possibility of such a debate although it's going to be a complicated one but you know this was mentioned we need is a transatlantic debate on this subject matter too apart from all the topics of the day the conflicts of the day and i don't want to downplay their importance but we have to tackle the question of the importance 
of the freedom of the individual of the drama cateculture in the states of the western world and we can't ourselves amongst those just as well as the united states of america and we need to have that debate amongst the western world first and foremost this is why i would wish to ask to have the opportunity time again as i have been trying to seek it to my visit here to engage in discussions and debates that do not solely focus on the present conflicts trade conflicts being just one case in point but to have a transatlantic dialogue about the issues that are really at the essence of what links us and affects us in the years to come and will be affecting us in future too i very much look forward to my next visit to boston and to have it and thank you for having come here