Hello everybody. Good evening. I have to confess it makes me a bit nervous. I haven't been on a stage in a public space like this in such a long time and I'm not used to not seeing anybody anymore. On behalf of the Humboldt Institute for Internet and Society and the Federal Agency for Civic Education, I welcome you all to this evening's lecture with our highly appreciated guest Judith Simon and our host Tobi Müller. This is a special evening for us. The last time we could physically convene for this lecture series was, believe it or not, 18 months ago in February 2020. Since this is the first time we can hold the lecture as an on-site event after one and a half years, I'm taking the opportunity to remind you of what our lecture series Making Sense of the Digital Society is about. First of all, it addresses the large-scale questions of digitalization such as power, capitalism, democracy, surveillance and their transformation. Second, it presents leading, mostly European thinkers who provide their specific views on relevant questions of our time. Among them were Manuel Castells, Elena Esposito, Marion Fourcade, Nick Couldry, Shoshana Zuboff, José van Dijck and, recently, Jan-Werner Müller. All these thinkers tend to counterbalance popular, often US-American-centric perspectives which have come to dominate the academic discourse on the digital society. Third, the lecture series is meant to be held in an accessible format able to reach beyond academia and to include a conversation and Q&A. The series began in late 2017 and we are glad that ever since we've been able to welcome high-profile speakers and their inspiring ideas, and tonight of course is no exception. Apart from one, all the previous talks are archived and can be found on the website of the HIIG in case you're interested in watching or re-watching. Tonight's lecture is as corona-proof as possible. Despite that, sadly, it might be the only one in person this year.
So please make the most of the evening and join us for food and drinks after the talk. There might be another lecture in November this year, but as we've all learned, planning can be a very volatile business these days, so this is not 100% sure yet. Before I hand over to Tobi, who will properly introduce Judith, a short note on our own behalf. Both the Bundeszentrale and the HIIG have worked hard to make the stressful life of the modern swing voter easier by offering specific tools to facilitate the choice in the upcoming federal and state elections in Germany. The Bundeszentrale provides the famous Wahl-O-Mat, which does not need further introduction since it seems to set new records in popularity with each election. The HIIG offers a Wahlkompass, or electoral compass 2021, which gives you detailed information on the parties' electoral programs regarding digital policy issues. Have a look at the HIIG's website to find out more. Tonight's lecture will address a highly relevant topic, the ethics of AI and big data, presented to us by one of the leading European researchers in this field. My thanks go to our partners and the organizers who've made this event possible. And now, without further ado, please welcome Tobi Müller, who will introduce Judith and chair the evening.

Hello everybody, good evening. I guess I'm the second supporting act tonight. This really feels like a pop music venue here, so if this was a pop music gig tonight I would have to tell you to move closer to the stage, but you know, we all cannot do that tonight. Thank you so much, Jeanette Hofmann, for your introduction, and thanks so much for having me here as a moderator in the fourth year already, I think, of our series Making Sense of the Digital Society. Thank you, Humboldt Institute for Internet and Society, and the Bundeszentrale für politische Bildung, the Federal Agency for Civic Education.
Tonight here at Spindler & Klatt, it used to be a club actually, and now it's something else, right at the shore of the River Spree. For all of us who are here live tonight, in presence actually, this is quite a feat; very glad to be here. I have actually no idea what nudged you, you, our present audience, and the audience at home with ALEX TV and on the respective streaming websites, to attend this event. It is probably safe to say, though, that it wasn't an elaborate recommender system that suggested the series or this speaker to you. But could it be that I am wrong to think you all came because of old-fashioned email lists, or, even more old-fashioned, because you had been here before, live, in one of the venues we've been at over the last four years before the pandemic started, and that you have found this series just as inspiring as I still do, even in our fourth season? But maybe you actually have been targeted by social media or by search engine ads, or maybe the YouTube channel of this event has gathered so many views, in some cases I can actually confirm this without too much bragging, that this series is more likely to turn up high in the results of search engines. Until now I've talked about various different forms of artificial intelligence and big data: recommender systems, you know, that were popularized by Amazon more than 25 years ago; Netflix's rise to its enormous popularity of today is largely due to their recommender systems, actually, not so much because of their films, I think; search algorithms, ad targeting, and probably much more. This is just the tip of the iceberg of what we talk about when we talk about AI, artificial intelligence, and big data. This is exactly where tonight's guest comes in. She will differentiate between different types of AI and big data, and then go on to ask about the ethics, the ethics behind these different systems, especially the built-in bias, the ethical flaws of these types of AI. And the ethics of AI
regulation or improvement, we'll also talk about that. I will introduce you to her in a bit more detail in a minute. So just a quick word on the structure of the evening: after this second supporting act, as I already told you, there's going to be the talk of our renowned guest, of course, for roughly 40 minutes, I think. We'll have a one-to-one conversation here on stage for maybe about 20 minutes, and then it's your turn, here at the live venue and at home. We have a participatory tool called Slido, and you can also ask questions on Twitter, I think, so we're going to have a mix of live questions here in the audience and, you know, through digital tools. And in pretty much about 90 minutes, that's going to be the end of our session tonight, and it's going to be drinks at the river shore. So back to our guest from Hamburg, and it's actually quite a miracle she's with us here today: you know about the strike of the Deutsche Bahn, and all other kinds of transportation are kind of hard to get these days, at any time of the day. We even thought about waterways; you know, the Elbe actually connects to the Havel, and this would have been one possibility to get her here. But she managed. So she came here from Hamburg, where she's a professor of ethics in information technology at the University of Hamburg. She's also a member of the German Ethics Council, the Deutscher Ethikrat, since 2018, and she was part of the Data Ethics Commission of the German federal government, the Datenethikkommission, which published its report in late 2019. She single-handedly edited the Routledge Handbook of Trust and Philosophy, which was published about a year and a half ago. Her academic background lies in psychology, then philosophy, which she studied in Vienna, among other places, and at an early stage of her career she also tested software for its usability, before continuously working on the intersection of philosophy, tech and science at various universities from Paris to Stanford and Barcelona. The
longest trips nowadays, or so it seems, as I told you already, are those from one German city to another, from Hamburg to Berlin. So glad she was able to make it; she's with us tonight. Please welcome Judith Simon.

I'm a bit smaller, so I guess I have to move the microphones. I'm very happy to be here as well. I can't see you at all, I have to say, right now, and it's a bit strange, after one and a half years of just doing everything via Zoom, to be in front of an audience, even if you don't see them. So without further ado, let me just tell you what I'm going to talk about today. The title of my talk, as was announced, is the ethics of big data and AI, and I think I need to open a water bottle, just give me a second. The outline: I'll first briefly talk about what ethics has to do with AI to begin with, and why we should even think about ethics in the context of artificial intelligence. I'll then very briefly talk about how I view AI and big data, and then point you to some ethical challenges for and of artificial intelligence, before I end with some conclusions. In recent years, quite a number of ethics and policy papers were published that often made reference to ethics. The Data Ethics Commission's report was mentioned before, but also the High-Level Expert Group on Artificial Intelligence set up by the European Commission labeled their report Ethics Guidelines for Trustworthy AI. So there seems to be an interest in talking about ethics when you talk about artificial intelligence. Why is that? Let me first point you to, you know, what ethics is all about, to get you started. Well, first of all, ethics asks very fundamental questions about what is good and what is bad, put differently, what is right and what is wrong. And if you look more at the agent who is supposed to do something, you would ask: what can we do, and what should we do, or what must we do, or what can we not do, or must we not do, and for what reasons? Right, what are the reasons for considering
something good or bad? If you look from this angle at artificial intelligence, basically the first question is: what is good and bad AI? Or, put differently, what can we do with AI, and for what reasons, or what should we not do with AI, and for what reasons? These are the basic ethical questions that you may want to ask in relation to AI. And if you strive for something labeled good artificial intelligence, and that's already a premise that I'm making, then to my mind scientifically and technologically good AI is necessary, but not sufficient, for ethically good AI. Why that's the case, I'll get back to later on. And in some instances there may be ethical reasons for not using AI even if it's near perfect, right? There may be reasons for not using AI, for instance, in warfare, even if it was better at discerning, let's say, soldiers from civilians to begin with. So there may be other reasons apart from it being scientifically or technologically good. So, I'm teaching ethics in information technologies at the University of Hamburg, and when I'm talking to my students, I try to disentangle three different roles of ethics for information technologies, and these are the ethics of the profession, the ethics of use, and the ethics of design. Let me walk you through each of them. The ethics of the profession is the first angle of looking at the intersection of computer science, broadly conceived, and ethics. It usually looks at the designers and developers of software and asks how they should behave ethically in designing and developing software, and you can find lots of codes of conduct. This is the one by the German Informatics Society, which gives you an indication of how you're supposed to behave as a, you know, good member of your profession, if you want to live up to these ethical guidelines. And this is of course the oldest way; you may compare it with the ethics
in the medical field, where there are also guidelines on how the medical professions are supposed to behave. The second angle of looking at ethics in relation to information technology is the ethics of use, and here you don't look to the designers and developers, but to the users and to the usage of information technologies. I just pointed out some questions there to give you an idea about what type of questions you may want to ask if you think about ethics in the use of information technologies. You may ask: should individuals be allowed to post racist comments online, or what should I do about this? Or you may ask: for what purposes can we use customer data? Or: how should governments protect citizen data, and how should we weigh different values and interests? What this already points you to is that users can come in very different forms: they can be individuals, they can be companies, but they can also be governments. What makes them specific is that they have not been developing the technology, but they are using it for specific purposes, and here ethical issues arise as well. The third and most recent way of thinking about ethics in relation to information technology is through the ethics of design, and here you look neither at the designers nor at the users, but into the technology itself. Basically it boils down to two different tasks: one is the ethical analysis of existing technologies, and the second is the ethical design of novel technologies. Underlying this, I mean, you must first ask how you could even analyze a technology, an artifact, a tool, from an ethical angle, and the underlying idea is that computer ethics, as the domain from which, you know, AI ethics and all this may come, should not just study ethical issues in the use of computer technology, but also the technology itself. What is underlying this is of course the idea that technology is not neutral, that computer systems and software are not
morally neutral, and that it is possible to identify tendencies in them to promote or demote particular moral norms and values. If that's the case, then you can analyze technologies for how they are affecting values and norms, how they may be either strengthening, for instance, privacy, or undermining it. The second task: if technology is not neutral, you may as well strive for designing technology in a way that conforms to societally held values, and that has been a topic in the field called values in design, or value-sensitive design, already since the 1990s. Here I have a quote from the game designer Mary Flanagan. She writes: if an ideal world is one in which technologies promote not only instrumental values such as functional efficiency, safety, reliability and ease of use, but also substantive social, moral and political values to which societies and their people subscribe, then those who design systems have a responsibility to take these latter values, as well as the former, into consideration as they work. I'm not going to go into details, but just to give you a pointer: of course, you know, the question is how you get this done. The idea in values in design and value-sensitive design is basically to account for values such as privacy, transparency, fairness when designing and developing technologies, and of course, ideally, you collaborate between social scientists, philosophers and computer scientists to do that. We can maybe talk about this later, but that's partly what we're also doing in Hamburg. This is very different from what Virginia Dignum has called ethics by design, which is about the technical integration of ethical reasoning capabilities into autonomous systems. This is what I'm not talking about. What I'm talking about right now is rather: how can we make sure that certain values, but also rights that we have, are still valid and accounted for when we're delegating
certain decisions to automated decision-making systems. So, to summarize, these are the three different viewpoints of ethics in IT: you may either look at those who are developing systems, at those who are using them in the different forms in which they come, or you may want to look into the technology itself. Let me now move to artificial intelligence and big data, because it's a bit peculiar, and it is a bit of a challenge also to how values in design can be conceived. So, as a philosopher, you know, if I need a reference, I'm always looking to the Stanford Encyclopedia of Philosophy, because it gives you quite concise short notes. This is a definition of artificial intelligence, which is characterized as the subfield of computer science devoted to developing programs that enable computers to display behavior that can broadly be characterized as intelligent. Most research in AI is devoted to fairly narrow applications, such as planning or speech-to-speech translation in limited, well-defined task domains, but substantial interest remains in the long-range goal of building generally intelligent, autonomous agents, even if the goal of fully human-like intelligence is elusive and seldom pursued explicitly. And as such, this is also what I'm least interested in; I'm much more interested in the very mundane, everyday usage of what's now called artificial intelligence. If you look into the history of artificial intelligence, there have always been lots of summers and winters: lots of periods when there was a lot of hype and interest in artificial intelligence, and then again phases in which both the interest and the funding were decreasing. Right now we're living again in a current summer of AI, which you may see from, you know, how funding is spent on artificial intelligence and all the hope that surrounds it. And the underlying reason why there has been another progress, or shift, in what is now conceived as AI is the existence
of quite massive amounts of data that can be used for statistical analysis. So if I were to summarize: the current debate about artificial intelligence very often focuses on machine learning as a specific type of statistical analysis. So the core of many of these things that are now being captured under the heading of artificial intelligence is basically statistical analysis of big data, including machine learning, for the purposes of pattern recognition, classification, prediction and decision support, such as in automated decision support systems. I just put some logos in here to give an idea: of course, you know, machine learning in AI is in speech recognition, in recommender systems, in all sorts of search algorithms, and of course also in facial recognition, if this is used in cameras. And I'm going to get to this case: this is a quite famous, or infamous, software called COMPAS, which is used to predict the likelihood of somebody reoffending; I'm going to get back to that later on. But what I'm trying to get at: in all these cases, many of these systems are based on data, and this data is being analyzed in order to classify and to support decision-making processes. What you can already see from just these, you know, very few examples, which I pointed to through these logos and pictures, is that there is a vast diversity, complexity and dynamics of both the technologies, but also the very different contexts in which these technologies are being developed or applied, and as a consequence, of course, the ethical issues are also quite diverse and complex and changing over time. What I'm therefore advocating, both in trying to understand ethical issues related to AI and big data, but also when it comes to governance in these domains, is an ecosystems perspective on data and AI, because very often we're dealing with data flows between different actors, and there are very different junctions and points we need to look at, where
either ethical issues emerge or where there may be soft spots for governance. So let me now turn to some specific ethical challenges of and for AI. The first issue that comes to people's minds when they think about AI, especially when we're talking about machine learning based on large amounts of data, are challenges to privacy. This is a graphic from Wolfie Christl, from his publications, looking at the data brokerage system that underlies online marketing and that basically makes us very transparent. So all the data that is now being used for classification, for recommendation, for search engines, especially for recommendations, is based on very fine-grained profiling of what we are doing online: of our whereabouts, where we're clicking, what we're liking, etc. And on this basis you do lots of predictions on very sensitive topics. I'm not going to go into the details, but that on the top right is a graphic from a publication by Kosinski, which was also quite infamous, in which he was looking at how well he could predict ethnicity or gender from just very few Facebook clicks: basically, what you liked on Facebook was highly predictive of your gender or ethnicity. So what you're doing online is leaving lots of traces, all this is fed into these data-based systems, and it's a challenge to privacy. And of course, you know, it shouldn't come as a surprise that facial recognition is yet another technology that both relies on machine learning and AI, and is also highly invasive in terms of your privacy. While privacy is the first value that often comes to people's minds as being infringed by these data-based technologies, the second has to do with the domain of bias and fairness. Let's stick to facial recognition for a moment. If it works perfectly, it means that people know your whereabouts: if you imagine a world where you have cameras all over and facial recognition technologies employed, it would
basically mean you're quite transparent to the state in terms of your whereabouts. The problem, however, is that they are not working well. That may be good, but it may also be bad, right? What you can see on the left-hand side, this is from a publication from the MIT Media Lab, from Timnit Gebru and Joy Buolamwini. What you can see, for the major software packages that do facial recognition, from Microsoft, Face++ and IBM, is that they are close to perfect when it comes to white male faces, but the accuracy goes down massively for dark-skinned female faces. So there is a certain bias in the accuracy, in the sense that some people get more easily recognized, and for others the error ratio is just much, much higher. So you may think, well, you know, that at least gives me a bit more privacy, possibly. But the problem is, of course, if action is taken based upon this recognition, such as in the Berlin Südkreuz trials, but also if this is used for criminal investigations, being re-identified or being falsely identified may really pose lots of issues for groups of persons who have already been disadvantaged in other fields. Let's stick to bias and fairness for a moment, because of course these issues with bias in automated decision-making systems are not only the case in facial recognition software, and they may in principle be solved, because the source is very often the training data, but not always. A similar thing happened with an algorithm that distributes people to different types of care and therapy in the United States, and here's a quote from the article: the study, published in Science on the 24th of October last year, concluded that the algorithm was less likely to refer black people than white people who were equally sick to programs that aim to improve care for patients with complex medical needs. Hospitals and insurers use the algorithm and others like it to help manage care for about 200
million people in the U.S. each year. So it's massive, right? The impact of this software affects 200 million people; it's not just a minor thing, it's a massive thing. What is also important, what I found in this article, is that they write: this type of study is rare, because researchers often cannot gain access to proprietary algorithms and the reams of sensitive health data needed to fully test them. There's some music coming up. And that is of course the case that I was describing just prior: this is the ADM system called COMPAS, which is used in the court system in the U.S. to give a risk score to people, indicating the likelihood of people who have already offended to re-offend in the future. And it was shown in the article by ProPublica that this software is highly biased against African Americans, meaning that even if the average accuracy is similar for white Americans and African Americans, the direction of error is the opposite. So as a white American, you have a much higher likelihood of being classified as not going to re-offend despite re-offending, and for African Americans it's exactly the other way around: you're much more likely to be classified as going to re-offend, with a higher risk than is merited. The problem with this one, and this has stirred quite a debate, is that this is also proprietary software, so it was only through an analysis of the outputs, basically, that you could infer the internal workings of this software. So what we can see here is, first of all, a justice problem: societal stereotypes and prejudices, but also existing inequalities and injustice, are frequently inscribed into technologies. Intentional discrimination is possible, of course, but mostly this is unintentional, through either the training data or different methodological choices. When you're designing ADM systems, you have to choose different types of data, and they may be more or less adequate, for instance, on white-skinned
faces or on darker-skinned faces, as was the case for facial recognition. Or you may have certain choices for target variables, which also affect different groups of people differently. So especially data-based automated decision-making systems really run the risk of cementing the status quo if historical data on previous practice are used to predict the future. If you are a company that has historically discriminated against women, because you gave them fewer promotions, this will be in your data, and if you use the same data to make predictions about future promotions, you will just repeat the pattern. Moreover, this issue can often not be assessed and addressed because of a dual transparency problem. This dual transparency problem basically comprises two issues. The first is what I call functional opacity: the lack of access to proprietary algorithmic systems, because very often, and of course there are partly reasons for that, data and algorithms are considered property, and they are important for some competitive advantage. What we've also already been investigating, for instance in the Data Ethics Commission, is the question of the possibilities and limitations of different types of assessments and audits for algorithmic systems: some may be ex ante, before runtime, some ex post, and for some you may need real-time assessment. The second is epistemic opacity, and that relates to the limited understanding of complex systems, which may be based on machine learning but need not be. What is important here: this refers to the problem of understanding why a system decided in a particular instance in a particular way. If the system is very complicated, and very often in machine learning this is the case, it is not fully comprehensible, even for the experts, how the system went about making its classifications and predictions. The problem is also that this transparency is user-relative and task-relative. If I want to know why I was denied a credit, I don't want to know how
the system works in general; I just want to know what I should have done differently in order to obtain the credit. This is what you call a counterfactual explanation: you want to know what you should have done differently in order to get the credit. This is very different from somebody auditing the system, trying to make sure that women and men are treated equally by the system. So we basically have at least a dual task for ethical AI, and this is framed very often under fairness, accountability and transparency. As a result of this COMPAS scandal, there have been quite a number of publications, and also a community within machine learning which has been organizing, for several years now, the Conference on Fairness, Accountability, and Transparency, addressing these issues from within computer science. But let me point to some of the challenges that come with addressing these issues just from within computer science. What you need to distinguish first is discrimination-aware data mining and fair machine learning. First of all, if you want to make sure that the system is not systematically discriminating against certain user groups, be they divided by gender or by skin color or whatever, there are different methods for detecting, measuring and also preventing or minimizing discrimination. But the problem is, you can't satisfy all of them at once, right? You won't have a system that does not discriminate against any of the user groups. The problem is also, if you want to turn this positively, not only trying to avoid discriminating but trying to make things fair, because you may want to counter injustice that has happened before, it becomes even trickier, because there are different accounts, and also mathematical measures, of fairness, and they require choice and justification. So which measure of fairness is most appropriate in a given context? Just think about elections: in the case of elections, everyone
has one vote, right? This is our conception of fairness in that particular instance. When it comes to taxes, this is a very different story. So depending on the sector, we have very different and also contested notions of justice, and if you want to inscribe those, you need to decide which is the most appropriate. Which variables are legitimate grounds for differential treatment? Why should you be allowed to treat people differently? You must have reasons for that, and providing these reasons and arguing about them is partly what ethics is about. Or, put differently: should fairness consist of ensuring everyone has an equal probability of obtaining some benefit, or should we instead aim to minimize the harms to the least advantaged? If you just think about the last one and a half corona years, we had all of these debates, of course not only in regard to AI, but also about other very mundane daily matters. So what I'm trying to get at is: the focus on discrimination prevention in machine learning is necessary, but it's not sufficient for just or ethical AI. The focus on methods within machine learning, for instance on data preparation, model learning and post-processing, is also necessary in order to make sure that systems are not discriminatory, but again, they are not sufficient for ethical AI. Political theory and also ethics may be sources for reflection on fairness and justice, and they may guide appropriate methodological choices, but these choices are always context-dependent and contested. So the task of deciding on specific fairness measures should not be placed on the shoulders of developers on their own, because of its highly political character, and depending on the impact, this may require very broad public debate and participation. Of course you don't need to have this debate on every type of ADM system, but in particular for those that are very
invasive or are impacting a large number of people; this is when you need public debate. So let me end with my conclusions. The challenges for good AI are manifold. I just pointed you to a number of them, and I could have gone on for quite some while: they refer to privacy, bias, discrimination, lack of transparency and lack of accountability. What I'm also partly telling my students is that ethics is in the method, but it also goes beyond it. What I mean by that: ethics belongs within computer science education, but also in the practice of designing systems. It's nothing that comes at the end, when you think about the impact; it's something that you need to think about in the process of designing systems, because you need to think about it when you're choosing your data, when you're choosing your methods, when you're deciding how to optimize and what to optimize for. On the other hand, this is not yet enough, because some of the problems may have their origin in the data or the methodological choices, but others may just be reflecting injustice that we have in society, and then there is no technological fix for what really is a societal issue. So tackling ethical challenges requires various instruments and stakeholders, within technology design but also beyond it. And what I also want to point you to, almost at the very end: I've been looking a lot at, you know, what the problem is if AI does not do what it's supposed to do, when it's faulty and when it makes mistakes. But we should probably also worry a little bit about when AI is doing exactly what it's supposed to do, because it's a lot about the metrification of your life, and about giving everyone what he or she deserves, and I'm not sure this is what I want in every realm of my public and private life. Lastly, let me end on a note on the relation between ethics, politics and law. What I'm trying to get at, what I find important, is to acknowledge the relation between these three
domains and the differences between them, instead of going towards either ethics washing or ethics bashing. What do I mean by that? There has been quite intense public debate and outrage, and rightly so, about ethics washing: the idea that you attach ethics as a label to something in order to avoid regulation. There is a tendency towards that, and of course it is a huge problem. On the other hand, this does not mean that ethics per se is useless. Ethics is, as I tried to say at the beginning, reflecting on what is good and what is bad, and on the reasons for deciding what is right and what is wrong. That is something we need to do every day in our daily lives, and we also need to do it with regard to technology. This neither replaces politics or law, nor does it make them superfluous. Ethics of course influences law and political decision-making, because we have different views and values and weigh values differently; we have different notions of justice; there is political debate on how we should weigh different values. So I think we must acknowledge that ethics, law and politics are all related to each other, but we must also prevent ethics from being misused as a way of just getting rid of hard regulation. And with that I end, hopefully still on time.

Thank you so much, Judith Simon. I think this was the one and only time a speaker was actually shorter than predicted, so thank you very much for this very concise and tight talk. Things are going to slow down a little bit now, I guess, due to my Swiss origin, so please forgive me for that. Let me just take up a notion from towards the end of your talk, Judith. Is it fair to say that you said it is very hard for AI not to discriminate? And you also said this responsibility for discriminating or not discriminating should not be shouldered by developers alone, so we need a broader social discourse on how to sort of
distribute that responsibility, or how to improve anti-discriminatory coding, or AI, or systems, whatever you want to call it. Now, I think we are living in a time where social movements are as strong as they have not been for perhaps fifty years when it comes to discrimination, to identities being heard or not being heard, to racism, sexism and so forth; it is a really forceful time when it comes to those subjects. If you tell all those movements, which are so active globally now, that it is very hard not to discriminate, how would you say we should structure this discourse so as not to shoulder all this responsibility onto the developers? What is to be done there?

I think it really depends on the system at stake. If it is a very high-stakes system, and let's stick for a moment to the COMPAS software just to have a case: if you have software where you can show that it is of very high impact, because it makes a prediction that influences whether people go to jail or not, whether they get probation, how high the bail is, then of course the standards for showing what you did to ensure that your system is not discriminating must be higher. At the very least there should be some auditing that shows: these are the means, these are the measures, these are the steps we have taken to test whether our system discriminates against at least the salient groups, those groups that are most notoriously negatively affected. Of course you may never have a system that does not discriminate against any group, because groups can be as random as people who go to New Zealand for vacation, wear red socks and are fans of some soccer club. That distinct group may, for some reason, because it is a data-based system, be discriminated against. You cannot test for all of these basically
artificially generated groups, but what you can certainly do is at least test for those that are inscribed in the law, because the law defines certain categories according to which people must not be discriminated against, and at least for those you should check. Even then it may be the case that you cannot optimize for all of them to the same extent, but at least you should lay it open so that it is contestable.

So it is the developers and it is the law; what other kind of agency would have to come into this discourse, do you think?

The developers are usually the ones who get a task they have to solve and have to optimize in a certain way. You have the people who are buying a system and those who are selling a system, and those are already not the developers; that is the economic context in which these systems are being bought and sold. Especially if the state is using them, I think the standards must be even higher, the higher the impact is. And the groups you need to enrol may differ depending on what is at stake: you have different representative groups that you may want to enrol, in particular when it comes to high-impact tools. If you have a system that discriminates against African Americans, then of course they must be represented in some sort of committee addressing and testing this.

Again, I am having a live-music-arena moment right now, where I would like to ask the technicians to give me a little bit more of Judith Simon on the monitor. That is what musicians usually do; this is my moment, and I am very grateful for it. Please turn her up a little on the stage monitor, it is hard to hear. Thank you so much. So, one of your key fields is, of course, trust in the philosophical sense, trust in AI. What we talked about just now is actually trust in machines not to discriminate as much as they
have discriminated in many cases, apparently. Now, there is one very interesting notion you mentioned in the paper I read: there are certain instances, certain systems, where it would be better not to build trust at all, to deny trust, so to speak. Could you expand on that a little, or say which systems you had in mind when you developed this negation of trust?

To begin with, as philosophers we are very sceptical of trust. Usually you do not want trust, you want trustworthiness, and you do not want people to be gullible and just trusting; the last thing you want is blind trust in something. There has been very little interest in trust in philosophy for quite some time, because philosophy was more focused on knowledge and evidence and on not being gullible. Usually you should only trust those who are trustworthy, and those who are not trustworthy you should not trust. So trust is not valuable per se, but only if it is directed at those who turn out to be trustworthy. What I am focusing on, both in my research in general and when it comes to AI, is first establishing criteria for trustworthiness instead of just fostering trust, because the downsides of trusting somebody who is not trustworthy, either because he is incompetent or because he is ill-willed, are disastrous. So you do not want trust per se; you want to ask how many checks and balances you need. The interest in trust simply acknowledges that we can never know everything to the core: at some point we need to trust other people, or we need to trust some evidence, and at some point it must be good enough, even if we do not have certainty. And where exactly this tipping point is,
when it is good enough to be trusted, that is what I find interesting.

Certainty is a very difficult category when it comes to statistical engineering, to pattern recognition and so forth. Those systems are not about certainty; they are about some approximation of certainty, but it is not identical to what we would call certainty, and that is what trust is linked to. Doesn't that make it tremendously difficult to talk about even trustworthiness?

Yes, absolutely, and I think that is what makes the notion of trust so interesting: so few things are certain that most of it lies in between, in between blind trust and certainty, and how to discern this is interesting in itself. There has been quite a lot of debate about whether you can talk about trustworthiness at all when it comes to AI, because there is now this notion of trustworthy AI and everybody is striving towards it. I think the notion of trustworthiness with regard to technologies only makes sense if you understand them as socio-technical systems, not as technology per se, because you cannot trust a technology; you can rely on it. You can only trust, let's say, the socio-technical network behind it: the institutions guiding it, the standards enrolled, the mechanisms of accountability behind it, but not the technology per se.

I would like to talk a little about the relation between the ethics of design and the ethics of use; you hinted at that at one point in your talk. To put it quite plainly: can you determine ethical use by ethical design?

No, of course you can't. You can try your best, and things can still go wrong. Let's assume you buy a car: when the first people bought cars, they probably did not foresee all the mess that
came afterwards. And still, you may be able to use a car as an installation, and the moment it is used as an installation it has none of the side effects it has as a car used for driving from A to B. Any artifact can be used and misused for various purposes, so this is not in the hands of the people designing it. Nonetheless, when you are designing things you are creating affordances and constraints; you are nudging a technology in order to make things harder or easier. When you are designing a system you can make it privacy-friendly or not, and of course people can always circumvent that, but it gets harder or easier. So you can never entirely forecast future use, but you can make better or worse use more or less likely, and that is the power you have, which I think is already quite a lot. But of course people can always circumvent your plans.

Would open source be a concept that enhances the ethical design of technology, that would make it more lasting, so to speak, in terms of ethical use?

Yes, at least it provides one central component of more ethical development: mutual critique. When you make something open, people can check it; they can see whether it works and how it works, and they can contest it, and I think that is very important. One of the cases where we have been debating this is, for instance, the Corona-Warn-App, where there was a decision to make it open source, after some detours at the beginning and lots of flaws. But anyway, let's look at the result in the end and not at the communication beforehand: the process of developing it was one of those cases where public money was spent on an open source project, and it really improved through the feedback of others. This does not mean that it is ethical in principle, but it provides some
characteristics that make it easier for flaws and problems to be detected, and that already helps.

I told you I was going a little bit slower than you, truly, so please give me those seconds. What you talked about most of the time, correct me if I am wrong, is so-called weak AI, not strong AI; strong AI being, and here is the big word, the singularity, the superhuman agency, so to speak, that surpasses human capacity. You talked about very specific, tailored applications: medical diagnosis, cancer recognition, newsfeeds, credit scoring, predictive policing and so forth. So ethics probably makes more sense with weak AI. Is it easier to build trust in weak systems?

As people who know me may know, I am not the biggest fan of all these debates on strong AI. In a nutshell, if I am honest, I could not care less. We have so many issues to face before we can deal with some fantasy of singularity and strong AI that I would rather spend my limited energy on these issues and leave the speculation about strong AI to others, because it is largely a fantasy of rich techno-enthusiasts of a certain skin colour and gender, which I find a bit boring. So, in a nutshell: I think there are more pressing and more interesting things to spend my time on than strong AI. And is it easier to work with weak systems? There are no strong systems, so there is no comparison. It is not that it is easier; there simply is no such thing as strong AI, so the comparison does not make sense.

What I want to hint at is this: you would think, comparing a human bias with a built-in machine bias, so to speak, that the latter would be much easier to fix. We have spent many hundreds of thousands of years not overcoming terrible forms of discrimination, of war, of genocide and so forth; apparently that is very hard to fix in the human
DNA. Now, with all the weak AI systems you talked about, I would assume, as a layman, that it would be much easier to actually fix those problems you addressed, because you can build the fixes right into the software. Is that correct, for weak AI?

For weak AI, the problem is basically, as I was trying to get at, that much of what is now discussed under the heading of AI consists of data-based systems running on historical data, so all the biases you have in there are simply going to be reproduced. You could in principle fix them, but maybe I do not quite get the comparison.

I think it would be a lot easier to fix, because the databases, in comparison to the human mindset, that is what I was hinting at.

Ah, compared to the human mind, yes. Of course, if I could fix the discrimination and the stereotypes of the rest of the world beyond the data sets, I would be very happy, but we cannot do that. The point is: just because we human beings are also biased and discriminate against people does not mean we should be okay with discriminatory ADM systems.

Sure, but do you think there is progress there? There is not in the human mind; but do you think there is actually a chance that the AI systems you talked about progress much more rapidly than we apparently have?

On the good side: first of all, there is interest in devising discrimination-aware machine learning, so there is at least sensitivity to the problem, and that is already something. Second, how you use these systems is pretty much open: instead of just replicating the status quo, you could use them for different purposes; what you do with your system is up to you. Let's assume the system learns, and I have been using this stupid example for quite some time, that judges are harsher in their verdicts before the lunch break. Even if
it is fictitious, just take it as an example. If that is what is learned, you can either just reproduce it, replacing the judge's verdict with an equally biased verdict by the machine, or you can give a pointer to the human judge and say: look, it is just before lunchtime; maybe you want to reconsider whether you are hungry and how this affects your decision-making. That would be more of a pointer, where machine learning is used to improve the decision-making of a human by giving him or her feedback on what the system has learned about their biases. That is a more positive way of using this. On the other hand, I am not sure how optimistic I should be about the willingness of large portions, or the majority, of the population to even want to get rid of such biases. Let's wait for the next elections, and then we will see.

Another difficult question before I open this up to you and everybody participating through the participation tool Slido and also Twitter; let's see what is on there. One big question, again back to trust. This series is also, a little bit, all the time, about the European role in the geopolitical race over technology that is going on right now, over cloud computing, AI and so forth. You have written a lot about trust in AI from a European perspective, and you worked for the Data Ethics Commission. There are four measures the Data Ethics Commission proposes to increase trust in AI, and of course this is a geopolitical asset that Europe is trying to develop. Those four would be: one, respect for human autonomy; two, prevention of harm; three, fairness; four, explicability. That is plain ethics, I would say. The thing is: how is this implemented? If you advise different boards, if you talk to politicians, how do you implement that,
so that there is actually pro-European progress in order to survive this race, to put it a little dramatically?

I am trying to distinguish two questions that I think underlie this. One is: how do you make sure that these goals get accomplished? There, what you need to do is make things mandatory. Things do not happen if they are voluntary. If you just say you can do some auditing or not, you are not going to get explainability and fairness and all these nice things we should take for granted, unless you make certain auditing mandatory, at least for systems of high impact. The second question is how Europe stands in relation to, in particular, China and the US, and what this third way of trustworthy AI could be. I do not want to end on a very pessimistic note, and we are not ending yet, but there currently is no such thing as European data sovereignty, because we have neither a sovereign infrastructure nor data markets. And it certainly does not help if you have initiatives that may in principle be valuable, such as Gaia-X, and then partner with companies like Palantir. That is not how this is going to fly.

Gaia-X being the European cloud computing project that is not doing too well at the moment, just to summarize it quickly.

The point is: if you really want an alternative, you must design an alternative. On a local level you do have alternatives, where you have more public, open source, open data initiatives in cities. We also came up with some ideas about data sharing between companies: to what extent companies, especially large ones, may need to make data available for the public good.
And I think these are steps that need to be taken. There is a thin line between a new, let's say, regionalism, basically closing off your own infrastructure, and at the same time at least trying to have some control over your technological infrastructure, instead of being entirely dependent on the technological infrastructure of, let's say, China, and on the data side on the US. So you have to come up with something. But I did not want to end so negatively; let's be a bit more optimistic, there is still time, we are not at the end yet, there are a lot of questions to come. You see me a bit frustrated with that debate, because it is so often about this race between China and the US and Europe, and I think if Europe were to stand together and had some joint ideas about how to come up with a more public, more open, more transparent alternative, it could work. We have already, to a certain degree, set standards with the General Data Protection Regulation. So the question is what impact the other upcoming acts will have. First of all, how will acts such as the Data Governance Act or the Digital Services Act end up? Will they basically give in to lobbying entirely? Because then you can just throw them in the dustbin. You cannot guarantee it, but at least you should try to make sure they are not totally hijacked by lobbying, and then the question is to what extent these standards can also play a role worldwide. Not a very satisfactory answer, I know.

Thank you so much. Before we move to the questions from the audience, [unintelligible]. Okay, questions from the audience. Is Christian Grauvogel there with the microphone? We start with the audience, then go to
Slido. I can't see; somebody in the audience has to handle that for me. Okay, there you go.

Two questions. Did you follow today's hearing at the European Parliament, where Vestager was basically presenting the key points of the DSA and, what is the other one, the DMA? That is the first. And if not, what would be the critical ingredients, from, let's say, an ethical perspective, that should properly be included in updating the DSA as it stands at the moment?

First of all, I did not follow anything; I was trying to get here on the train, and that turned out to be a bit more tricky, I have to say. So I will not be able to go into a lot of detail, to be frank. But I think the underlying idea is: how do you make sure that the public good is at the core of how this European data market is structured? Because, and I am getting negative again, I should stop, there is a lot of interest in using data for commercial purposes, and that is partly fine, but I think there should be a much larger thrust towards using it for the public good. That is not very detailed, but probably as detailed as I can be for the moment.

This one here, up front. I don't see anything of the audience, you will have to handle that, I'm sorry, it's too bright.

Thank you, thank you for this really interesting talk. My question actually refers to this COMPAS software. We are used to fairly inconsistent verdicts from judges; how come we have such higher expectations of software? That is the first question. And second: if it is true that there is no ultimate level of fairness, as you mentioned and as we have also seen in various practical examples, what would then be the standards we apply, or should apply, both to human beings and to AI-based decision-making systems? Are these the same
criteria, or should they be different?

I think the basic requirement that we have, or should have, both for judges and for the systems is impartiality. That is the norm we expect them to strive for, and of course we know that many judges are not impartial and have their own biases, but that does not diminish impartiality as the highest norm. I think it would be a major problem if we were to give this up even as a goal for ADM systems just because humans do not fulfil it, because if you say we are not even striving for that with ADM systems, then it is flawed to begin with. That is my first answer. The second: if you cannot optimize a system for all groups, the least that systems employed in such sensitive contexts should be required to do is demonstrate which methods they used and what they did to check whether or not they discriminate with regard to gender, race, religion, the salient categories protected by the law. I think it should be required that this is laid open, and once it is open, people can at least contest it. Even if we cannot make a system perfect, we can at least say: we tried our best, here you can see what we did, and if you know something better, please let us know. And the third thing: if you have to decide between different ways of optimizing, because at some point you may have to choose what the best is you can get in a particular case, then what needs to feed into that decision is who has been least advantaged previously. To an extent this would be Rawlsian: look at those who are most severely affected and try to make up
for this. And you cannot say this in the abstract; you have to look at the culture and the society in which the system is being used, because you need to check who the most marginalized or most negatively affected people have previously been, let's say in the justice system, and there are probably statistics on that, and that is what you should try to alleviate to a certain degree. Was that somehow already an answer?

Isn't it also a question of not only looking at the marginalized but working with the marginalized, as in having them have their say in policy?

Of course; that is what I was trying to refer to earlier in my talk: you need some participation, but in order to figure out who needs to participate, you should also look at what is happening in your country, and data helps, but education and history help too.

Okay, is there another question from the audience at the venue before we go to the digital tool and see what is up there? There is one more, please.

Thank you again for the talk. In my bubble I also only hear negative things about AI in general, so I am wondering where all the good stuff is. My question is about explicability. From what I read and watch, there is this problem that if you want a precise AI system, one with a high rate of precision, it will be so complex that even if you lay it out, it is hard to explain why the system took a given decision. And there is another argument which says that even if we open it up and make it open source, so you can look into it and see why a decision has been made, it would open the door to reverse engineering, to people taking advantage of this understanding of how the system works in order to bypass it. I am sure you hinted at these topics in your talk, but my question would be: should
we buy these arguments? Are there examples where these arguments are actually valid, or is it complete nonsense?

No, they are partly valid, and the question is whether they are made in order to deter people from making things explainable, or whether they just acknowledge certain limitations. What is used under the heading of machine learning may be more or less complex, and if it is a really complicated deep learning system, then it is really difficult, if not impossible, to figure out exactly how the system came to its conclusions, or to its predictions. You may make certain steps more explainable, for instance by documenting different thresholds, but at some point you may have to weigh accuracy against explainability. There are domains in which you can accept that something is not explainable if the performance, the accuracy, is very good, and other domains in which you may decide that explainability, giving reasons for why a decision was made this way, is so important that we cannot rely on systems we do not understand. That decision in itself is an ethical one: deciding where we need explanation because it is so important, and where we can give up on explanation because the accuracy is higher. Just think of medical diagnostics: what would be an adequate threshold if you have to weigh these two? Let's assume you had a perfect tool distinguishing cancerous from healthy tissue; maybe there you want to give up some explanation just to increase accuracy, while in other domains you want to be able to give an explanation. So I think you can work on this threshold, but it is also costly, you
know, explainability does not come for free, so you have to think about where it is important, where it is essential. And when it comes to reverse engineering: of course, this is what you have in search engine optimization; the moment people understand how something works, they can also make it work for their own purposes. It is the same in security research; it is a bit of, what do you call it, a Wettkampf, people fighting against each other, and you always have to keep up with it. There is, I think, no way of circumventing that.

Thank you so much. Let's look at our tool, at Slido, or Twitter. Who can tell us something about that? No question? Then, could I ask one more question? If it is short. Ah, you actually really wanted to talk; can you wait for five minutes, because we are just listening in to Slido and Twitter, but I will come back to you. Can you raise your hand? I could not see where you were sitting. Okay.

Can you hear me? Yes, and there are a few questions on Slido. I am going to start with the first one: I work in academia in a computer science department; our staff is about 80 percent male and white, and also very uninterested in these topics. How do I engage people who are typically not affected by discrimination to care about ethical issues?

I have to say I am quite spoiled, because in Hamburg there has been quite a high interest, both from the faculty and from the students. So I cannot say from experience how you get it started, but it takes some time. What helped: I think you need to start with the students, and the first way to get them is to get to know them already in their first semester, so they get interested in your work. What we really realized in recent years is that, now that I hold a mandatory lecture in the first semester, we end
up getting more and more students in all our other courses, so it is a bit of bootstrapping the system, if you wish. As for the faculty, I think it really depends on the department, and I do not have any good advice on how to convince people, because it is true: if you do not see the issue, because you are not the one affected, you may be less interested. But I actually do think there is an increased interest, at least among younger students; there has hardly ever been as much interest in ethical issues related to technology as in recent years. So, as I said, that makes me not very useful here, because I have really felt there is a lot of interest. But if you have to bootstrap it, I think it can only be by pointing people to some of the failures that have happened, using some of these cases, and explaining how high the impact can be. People always think about other people's technologies as being biased, and not necessarily about what they are doing in their own research. That is the most interesting part I find in working with my colleagues: it is always harder to point to potential biases in people's own research than in research or tools found elsewhere.

Can I give the easy answer here? Of course: just tell them that discrimination is a software problem; I think awareness will be much higher in no time when it comes to your colleagues. Is there another question from Slido, maybe a couple, or from Twitter?

There you go, yes. Another question, from South America: from the point of view of social ethics, what do you think about the massive number of jobs being replaced by AI?

That is of course yet another issue that I did not even touch. What is always quite interesting is that when people talk about automation and the loss of jobs, there is always the reference that this has been happening all the time, and there are
usually new jobs coming up and old jobs disappearing, and the same may be true now with AI and robotics. The problem is just that this doesn't help the person whose job has been replaced, if they don't get trained to do something new. So I think there will be massive transformations, and they will also affect people who have previously not been affected so much by automation. And I'm not trusting, talking about trust, the narratives that we will have so much spare time once all our jobs are done by machines, because usually that time gets quickly exploited. So I think for that we just need very strong social security measures, to counter this for the time being, when there's a change in jobs, and also to help people transition to potentially new jobs.

And there is another question from Twitter, from Joshua Allen: how should ethics be designed for AI systems that work on a global scale, and who should design it?

That's a very good question. The problem is really, as I was hopefully making clear at least implicitly, that ethics for me is not that you have a fixed set of rules or guidelines and you just apply them; rather, it's about negotiating and trying to find out what is right and what is wrong, and what your reasons for this are. And of course we're not living in a worldwide democracy, and that poses its own issues. I think for those companies who now have a worldwide influence, the least they should do, and that's a delicate balance, is this: on the one hand, you don't want to just bring your own values with you and think they work everywhere, so you need to attune to a certain degree to the local specificities. And some of the flaws, and maybe "flaws" is too soft, some of the major scandals of Facebook for not taking into account how their technology may be misused in certain countries, is a major issue, right? But there is no one-size-fits-all and there's no easy solution for how to strike this balance between sticking to your own democratic values and going local to a certain degree. I think the bottom-line threshold should be democratic values guiding your technology design.

Just one more, I think, before we go into the venue again. Okay: how do you foresee the role of explainable AI in providing reasons for decision-making processes? To what extent are companies accountable for the decisions of their ML algorithms, even though programmers might not understand the outputs?

There's actually a difference between explainability and accountability. You can be very much accountable for your software even if you have no clue how the thing is working, just by the sheer fact that you are deploying it. So the question of accountability must be disentangled from the question of explainability: even if you can't explain it, you may still be held accountable, and a lack of explainability cannot be a reason for not being held accountable for what you're doing. There are sometimes debates, for self-learning systems, about to what extent the people developing a system and to what extent the person deploying it are responsible for harms that occur if the system continues to learn during deployment. But some person, or some institution, has to be made responsible and accountable for that, even if they don't understand it; at the very least, they must be accountable for their decision of not having an explainable system. So I think explainability is important in certain areas in which we think explanations are important enough for specific reasons, either because you want to give somebody a reason, or you're obliged to give somebody a reason, why they were denied bail or denied credit. And we may
want to think this is important enough that you can give reasons, and then you may say: well, then we can't use deep learning in these contexts, if this is really essential, and you may be obliged to use other types of systems. In other contexts you may be able to give up on explainability, but this does not dissolve the accountability.

Okay, thanks a lot. I think there was another person wanting to ask a question here from the audience. Again, I didn't see it, please speak up. Oh, I see you now.

Hi, thank you so much for taking the question, and thank you so much for the talk. I'm wondering about one specific use of AI, which is face recognition. Given the challenges, or the potential misuse of the technology, I'm wondering, a, if you would argue for banning it completely or in specific contexts, and, b, what are the ethical considerations specifically for that use of the technology?

That is indeed an issue that we've been dealing with quite intensely in the last semester, also because we had a number of guests in our own lecture series addressing facial recognition. I think widespread facial recognition in public spaces is something that should be banned, for the reason that it makes you very vulnerable. The argument that has been made in particular by Evan Selinger and Woodrow Hartzog, who are also arguing for a ban of facial recognition technology, is what is sometimes considered a bit infamous, or a fallacy: a slippery-slope argument. They say that we get so accustomed and normalized to facial recognition through our mobile phones, because we use it all the time, that it basically doesn't feel like a big thing anymore when you're holding your face into something to identify yourself. But the problem is really, on the one hand, that your face is as unique in identifying you as your fingerprint, but it's also much more expressive. Just think about the debates that we're having now, when we are wearing masks, about the downside of not seeing each other; that gives you an indication of how important the face is. We don't want to run around covering our faces just because we want to be safe from facial recognition. So I don't see any reason for having widespread facial recognition. The moment you have cameras where you can just upload the software, it's this easy trick of taking an existing infrastructure, the cameras, and uploading the software, and then you have a perfect surveillance system. I don't see any reason for that. On top of that, there are additional developments in terms of emotion and affect recognition and gait recognition, which, if rolled out widely, I think are deeply problematic. The ban, for me, refers to the usage in public spaces, not necessarily to doing research on certain issues, but I would certainly subscribe to a ban on facial recognition in public places.

I have two more short questions to end this, but before I do, I'd like to ask you one last time if we have questions from the floor, so to speak. This is your turn. Thank you again for coming out. I still can't see you, but maybe there's another question coming up. Anybody? Is there another question? Okay, there's one.

Yes, thank you for the talk. You know, you wouldn't organize a discussion about the ethical use of the nuclear bomb now; you would just assume that there is no ethical or good use of bad technology. And here we are now discussing automated decision-making, decision-making which we cannot attribute to a person, for which a person cannot be accountable. So I'm wondering, how can that be at all ethical, if I do not know who has taken a decision, why that decision has been taken, and who is accountable for it? There is, as far as I can see, no good use for bad technology, even in this case.

My first reaction would be: the case of nuclear technology was exactly when technology ethics really picked up steam. The nuclear bomb and the question of whether technology is neutral or not, this was one of the first cases: whether it makes a difference whether you use nuclear technology for energy creation or for atomic bombs, and whether it's just the usage of technology, or the technology per se, that has ethical impact. That's the first side answer to your question. The second is: there is always somebody who makes a choice. Artificial intelligence or ADM systems are not simply there; somebody made a decision that this is supposed to be automated, and that it's supposed to be automated in that particular way. And whoever made this choice, of designing a system in a particular way, of using a system, or of ordering a system to automate a specific task, is in charge. If this person decides that it's fine to have this based on machine learning and not explainable, then that holds only as long as people are not obliging that person otherwise, that is, as long as we don't have requirements for a system to be explainable. But it is never the case that nobody is responsible; there's always someone responsible. Let me stress this: there is no lack of responsibility; you just need to make somebody responsible for designing and/or using a system.

Thank you for this very interesting question. I think it tails into maybe another question before the very last questions, and this would be, again put quite plainly: is there a difference between ethics and AI ethics?

Well, ethics, as I was trying to be very blunt in saying, is about what's right and wrong, what's good and bad, and what reasons we have for that, and we can apply it to AI just as we can apply it to something else.
The only difference may be that all of a sudden we are also trying to delegate ethical decisions to AI, and that's sort of a second-order thing. That's why AI ethics may be a peculiar type of technology ethics: it's not just the ethics of some technology that's being used, but it's also about delegating ethical reasoning and ethical decision-making into tools. That makes it more interesting, but in principle the issues, what is justice, what is fairness, they are older than AI, let's put it this way.

When we talk about, and it's always the R question at the end, R for regulation or policy, we have talked about two different levels, well, three actually: a global level, a European level, in terms of the European Commission, and of course a national, German level. You've advised many politicians, you know the Deutscher Ethikrat, the ethics council of the German federal government, and so forth. So could you map out for us, maybe for the very end, how those trajectories differ a little bit on a European level, and what the German government actually has in mind when it comes to regulating AI?

I'm not so sure what the German government has in mind, and I'm not even sure it has a mind, because a mind is something usually only individuals have. No, but more practically speaking, let me start one step back. There have been lots of committees and commissions, and there has been lots of advice on what to do about digitalization in Germany, and let's put it this way: it could have been picked up faster and more seriously. It's not really that the German government in recent years has been at the forefront of digitalization. No, I'm not using quotes from the last elections, so let's wait and see what comes out of the next elections. My hopes are high, because there has been lots of stagnation in recent years. But the problem is also, of course, that you have different ministries with very different ideas of what to do about AI. You will get very different views if you look at it from the perspective of the Ministry of Justice and Consumer Protection, from the Ministry of the Interior, or from the Ministry of Research. Figuring out these tensions, how to make AI useful for the common good, profitable for industries as well, but also protective of basic liberties and civil rights, this is what I'm expecting from the new government: to come up with some vision on how to do this, and not just delegate the digital into the supposedly still new territory.

Just as a very last notion: we had this very interesting discourse on explicability, someone called it, you called it explainability, and you differentiated it from accountability. When it comes to AI systems, would you say there is a regulatory way to actually account for those categories, on a national level or on a European level? Do you see policy that would actually guarantee, let's say, the explicability of certain AI systems you referred to?

I'm not sure this is answering the question, so if I'm going off, just let me know, but what we advised, also in the German Data Ethics Commission, is to say: look, you need to decide which systems are of high enough impact to warrant which amount of scrutiny. There may be lots of systems where you don't need extra regulation, because they are covered by the GDPR or by other types of rules, but there may be others for which you need new regulation, and then it doesn't suffice if this is just a German law; it needs to be a European law, something similar to the GDPR, just for algorithmic systems. To give you an example of what kind of system that would be: something like this COMPAS system, if you were to use it, but also systems that you use for giving or not giving people credit. You must certainly check that such systems are not systematically discriminating against people when it comes to very basic goods such as credit and housing, in particular, of course, if the state is the provider. Social welfare is a massive arena where data-based systems are currently, or at least increasingly, getting deployed, and of course you want to make sure that these are properly audited. This is something you need to mandate: these are the red-flag systems, the systems that are obligatory, because if something is obligatory for every citizen, it must be open and tested.

Thank you so much, Judith Simon. We see each other in November; we don't know exactly what the date is going to be, but there's going to be a fourth session of Making Sense of the Digital Society sometime in November, and we'll let you know. For now, thank you again for coming out after all these years, I was going to say after 18 months. The terrace is open, and thank you, Judith Simon, for being with us from Hamburg.