Thanks very much everybody. I think we'll start, are we okay to start, Larkin? We're going to begin bang on three. So welcome everybody to this IIEA event. I won't call it exactly a webinar, because of course we're delighted to be joined by people here in the room. So we've got people in person and we've got people online, a hybrid combination. So welcome to you, whether you're viewing from your sitting room or your office, or here in the room with us in North Great George's Street. We're really delighted to be joined by a most distinguished guest speaker, Professor Michael O'Flaherty, who is, as you know, director of the EU Fundamental Rights Agency, and as I say, we're pleased to have Michael here. Thank you in advance for giving us your time this afternoon, to speak to us for maybe 20 or 25 minutes or so, as you choose, and then we'll have an opportunity for some questions. I sometimes say questions or observations; you know, we like questions, but we tolerate observations as well. We'll do that once Michael has finished his introductory remarks. If you're here in the room, it's easy enough: you put up your hand. Preferably tell us who you are, and if you have an organization, you might just give us that designation, just so we know. And if you're online, use the Q&A function on Zoom, which everybody's very familiar with at this stage, again telling us who you are and what your designation is, if relevant or appropriate. So you can send in questions as they occur to you, even in the first few minutes while Michael is speaking, particularly if you're online; those questions will build up and we'll get to them. The Q&A and the presentation are both on the record.
We can use Twitter as well, if you're that way inclined; the handle is @iiea. And we are live streaming this afternoon's discussion, so welcome if you're joining on that particular platform as well; it's great to have you with us. Michael O'Flaherty is director of the EU Agency for Fundamental Rights, as I've said. Many of us will have known Michael previously: he's an established professor of human rights law and was director of the Irish Centre for Human Rights at NUI Galway, the National University of Ireland, Galway. He has served as chief commissioner of the Northern Ireland Human Rights Commission, a member of the UN Human Rights Committee, and head of a number of UN human rights field operations. He's going to talk to us, broadly speaking, on the topic of protecting human rights in the digital age. There is so much happening, and there are so many developments, that it's almost overwhelming to try to keep up. We just had a session at lunchtime on energy and retrofitting, and that happened to touch on artificial intelligence and the impact it's likely to have on the energy sector. I was at something yesterday morning in relation to the health sector, and we can see the enormous advances and potential associated with artificial intelligence in the health sector, but all of them come with associated questions and risks for individual rights, human rights. I know Michael will touch on some of those issues in his presentation, but without anticipating it any further, and without telling you that he's going to deal with things he doesn't end up wanting to talk about, I will hand the floor to Michael. Thank you very much.

Thank you very much indeed. I really appreciate the invitation. It's my second time to speak here physically, and the third time if you include a virtual event during COVID, but it's always a great pleasure.
I want to thank not only the IIEA, and I always feel like it's an elocution class when I say that, but thank you to the IIEA, thank you to those of you who are here physically, and thanks to everybody who's with us online.

A good few years ago I was sitting on a hotel terrace, looking across the lawn, and as I sat there I became aware of a little machine going up and down the grass, cutting it. It was my first encounter with a robot lawnmower, and I watched it, transfixed, for a good half an hour. I began to feel sorry for it, doing this job with no gratitude, no recognition, no encouragement; I had to restrain myself from going over to pat it. But what I remember about that moment above all is a genuine sense of awe. It was my first encounter with robotic technology in any meaningful way, and I was deeply impressed. Now, much has moved on since that first encounter, but I have never ceased to be in awe of AI and its potential for human thriving and well-being. I was on the island of Lampedusa five days ago, to get a better understanding of what was happening with the huge arrival of asylum seekers that's a live issue right now. The situation is dreadful; I'm not going to pretend otherwise. But the Italian authorities and civil society were using tech to try to put some order on the chaos, and it was deeply impressive. The way AI-driven technology was helping them, to some extent, to cope with what is an impossible situation was again AI for good, something I found awesome. But of course, given where I work, at the EU Fundamental Rights Agency, I'm no less aware, through the work of the agency, of the risks that AI poses for us and for our societies. Let me give you examples of five contexts where AI can get it badly wrong. The first is the well-known one of discrimination. AI hoovers up every fact, every datum in our world, with all of the discriminations, the hatreds, the biases to be found in that data.
And this is well known; there's nothing new to say about it now, other than that, beyond the biases in data, there's also the astonishing extent to which data is simply mistaken. We've researched this in our agency, for instance looking at the large-scale databases in the migration context, where the level of error is truly shocking. Really consequential stuff, like recording the age of an adult for a child, with all of the consequences for the child of being registered as an adult. So it's about bias, and it's about mistake. And it's also about something very specific to tech, and that is the role of feedback loops: the extent to which feedback loops can enlarge error and mistake over time and practice. We've looked at that in the context of automated online content moderation, and we've seen how a piece of technology that begins benign and does its job relatively well can learn error and then expand the error, with some pretty remarkable consequences. Just to give you an example, we recently did research on automated online content moderation, where we developed algorithms and then tested language to see what would happen. Again, as is well known to everybody, in lesser-known languages it was a disaster, but that's well known. In English, we put in the term "I hate Jews", and the online tech did its job: it flagged this as problematic speech, exactly what it was intended to do. But then my colleagues put in the words "I hate Jews love", and the stupid machine passed over the term. It didn't flag it as problematic, because of the power of the word "love", and of the associations of the word "love", which somehow outweighed the "I hate" part of the phrase. So again, an example of something rather specific to the online sphere in terms of how error can multiply. That's the first worrying area: discrimination and all that's related.
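The failure mode described here can be sketched with a toy model. This is purely illustrative, and assumes nothing about the systems the agency actually tested: the word weights, the threshold, and the linear bag-of-words scoring are all invented for the example. It shows how one strongly "positive" token can pull a hateful phrase back under a flagging threshold.

```python
# Toy bag-of-words moderation model (illustrative only, not FRA's experiment).
# Each word carries a hypothetical learned weight: negative = toxic signal,
# positive = benign signal. The phrase score is the sum of its word weights.
WEIGHTS = {"hate": -2.0, "jews": -0.5, "love": 3.0, "i": 0.0}

FLAG_THRESHOLD = -1.0  # scores at or below this are flagged as problematic


def score(text: str) -> float:
    """Sum the per-word weights of a lowercased, whitespace-split text."""
    return sum(WEIGHTS.get(word, 0.0) for word in text.lower().split())


def is_flagged(text: str) -> bool:
    return score(text) <= FLAG_THRESHOLD


print(is_flagged("I hate Jews"))       # True  - correctly flagged (-2.5)
print(is_flagged("I hate Jews love"))  # False - "love" (+3.0) masks the hate
```

The point of the sketch is that in any additive scoring scheme, a single strongly weighted benign token can cancel out a toxic one, which is one plausible mechanism for the behaviour the speaker describes.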
The second has to do with who dominates the tech world: the private sector. There's nothing inherently wrong with the private sector owning technology, but there is something inherently worrying when that ownership concerns something so profoundly impacting our lives, in a context where, again, we know through our research what the primary drivers are for much of the private sector in the development and advancement of a technology with such a huge impact on every single person here today. One important driver is efficiency. We concluded, again through research, that the most important motivation for investment in the technology is to do things quicker and more efficiently. Again, nothing wrong with that, but if you think of what it could come at the expense of, then we see a worry. Another driver, no surprise, is profit, and again that can be of some worry in the context of the impact on our lives. And then a third driver, among a few others I suppose, is the idea the private owner of technology may have of establishing some idiosyncratic world goal. I don't need to give examples; they're pretty replete right now. I liked Fintan O'Toole's reference this morning in the Irish Times to a certain man-child who comes to mind when you think of the use of tech to follow a very personal vision of what the world should look like. So the second concern is the role of the private sector. The third concern is the exact converse, and that's the extent to which AI enhances the power of the state. Again, not inherently problematic, at least not if you're in a state that respects democracy. But again, I don't need to give examples; it's perfectly obvious how tech in the wrong state hands can be a tool for repression and oppression.
The fourth of the five concerns I would like to illustrate my worries with is the somewhat more apocalyptic one of the transfer, the outsourcing, of decision-making to artificial intelligence. I won't give very many examples here, but the obvious one, well rehearsed and not unfamiliar, is autonomous weapon systems, which can, do, and should strike fear into the heart of anybody who's concerned about the well-being of our world. The last of these illustrative examples of why we should be worried is something a little bit harder to pin down; it's broad, but it's the erosion that we've come to understand over time, the erosion through the application of AI of our social fabric, the degradation of the human community, in the sense that so often today, and far more likely in the future, we're dealing with a machine, not with a person. So often, what is presented to us as our preference has been decided by a machine, not by a person, certainly not by me. And at the same time, psychologists and others are speaking of the risk to mental health of this phenomenon of the automation of life. So concerns such as these inevitably lead us very quickly to the question of how we tame technology. We all accept that we need to tame this awesome power that has been developed. What should that look like, so that the technology is in the service of human well-being? There are a few frames of reference for how we begin a discussion of how we tame tech, but two of the most prominent are the invocation of the language of ethics on the one hand, and the language of human rights on the other. Now, it's very good that this is the starting point for most. Look at Ireland's AI strategy: it indeed locates the reflection on the future of AI in the context of the application of ethics and respect for human rights. This is all very welcome. But there are some concerns in there.
And those of us who work in human rights have been, maybe not taken aback, but disappointed by the extent to which the ethical discourse has until now dominated, and I would argue still dominates, as if somehow the ethics and the rights approaches were in contest and we must fight our corner so that we dominate; and to some extent ethics has been the more successful. I don't have time to go into why, so just very briefly: I can't help but think that ethics is an inherently subjective area, where my sense of right and good does not have to be the same as your sense of right and good, and therefore using ethics to frame the taming of technology gives us a tool which can be steered in certain directions to achieve certain outcomes. This is not to diminish the importance of ethics; it's more to understand why it has dominated the discourse. Turning to the other frame of reference, human rights, here we see something rather different: a far more, if I may put it like this, sturdy infrastructure on which to base standards and practice. I'll give you some examples of that in just a second, but my concern, as you'd imagine of somebody working for an agency for fundamental rights, is to ensure that the human rights frame is put at the center, not to displace ethics, because it's not a competition, but put at the center to help us figure out an appropriate and useful way forward. When we do that, what we're actually seeking to do is to take the rhetorical reference to human rights that you will find in almost every AI strategy you can read, and turn that rhetoric into a reality. What would that look like in practice? But before I get to the reality of what a human rights approach would look like in practice, allow me just a brief word on human rights more generally. We celebrate this year the 75th anniversary of the adoption of the Universal Declaration of Human Rights.
It was the best effort by humanity, coming out of the horrors of the Second World War, to define the minimum standards for a society where we could thrive and mutually respect each other. The Universal Declaration has been repeatedly reaffirmed universally. This year we're also celebrating the 30th anniversary of the Vienna World Conference on Human Rights, which was a solemn rededication of every country on earth to the Universal Declaration of Human Rights and what it stands for. This is the reality of an instrument that was not just stated to be universal from the outset, but was universally negotiated, emerging from complex global negotiations that reflected the world's different ways of thinking. A very subtle and sophisticated system has derived from the Universal Declaration and all the treaties that followed. Notwithstanding popular misconceptions, it's rarely about absolutes; it's actually pretty insightful in the way it allows rights to be limited in the interest of the public good. We saw that, sometimes for good and sometimes maybe a bit too enthusiastically, in the context of COVID, but that period neatly illustrates the extent to which the human rights system accommodates extraordinary crises and issues and, in the public good, allows for the restriction of rights. So it's a subtle system. It's also well supported, nationally and internationally, by courts and oversight systems; it could be better, but it's well supported. The Universal Declaration is incorporated into the domestic law of many countries around the world, and therefore national courts uphold it; through the various systems developed since then, in the European Union and elsewhere, the Universal Declaration is relied on and invoked. These rights play an important part in the Irish courts, and of course internationally.
We have the European Court of Human Rights; we have what you could, to some extent, describe as a human rights court, the International Criminal Court; and we have the myriad monitoring bodies of the different organizations. In this system, these human rights standards, and this is immediately relevant in the context of AI, to use the phrase often invoked in the UN, apply as much online as offline. So there's no jurisdictional dispute about their application in this context. And very importantly, as I said earlier, the system is binding on states; unlike ethics, it's not a voluntary, take-it-or-leave-it buffet: I'll take that, I'll leave that. And it has as its goal, and this is its beauty and its power, human well-being. Article 1 of the Universal Declaration describes human rights as being about delivering a world in which all people are free and equal in dignity and in rights. So there's this astonishing achievement of our societies, sometimes described as modernity's greatest achievement. And the question arises of why it has been so peripheral to the discussion about the restraining, the taming, of artificial intelligence. There are many reasons for this; I've alluded to some already. But one that's very important, and has preoccupied my agency for the last seven years, maybe eight, is that we have failed to show, in concrete, drilled-down ways, how the human rights standards and systems apply in the AI context. We've been great on the rhetoric; we have not been so good on the drilled-down "this is what it would look like in practice". And so, drawing on the work of my agency, I'd like to leave you today, briefly I hope, with seven elements of what that drilled-down look would be in the specific context of the now of AI. And by that I'm referring to this regulation-building moment: we're in the law-writing moment for artificial intelligence. It's very exciting, if we get it right.
It's crucial that we get it right, but we could get it wrong. And when I talk about the law-building moment, I'm referring in particular to the development in the EU of the AI regulation, the AI Act, which is still a draft, still an incomplete process, and to the less developed but ongoing process at the Council of Europe of developing an international treaty on artificial intelligence. So what are seven of the key things that the drafters of all such laws must keep in mind? I suggest these to you. The first is that we have to make sure that our laws are comprehensive, that we develop loophole-free regulation. What would that look like in practice? Well, in the first place it means we've got to get a broad definition of AI. We can so reduce the definition that we skip out on loads of practical applications. There's the risk that we define it so narrowly as to exclude such things as the databases on our borders: because they're very basic AI, they could be missed if we go for an over-sophisticated definition. We also have to make sure that our regulations apply equally to the private and public sectors. Again, a given you might say, but in some contexts there is some pressure to exclude the private sector: to lock in regulation on the state, but leave a liberality for the private sector. And when we lock in the private sector, we have to lock in all of the private sector. Again, in current discussions in some places, there's an argument that we should only do heavy regulation of big tech. But the analogy I use is house-building regulations: wouldn't it be a funny world where little houses didn't have to comply with regulations but big houses did? The roof is as likely to fall on you in a little house as in a big one. Another concern has to be ensuring that all of the impacts on human well-being that we can identify are somehow captured by the regulation, converted into human rights terms.
This means that regulation must embrace standing up for all of your human rights, not just certain of your rights. Most of the discussion until now has been about protecting your privacy rights, and this is completely normal, because it's all about data, and the first thing that comes to mind with regard to our data is privacy. So yes, we need a focus on privacy, but it goes so much further. Look, for example, at where the scandals have emerged in recent years around AI. I think of the social welfare scandal in the Netherlands a couple of years ago, where thousands of people were ordered to repay large sums of money to the state that they had allegedly been erroneously paid. The bias was massively against people from ethnic minorities, and the whole thing was based on an erroneous application of an algorithm. So again, every aspect of our life can be engaged. That, in broad terms and with quite a lot of elements, was the first requirement: the need for loophole-free regulation. The second thing that's essential, if we're to meaningfully protect our human rights in tech law, is that this law provide for human rights compatibility testing for high-risk applications. It's imperative that where an application poses a high risk to human well-being, it be tested so that we can understand what the risk is and manage it. Now, there are two very important considerations here. One is that, with the explosion of general-application AI, we're reminded that testing of AI must be use-case based. It's not enough to test the application the day it leaves the factory, regardless of how it's used. We have got to test it in the use context, because only there will we see the risk to human well-being. The second dimension, and again we know this from our research, is that because of this phenomenon of feedback loops, and the manner in which mistakes can multiply and grow over time in the application of technology, testing needs to be repeated.
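The feedback-loop worry behind repeated testing can be made concrete with a minimal simulation. Everything here is invented for illustration (the scores, the update rule, the population of items): a moderation threshold starts well placed, is then "retrained" on content the system itself previously approved, and the measured error rate grows round by round. A single test on day one would have reported zero error.

```python
# Minimal, hypothetical feedback-loop simulation: the model's own outputs
# are fed back in as training signal, so its error grows over time.

def retrain(threshold: float, passed_scores: list[float]) -> float:
    """New threshold = mean score of items the model previously let through."""
    if not passed_scores:
        return threshold  # nothing passed; keep the old boundary
    return sum(passed_scores) / len(passed_scores)


def error_rate(threshold: float, items: list[tuple[float, bool]]) -> float:
    """items: (toxicity score, truly harmful?); flag when score >= threshold."""
    wrong = sum(1 for s, harmful in items if (s >= threshold) != harmful)
    return wrong / len(items)


# Fixed population: harmful content scores high, benign content scores low.
items = [(0.9, True), (0.8, True), (0.6, True),
         (0.4, False), (0.3, False), (0.1, False)]

threshold = 0.5  # initial, well-placed decision boundary: zero error
errors = [error_rate(threshold, items)]
for _ in range(3):  # each round retrains on self-approved content
    passed = [s for s, _ in items if s < threshold]
    threshold = retrain(threshold, passed)
    errors.append(error_rate(threshold, items))

print(errors)  # error rate never improves and worsens across rounds
```

With these numbers the error rate goes from 0 to 1/3 to 1/2 as the boundary drifts, which is the sketched version of "a technology that begins benign can learn error and expand it", and of why a one-off test is not enough.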
It's not enough to get into your use context, test once, and assume all will be well forever; the science has now emerged to show that that is not guaranteed. The third of my seven elements has to do with the need for strong oversight. Again, it might seem obvious, but it's very important that attention be paid to ensuring that the systems in place to oversee the regulation are adequate to the job. They need to have the skills, they need to have the resources. If they're protecting human rights, we need human rights specialists working within those systems, not just privacy people but all human rights. We also need oversight at a scale matching the scope of the challenge. I get a sense, as I travel around Europe, that it hasn't dawned yet on the designers of systems quite how broad and demanding the oversight that will have to be put in place is going to be. The fourth of my seven is with regard to a fundamental principle of human rights: that every violation should come with a remedy. We therefore need to make sure, whether in the regulations we design to tame AI or in separate legislation, which is the EU model right now, that there is a pathway to a remedy for somebody whose human dignity has been violated by an application of the technology. And then the fifth, which could have been my first and could have been my last, because it's so absolutely central to the delivery of all of the other dimensions, is ensuring transparency. It is vital for proper oversight and monitoring of technology that there is transparency as to the contents of the technology; only then can there be effective oversight. Now, as you can imagine, this demand for transparency is met with a lot of resistance. I'll give you two examples. One is: "It's just not possible. We don't know how the tech reaches that good outcome. Don't touch it." I've heard that many times.
I've heard it at a medical conference, from a research medical doctor who made more or less exactly the remark I've just quoted. And we would argue that that's just not good enough. We recognize that there may be huge complexity in terms of the effective delivery of transparency, but at a minimum, in the context of tech where we don't know how it works, what on earth is to stop us demanding that you, the designer of the tech, describe what you do, tell us how you've tested your technology, and show us what data you've entered into it? At that point we're already a long way towards what we need for oversight. The second, even less convincing, argument against transparency is: "It's a secret, we can't tell you." And I'm not referring here to commercial secrets; I'm referring, for example, to national security secrets. Here again it's really a false argument, because in many other sectors, over generations, we have found ways to put in place effective oversight of highly sensitive contexts in a manner that does not compromise secrecy, does not compromise confidentiality. I think, for example, of the way in which we have designed judicial oversight of national security systems. So, by analogy, we have plenty of examples to counter that objection. The sixth of my seven considerations is with regard to the need for continuous dialogue. Dialogue is not just a good; it's a necessity as we continue to work our way forward in this whole new world. We need everybody on board to figure out the right way to go. And so the design of the regulations, the rollout of the regulations, the application of them, and their future amendment must all be on the basis of a rich, living dialogue across all of the relevant stakeholders. There are many of them, but let me today focus on civil society.
By the way, as a general observation, I've yet to find a single human rights innovation that didn't begin with civil society. But coming back more narrowly to the AI context: we would be lost today if it weren't for the shout-outs, the warnings, and the advocacy of civil society so far. It has done an astonishing job in educating us all, including people like me, as to the scale of risk and the need for high attention. So involving civil society, for its advocacy function but also its expertise, is critical. There are many other civil society actors, but let me just mention one cluster that I think is neglected: national human rights authorities, or in the jargon, national human rights institutions. Here in Ireland, that would be the Irish Human Rights and Equality Commission. These bodies everywhere need to be involved as well. They are the centers of human rights expertise, unique centers of human rights expertise in our societies, and they too have to be part of the conversation. And just before I move off this point: I've had much personal experience of the fact that dialogue can be very difficult. Quite simply, we often all speak different languages, and we don't understand each other. I learned this in a different context years ago, trying to talk to economists: they didn't understand me and I didn't understand them. But I think it's even more challenging in the context of technology. Why should a tech engineer be expected to understand my human rights language? Why should I be expected to understand his or hers? But we have to try. At least we have to find a common vocabulary in which to engage with each other. The seventh and final of my considerations is not about something we must do, but something we must challenge. And that is the very frequently invoked argument, after somebody like me gives a talk like the one I've just given:
"Oh, that's all great, fine and well, but it's going to stifle innovation. You're going to cut off innovation, and China and all these other places are going to leap well ahead of us." Well, you know, that's easily said, but I challenge the proof of it. We're not convinced at all. We are able to ensure attention to human rights while still deeply respecting the need for innovation in our societies, in our business world, wherever else it may be. And here are some of the ways we can do it. One is the way we are currently developing regulation in Europe, using a risk pyramid model. To express it very briefly: the pyramid has the most risky stuff in the very top band; then a very big band of high-risk applications, proposed to be subject to quite tight regulation; and then a vast space at the bottom of the pyramid with benign AI applications, the famous talking-fridge sort of stuff, in which nobody can find huge risk and which would be subject to very minimal human rights oversight. A second dimension of how we can make sure that innovation is not unduly restrained is by doing sandboxing of the interplay of AI and human rights, never compromising human rights, because they shouldn't be in play, they shouldn't be up for negotiation, but just seeing how you can actually do the fixes. And I very much welcome that the current Spanish Presidency of the European Union is heavily promoting the concept of ethical or normative sandbox exercises. But again, as I say, such exercises, while very welcome, must never be at the expense of the standards; the standards are not up for negotiation.
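The risk-pyramid idea can be sketched as a simple tier lookup: the regulatory burden scales with the tier an application falls into, so low-risk uses carry almost no burden and innovation there is untouched. The tier names, example use cases, and obligation lists below are invented for illustration; they paraphrase the structure of the draft EU AI Act rather than quoting it.

```python
# Hedged sketch of a risk-pyramid tiering scheme (illustrative categories,
# not the text of the draft EU AI Act). Obligations scale with the tier.

RISK_TIERS = {
    "top": {"social scoring by governments"},            # the riskiest band
    "high": {"border database", "welfare fraud scoring",  # tight regulation
             "cv screening"},
    "minimal": {"talking fridge", "spam filter"},         # vast benign base
}

OBLIGATIONS = {
    "top": ["prohibited or most tightly restricted"],
    "high": ["use-case testing", "repeated testing",
             "human rights oversight", "transparency", "remedy pathway"],
    "minimal": ["very minimal human rights oversight"],
}


def obligations_for(use_case: str) -> list[str]:
    """Return the obligation list for a use case; default to the bottom band."""
    for tier, cases in RISK_TIERS.items():
        if use_case in cases:
            return OBLIGATIONS[tier]
    return OBLIGATIONS["minimal"]


print(obligations_for("talking fridge"))   # the famous benign case
print(obligations_for("border database"))  # high-risk: full obligation set
```

The design point the sketch illustrates is that regulation concentrates at the narrow top of the pyramid, leaving the wide base of benign applications almost entirely free, which is the speaker's answer to the "stifling innovation" objection.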
And by the way, I would say also, as my final response to those who raise the innovation argument: you might want to think of the trust argument. There's no doubt, and I've yet to see anybody convincingly push back against the view, that a strongly human-rights-respecting AI, ultimately targeted at human thriving, is going to be the most trustworthy AI: trusted by consumers, by citizens, by everybody in our societies. And I'd be firmly of the view that, in the long game, it's the trustworthy AI that will ultimately win out. So, friends, before you put your questions, or your observations (observations are easier), I'm sure some of you, here or online, think that I'm naive and unrealistic. I accept that I could indeed look like that and sound like that, but I feel that I have no choice. The game is just too serious. AI is profoundly impactful for human thriving, and there is no better shared pathway to respect humanity than human rights. Or let me finish up by putting it another way. I was in Brussels not so long ago, and I had a bit of free time, so I went to the Museum of Fine Arts. I wanted to revisit one of my favorite pictures in the world: Pieter Bruegel's Landscape with the Fall of Icarus. Some of you might know it. It's a tiny picture, not much bigger than the one up there of the Bank of Ireland. It's famous for the fact that it's full of shepherds shepherding their sheep, fishermen pulling fish out of the sea, people ploughing the land, and there's a turquoise sea in the background. And it's only after you study it for ages that you see these two little legs dangling where somebody has just plunged into the water. The first time I saw it I laughed out loud; I thought it was so funny. It took me ages to find Icarus: poor Icarus, who landed in the sea and, you know, would drown.
But I thought of it for today's purpose for a different reason, which is why Icarus fell into the sea. You all know the story: Daedalus, his father, made him wings, bound the feathers together with wax, and sent him off up. And in a great act of hubris and self-confidence, Icarus flew too close to the sun, the wax melted, and he plunged into the sea and drowned. And it occurs to me: why did this all happen to Icarus? It wasn't just hubris; he had the wrong flight plan. And that brings me back to AI, because I would suggest that if we substitute AI for Icarus, and we give it the flight plan of human rights, then I believe that AI can indeed soar up to the sun and bring all of us with it. Thank you.