You're all very welcome, everybody, here to the O'Reilly Institute. We're delighted to see so many of you here today. My name is Joyce O'Connor and I chair the Digital Futures group of the Institute of International and European Affairs, and we're very pleased to be associated with today's event. As you know, this is an amazing week, I think, for AI, for Dublin and for Ireland. The National Standards Authority of Ireland, in collaboration with the ADAPT Centre here in Dublin at Trinity, is hosting the third plenary meeting of, and it's a long title, I'll give it to you just once, ISO/IEC JTC 1/SC 42, Artificial Intelligence, so we're going to keep it to SC 42 after that. This is the third plenary meeting; the others have been held in China and in America, so it's tremendous to have this event here, and congratulations to the NSAI and to the ADAPT Centre for making this happen, because today and during the week we have a lot of experts, not only our own Irish experts but international AI experts, which is a great tribute and also offers a showcase for what's happening in Ireland, in Dublin and in the regions. I'd like particularly to congratulate the NSAI: is Egerlyn Larkin down there? And Mary White, and Terry Landers. Now, Terry has several hats, I know: besides being involved with Microsoft, he has been very active as chair of the NSAI committee on ICT standards, and he also chairs this international standards group. Thanks also to Professor Dave Lewis for asking us to get involved in this event; we're really very pleased. And I'd like to thank our own team here, led by Jill, together with Eoin Larkin and Finola, who have worked to make this possible.
It was very interesting yesterday morning, and I'd better get this right: the Technology Ireland ICT Skillnet held a really very interesting breakfast seminar under the title "Artificial Intelligence and Innovation". Professor Lewis spoke, together with the chair of SC 42, Wael Diab, on that theme, and in fact all of the other speakers, including representatives of Irish business with really interesting case studies of what's happening with artificial intelligence, every one of them, one way or another, spoke about ethics and trustworthiness. So in many ways today is very timely (I'm sure it was done with planning). It's timely as well because the EU's High-Level Expert Group has just published its report on artificial intelligence, its ethics guidelines for trustworthy AI, co-chaired by Professor Barry O'Sullivan, who's from Cork, so it's very good to see that. I know Minister Breen and the representatives here from the Department have been actively working with Europe in this area, and we'll be looking at seeing an AI strategy shortly; the Minister has talked about that, so that's very, very welcome.
I think today we're particularly lucky to have Peter Brown here. Peter has had a stellar and very distinguished career; I'm not going to give you his full CV, otherwise we'd probably be here for the next half an hour or so. He has recently been appointed editor of a new ISO standard looking at the governance implications of the use of artificial intelligence, and that's what he's doing now. Besides that, he has worked in public service in the European Parliament, and in the private sector all over the world, with several Fortune 100 companies and international organisations, on technology strategy. He has also worked on ISO standardisation in information technologies, and served as president of the Identity Ecosystem Steering Group, which was really important because it was established to deliver President Obama's National Strategy for Trusted Identities in Cyberspace, and I know a number of people here are particularly interested in that. He has served as president of the board and director of the global open standards consortium OASIS, so he's very familiar with this area and has a lot of practical experience. Today he's going to be speaking on ethics and governance issues in artificial intelligence, but I must say he has an intriguing title: "An AI Walks into a Bar". Peter, we await the rest of the story. Thank you very much. Thanks for the introduction, and thanks for the invitation; it's a pleasure to be here.
I should preface my comments with a little confession, which is that I arrived in Dublin this week and wasn't feeling very well for the first couple of days. Being sick in a hotel room in a foreign country is one of those miserable experiences you can have, and sitting there in my misery I was contrasting it with the experience of thirty years ago, when you were simply stuck in a foreign country. Today at least you can get your phone out, find out where the nearest pharmacy is, chat with people at home, talk to your doctor, even talk to a robot doctor if you're lucky. It's fascinating that this little device in your hand can communicate with that broader world and give you lots of feedback. It's never fun being ill when you're abroad; I think the only thing worse is the inline advert you get while going through your news feed saying "Peter, cheapest rates on life insurance in Dublin, Ireland", which then raises a question about where that came from. I'm going to return to that, because I think there's an issue there for the sort of discussion we're going to have.

No apologies for the slides: I'm not going to do death by PowerPoint, just so you're clear, but I wanted to set the scene with a few educational, course-101 slides, touching the top level of my understanding of where the issues of governance arise. By governance I mean the top-level organisational responsibility for an organisation's strategy, business, mission, vision, values, ethics and everything else. As one colleague put it to me when I was on a board of governors: you're the one that goes to jail if things go wrong; for anybody else it's optional, but you're the one on the line. So your job as a governor, as a member of a board, is to ask questions, to probe, to understand how things are, but at a level which has an impact on your organisation.

Given that AI is so deeply embedded in broader areas, I wanted to do a quick run through the digital world. This is the classic model, which goes back many decades. You've got what goes on in the real world; you capture some intention, like measuring the temperature in a room; you capture it as data; the data goes in; you've got technology processing that data based on models and processes that you have about the real world; some data comes out; and the outcome is that the thermostat will trigger putting the heating on or the air conditioning on. It's a very simplistic model, but most IT systems are modelled on that approach.

Looking at the broad governance of data, even that simple model throws up questions. You're intending to capture the temperature in a room: is your model correct? Are you doing things the right way? Are you capturing enough information? The data that's coming in: where is it coming from? On what authority? Who owns it? What's its value? What about quality, topology, security? All the way around you've got questions about the suitability of the data, the suitability of your processes, the technology you're using, the reliability of what comes out, right down to the final bottom-line impact which most private-sector bodies are interested in: was all the effort worth it? Did we actually gain something from modelling and processing things this way?

So we've already got that high-level consideration of the governance of data. Where we are today is more complex. We've still got the real world, thankfully, and we're desperately holding on to it, but the interaction between the real world and the data coming into our systems is now intermediated by a whole host of interfaces. I mentioned being on my cell
phone. A cell phone has a microphone, a camera, a keyboard, movement sensors, an accelerometer; it can pick up lots of data from the real world even if I don't want it to, and even when I'm not aware of it. So from a simple cell phone up to more complex systems, we're getting lots of data into a system-level view, extracted from experience of the real world through those interfaces.

Then there are the technologies that are developing, and this is where we start to look more at AI. That should really be plural, AI technologies, and we've had discussions this week about what the term should be; "AI systems" is, I think, the politically correct term of the week. These systems take the data and learn what to do with it, based on models and based on training data. And there's an explosion coming. We think it's complex today, but consider one projection, that by 2030 something like 85% of all manufactured goods will contain connectable components, Internet of Things devices. We already have it with washing machines, bridges, smart devices in the home and in industry, but in the future, given the complexity of the supply chain of many goods, a lot of IoT-enabled devices will be embedded by default, whether they're needed or not, because it's just simpler to do it that way.

So we're in a situation where more and more interfaces to the real world are being provided, and the growth of AI is hopefully fostering new and better ways to process these newer, much richer sources of data and provide results that can be used in the real world. That in turn throws up lots of new issues, both in terms of what happens with the data coming out, going through various interfaces back to the real world, and in terms of outcomes hopefully providing more value to society. And that again throws up complex governance issues. Many of these appeared on the first slide, so many of the governance issues around AI technologies and IoT are not new. I think many boards today, and certainly boards that I've talked to, feel a little intimidated: we don't know anything about AI, we don't know anything about these technologies, it's too much for us to cope with. But if you break it down to the essentials of what a governance mechanism needs, what a board needs, a lot of the issues are the same. They may have new implications and new consequences, but a lot of them are the same.

And this is where the AI walks into a bar. One of the fascinating things about AI is that it's a catch-all phrase for a whole host of technologies, the most common being the so-called machine learning technologies, where you've got a set of algorithms receiving data the whole time, testing against data from the real world and making predictions, and hopefully making better and more accurate predictions as time goes by, based on the data coming in. So the AI walks into a bar. The bartender says, "Well, a good evening to you, what'll you be having?" The AI looks around: "What's everyone else having?" AI and machine learning may be very good at making a projection based on existing or past behaviour; the risk, from a governance perspective, is how good it is at actually predicting future events. When you take away the magic sauce and the pixie dust and the marketing from a lot of the promotions made by companies pushing AI, and particularly machine learning, I think it's important that organisations address this issue: OK, the models show you've had a hundred per cent success rate, but on test data that you fed it, where you already said what the result should be, so of course the result is a hundred per cent. So you've got to be
careful about how test data is mixed with the real world, and how you supervise (and I use the word advisedly) the process of acquiring new data and adapting algorithms that a machine learning system will carry out. The issue, from a governance perspective, is where you draw the line in the high-level, light-touch direction you give, whether to your own technology service developing AI, or to the services you buy in through your corporate purchasing capacity. These are the sorts of questions that a governance body needs to think about at a high level. They don't need to know the details of how the algorithms work; many of the AI people themselves aren't always clear about that, and that's part of the nature and the scariness of the beast. The job of a governing body is no different from what it has ever been: probing, asking the right questions, and ultimately staying out of jail, but above all identifying the types of questions that need to be asked in order to address your organisation's needs.

When it comes to governance of AI, a term that has been thrown around for a couple of years, the conclusion we reached in our own working group, which is developing the standard on this subject, was that we should be careful about the terms we use. In the standards world, and Terry can vouch for this, you can spend a morning discussing the implications of the word "can" or "use" or "decision", because in some standards that could be a life-or-death issue, if it's a critical piece of a standard used in critical infrastructure. Words are important and terminology is important. The reason I mention that is because, in the area of governance, you need to understand enough of the concepts around the technology to be able to understand the implications for you as an organisation.

Again, some of the questions that come up around AI more specifically: my AI walking into the bar obviously knows to ask the right question and get some feedback, but that's not really an indicator of what it might actually want to drink. So there are questions about the suitability of the algorithms, the models, the checks and balances. And at the other end of the scale, once you've done all this processing and used the AI to develop your system: what are you going to do with the output? How reliable is the data? Can you understand the proposed decision? Is it understandable in itself? Is it suitable for what you want to do? Is it reliable? Another obvious issue for boards is risk: mitigation of risk, liability, failover, evaluation of any critical situation. Again, I don't think any of these issues have changed; the scaling implications have.

What I want to look at in the last part of this talk is a couple of the issues that arise as a result. Risk is now everywhere. Take my few minutes on my phone and getting weirded out: the models of the apps interacting with me, are they capturing my real intention? If I say "I'm dying" in a pathetic, man-flu sort of response to my wife, well, OK, I'm not, but it's enough of a trigger to monetise an advert somewhere, as I discovered to my peril. So is the data right? There can be problems with quality when the data is actually fed into the technology. The exclamation marks on the slide indicate pain points in that whole cycle of how technology takes data from the real world and pushes it towards actual real-world outcomes. And the biggest challenge for AI, in my opinion, and this is where I'm going to
hopefully scare you a little bit, is the combination of artificial intelligence with cybersecurity. Now, "cybersecurity" is one of those nice marketing buzzwords; it's basically the security of information systems that happen to be connected to the real world by the internet, which is pretty much everything. Interestingly, I've heard recent stories, I don't know how reliable, about a number of power stations talking about going back to manual failover switches, physical switches, to take things completely offline in the case of attack, because they cannot afford even a low-risk scenario: they've got to have a no-risk fallback position. It might be a 0.01 per cent risk that the nuclear reactor goes critical, but it's not a risk you want to take, so you want to be able to take it physically offline. But aside from those small, analogue, real-world situations, we're in an increasingly complex, interconnected world, and cybersecurity risks and vulnerabilities appear at every single one of those touch points.

There can be interference with the data coming in. You can intercept sensors and tell the building maintenance system that the room is freezing cold, turning up all the HVAC systems to the point where things catch fire. These are all real cases. You can interfere with a sensor on an internet-connected printer in such a way that you force it to print at a higher speed and higher intensity, to the extent that the printer will actually catch fire; and normally, next to these network printers, you have a great big stock of nice flammable material called packs of A4 paper. There have been studies on what happens in an office block with maybe twenty network printers over ten floors; I'll leave you to join the dots. And that's just a cyber attack interfering with the data coming in, right at the beginning, while the rest of the system is still working and functioning as per spec. It goes right through to interfering with the data being processed: what happens if someone starts feeding fake news, fake data, into the AI system? How do you deal with that? What if your models and your training data have been compromised? And it goes all the way through to intercepting and manipulating data right at the end.

I don't want to characterise it in military terms as a war, a battle between the good and the evil, but there is clearly work underway on the good side to use AI for good, to identify vulnerabilities in systems and to identify where attacks are under way. But equally there are online services; we can talk about it more in the Q&A maybe, but there was a recent Europol presentation of a cybersecurity attack managed by AI, offered as a service available online: just pay via blockchain to this address, and we will go in and silently delete, modify and corrupt data on a public-facing website, so that when people visit it, even the site managers aren't aware that pages have been changed. And we're just coming up to the European elections and a number of national elections, so I'll just put that out there: it's a scary world.

The scarier thing for me is a thing called, and Terry, you can probably help me if I get it wrong here, I've forgotten the name of the damn thing now, it'll come to me. There's a website called thispersondoesnotexist.com, and you might want to have a look at it. It's a public-facing spin-off
from a research project which is basically designed to use AI to test the validity of current models and current AI systems. You have a set of training data and a system under test, and then you have an additional system which provides fake or real data. In the case of this particular website, the clue is there in the name: it generates artificial images of people who do not actually exist. You look at the pictures and you think, that's a person. No, it's a bunch of pixels, generated in real time when you access the site, of something that looks like a person. And the reason it's done is not to deceive us; it's done as a test, to try to deceive the system, the AI system trying to distinguish between fake and real. The fact that that even exists rather blew my mind. If you look at the website and the images, some of them are really amazing, and some go off completely bizarrely, with weird artefacts appearing, half the side of a pair of spectacles, things like that. But the scary thing in the research is that the AI is very good at generating artificial images and less good, to be fair, at distinguishing fake from real images. So we're in a situation where potentially lots of fake material is being generated, where humans are better at spotting the real from the fake, but because of the scale and the volume of material that can be generated, it's beyond human scale to sit and filter through it all. That, for me, is the sort of governance question about AI which keeps me awake at night; I think it did on Monday night, although I wasn't really sleeping very much then anyway. I gave that as one example, and I'll try to remember the name of the thing; it'll actually be more than one word, but I think you've got the idea.

The issue comes up inevitably when we talk about governance of AI and responsibilities in organisations for using AI: what about the AI killer robots? What happens if we start embedding AI into military systems and they go autonomous and start doing bad things? What happens if the robots go crazy in the factory and start damaging property and people and the business? So there are questions about whether AI should be given that degree of autonomy. This has actually been a core issue in our work. We're at a very early stage, obviously, but in the early stages of our discussion on the governance implications of AI, the issue of ethics is there, full square in the middle of the discussion. I want to try to give a fair summary of the debate as it stands at the moment, because I think the jury is out on having AI systems be ethical. I have my own view on that and I'm happy to share it, but I'll try to be a reasonable editor for now.

For me there are three issues. The first is what I call the danger of anthropomorphism. Every time we talk about AI (and technology marketing departments, researchers, newspapers, everyone is as guilty as everybody else), there's a picture of a robot with a human face, a nice little smiley face on the screen or whatever. We're humanising a piece of technology, a combination of silicon, code and electricity. I understand the desire to anthropomorphise: we were actually talking just before about the experiments and trial runs of using robots in support of medical interventions, and how many patients are very happy with this idea, because they feel the robot system has a certain precision and accuracy and isn't going to fall asleep on the job. So on one side there's a certain comfort zone there, and I think that's OK in the area of robotics. But the risk, in the broader area of AI, to
give a human concept to what is essentially running code, code that's updating and learning and processing the whole time, poses dangers from a governance point of view, because you then start rationalising decisions you make based on the idea that this thing has, and this is the second word, agency; that it has some form of autonomy and should be treated as capable of making, and this is the third word, decisions. So we've got anthropomorphising a system, giving it a sense of agency, and, through that agency, deeming it able to make decisions. The challenge we have in our group, on each of those three questions, is to say: do we accept that these are inevitable in any future conception of what AI is, or do we want to draw a line somewhere and say, thus far and no further? Because there will come a point, in discussions at governance level, of "oh well, we can delegate that to the AI". The AI can do the hiring for this post; it's already starting. The AI can manage the distribution of tasks in this call centre; it's already happening. Right up to: we're going to get rid of the whole of middle management, because our organisation is now so bureaucratic and rules-based that no manager has any margin of manoeuvre, which makes it perfect for being taken over by rules-based AI systems. There are some hints here about jobs you should be watching out for, or avoiding, in the future.

My point is that if you go for a very strict, rules-based approach to human activity, those are the areas where automation is possible. Now, automation, for me, is not the same as autonomy. AI systems, if they are rules-based, process and deliver results based on a predefined set of rules. In that sense, in my view, there is no decision-making involved, because there is no judgement, and judgement is essentially a human characteristic. Therefore the issue of decision-making AI systems, if they are purely rules-based, is not an issue. For me, if you're on a governing board and you're faced with such a thing, you say: if what we want is a rules-based system that's going to replace very clear business processes, pieces of work which don't require any human agency or any judgement, then those are places where we can use AI. However, a lot of AI systems are not rules-based; they're based on various conceptual models and different approaches, algorithmic, statistical analysis and others, way beyond my pay grade. And that highlights that the issues we're talking about concern both the human impact, in terms of what could be beneficial for an organisation, and the impact of how those technologies are used.

Where we are on the issue of ethics, and this is my rounding up, is that to talk about the ethics of an AI system is probably the wrong focus. Organisations and individual humans have ethics and ethical behaviour. Companies have codes of conduct; as a civil servant I had a staff code of conduct and staff regulations to follow; as individuals we follow our own moral compass in deciding what's right, what's wrong and how we behave. We all have ethics and ethical behaviour. You could say the mafia has a code of ethics; it's not necessarily one we share, although we nearly have a standard for it, Godfather part one, part two, part three. But it's a code of ethics, and remember that ethics doesn't actually mean being good: ethics is making a judgement about whether something you're going to do is good or not. So it's very difficult, objectively, to say "we want ethical AI, and that means we're going to have AI that does good things". I think that whole debate about the ethics of AI is a bit of a distraction. What we should be focusing on is the implications, for an organisation's own ethical framework and its own mission, vision and values, of the introduction of AI-based technologies or systems into the organisation's work. Because, based on the sorts of threats and opportunities I've shown, there are obviously masses of implications to where AI is brought in, and this throws up questions that governing bodies need to address. I think I'll leave it at that for now.
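The capture-model-act loop described at the start of the talk (real world in, data through a model, outcome back out) can be sketched as the thermostat example the speaker uses. This is a minimal illustration, not any real building system; the 19-21 degree hysteresis band and the readings are invented.

```python
# A minimal sketch of the capture -> model -> act loop: a thermostat with
# a hysteresis band. All temperatures and thresholds are illustrative.

def thermostat_step(temp_c, heating_on, low=19.0, high=21.0):
    """Decide the heater state from one temperature reading.

    The [low, high] band avoids rapid on/off cycling around a single
    set point: inside the band, the current state is kept.
    """
    if temp_c < low:
        return True       # too cold: turn the heating on
    if temp_c > high:
        return False      # too warm: turn the heating off
    return heating_on     # comfortable: keep doing what we're doing

state = False
for reading in [18.2, 19.5, 20.8, 21.4, 20.1]:
    state = thermostat_step(reading, state)
    print(f"{reading:4.1f} C -> heating {'on' if state else 'off'}")
```

Even this toy raises the governance questions from the slide: is the model (the band) correct, is the sensor reading trustworthy, and was the outcome worth the effort.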
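The warning about a "hundred per cent success rate on test data that you fed it" can be made concrete with a toy experiment: a one-nearest-neighbour model scored on the data it memorised always looks perfect, while a held-out split reveals the honest error rate. The synthetic task and the 20% label noise below are assumptions made purely for the sketch.

```python
# Sketch of the train/test trap: a 1-nearest-neighbour classifier scored
# on its own memorised data reports a perfect score; held-out data does not.
import random

def nn_predict(train, x):
    """Return the label of the training point nearest to x (1-NN)."""
    return min(train, key=lambda p: abs(p[0] - x))[1]

def accuracy(train, eval_data):
    """Fraction of (x, label) pairs the 1-NN model gets right."""
    hits = sum(nn_predict(train, x) == y for x, y in eval_data)
    return hits / len(eval_data)

random.seed(0)
# Synthetic task: label is 1 when x > 0.5, but 20% of labels are flipped,
# mimicking noisy real-world data.
data = []
for _ in range(200):
    x = random.random()
    y = int(x > 0.5)
    if random.random() < 0.2:
        y = 1 - y
    data.append((x, y))

train, held_out = data[:150], data[150:]

print("score on memorised data:", accuracy(train, train))    # looks perfect
print("score on held-out data: ", accuracy(train, held_out)) # the honest number
```

The memorised-data score is 100% by construction, which is exactly the board-level question the talk recommends asking: perfect against what, and who supplied the answers?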
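The closing distinction between rules-based automation and genuine judgement can also be sketched: a system that routes work by evaluating predefined rules in order makes no "decision" in the speaker's sense, it only executes rules someone already wrote. The ticket fields, rules and queue names here are hypothetical.

```python
# Sketch of rules-based automation: first matching rule wins, no judgement.
# Each rule pairs a predicate over a ticket with the queue it routes to.
RULES = [
    (lambda t: t["priority"] == "critical", "on-call"),
    (lambda t: t["topic"] == "billing",     "finance"),
    (lambda t: True,                        "general"),   # catch-all fallback
]

def route(ticket):
    """Apply the predefined rules in order; no discretion involved."""
    for predicate, queue in RULES:
        if predicate(ticket):
            return queue

print(route({"priority": "critical", "topic": "billing"}))  # on-call
print(route({"priority": "low", "topic": "billing"}))       # finance
print(route({"priority": "low", "topic": "network"}))       # general
```

Everything this function will ever do is fixed by the rule list, which is why such processes are candidates for automation, whereas systems learned from data, as discussed above, raise the harder agency and decision questions.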