Good morning, everybody. You're very welcome back to the institute; I'm delighted to see so many of you here this morning. I think we've decided it's the first day of winter, judging by the Roséynd Stones just outside. Just before I begin, you know the housekeeping rules: if you could turn off your mobiles; the presentation is on the record, but the Q&A is under the Chatham House Rule.

I'm sure our strategy will come up, Barry. Last April, you may remember, the Commission established a blueprint for a three-pronged approach to artificial intelligence: first, an increase in public and private investment; secondly, preparation for socio-economic change; and thirdly, and I think really importantly, agreement on an appropriate ethical and legal framework. They set up a high-level expert group, a working group, charged with supporting and elaborating this strategy, and Professor Barry O'Sullivan is the vice-chair of that group. For the most part, to date, the EU's AI strategy remains unarticulated beyond these commitments, and it falls largely to this high-level group to establish the policy, to look at these broader commitments, and to complete a coordinated action plan. Barry is going to tell us how they're getting on with that, and we're really very lucky to have Professor O'Sullivan here with us this morning. Not that it was hard for him to get here, but he is so busy; I think we were saying lately that he barely sleeps between all his commitments. He is, as you know, a distinguished academic and researcher. He's the founding director of the SFI-funded Insight Centre for Data Analytics, based in UCC, and a member of the Royal Irish Academy. I'm not going to go through all his accomplishments and awards, but needless to say he has received national awards, which is always good, Irish awards, but also international awards for his work. He has brought an enormous amount of money to UCC in terms of his research, and I'll just give you one example: he is an advisor to the Computational Sustainability Network, a network of universities in the USA led by Cornell, Princeton, Stanford, Georgia Tech and many others. So we're really lucky to have such a distinguished person here today, and I know, Barry, your presentation will give us an overview of AI, but also perhaps position us on where we are with the European strategy. So thank you very much for coming. We look forward to your presentation. Thanks.

Well, thank you, Joyce. No pressure then. So thanks, everybody, for coming this morning. Yes, I don't get to sleep very much, and had you phoned me at four o'clock this morning I would still have been working, which is crazy. I shouldn't have told him about that phone number.

So I suppose I should start off by saying that there is a lot of interesting activity going on in AI at the moment from a policy point of view. The European Commission have published a variety of documents that I'll mention during the talk. The High-Level Expert Group is tasked with three things. It's tasked with, one, coordinating a thing called the AI Alliance.
So if you're not a member of the AI Alliance, you can join; Google it. I'll talk about it during the talk, and you can basically articulate your views on how European AI strategy should evolve. Then there are two deliverables. One deliverable is a set of ethical guidelines for artificial intelligence in Europe: how should we help companies to demonstrate that they have considered a variety of ethical issues in developing their products? The European strategy on AI is human-centred; Europe sees itself as developing a human-centred approach to AI, and I'll talk a little bit about what that means as I go through. The other piece that we're involved in is developing a set of recommendations for investment and policy for AI in Europe. The European Commission believes, or rather wishes, that from 2020 there will be investment of 20 billion per annum in artificial intelligence in Europe, which is a crazy amount of money. About one-fifth of that will come from the European Commission; the remainder will come from Member States and from European industry. Science Foundation Ireland are here today, for example, and the Commission will count their investments in centres like ADAPT, Insight and Lero, and any investments they might make going forward in CRTs and so on, as part of that 20 billion. But obviously that only gets us so far, and what the Commission would like the High-Level Expert Group to do is create a set of guidelines and policies that incentivise European industry to invest even more.

So I'm going to talk about this as a tale of many AI strategies, and there are two parts. As Joyce says, there's a brief potted history of AI, just to give you a sense of my view of where AI has come from and where it's going and what the issues are; that then sets up what we're concerned about from an AI point of view in terms of ethics and so on. The next big event on the AI calendar is on December 5th, when the European Commission will publish a coordinated plan on artificial intelligence, which I've seen a confidential version of. This essentially becomes the European strategy for member states if they haven't already got a strategy of their own in AI. Now, of course it's very broad and accommodating, but a number of states, as Joyce says, have already published their own strategies, and I can go on at great length about what is in those things. But when you read them, what you find is that they all say pretty much the same thing: everybody has been busy doing exactly the same piece of work all over Europe, which is what Europe is often very famous for. So, if you want to follow me on Twitter, I'm on this tweet machine thing.

So what's AI? Well, we're in Ireland, so I suppose we should make a claim to being one of the fathers or mothers or whatever of AI. George Boole worked on The Laws of Thought in UCC, then Queen's College Cork. This is his personal copy, which you'll find in the UCC library, and I encourage you to go and have a look at the papers. In that book he attempted to develop the logical basis for reasoning, and it's widely regarded as one of the big pieces of work in artificial intelligence. Of course there are pieces of work that predate that, and I suppose people often think of the Turing test as the start of AI.
So Alan Turing in 1950 came up with this concept of a Turing test, which is basically a thought experiment to some extent, and the AI community don't really consider it a serious test of whether something is demonstrating AI or not. But the only example I have of it being passed is actually on Tinder. There's an artist, a friend of mine in the UK, who has developed Lady Chatterley's Tinderbot, which is a chatbot based on dialogue from Lady Chatterley's Lover. It basically convinces about 70% of men that it's actually a real person, which means that it has passed the Turing test for men. Only about one in 20 women are fooled, and only for a brief moment, into thinking this might actually be something real. But men think it's a really flirtatious individual. So we're safe on that score.

The term AI, of course, was coined in the 1950s by these fellows. This is the Dartmouth workshop on AI that took place in New Hampshire. They believed that essentially AI could be solved as a twelve-week project, and we are still at least twelve weeks away from AI seventy years later. There are a number of reasons for that. One thing that they didn't understand is the concept of computational intractability, which I won't get into because it's a very technical area, but it is basically the idea that some problems are essentially intractable: you cannot find solutions to them in a reasonable amount of time, and by reasonable I mean in time that's sub-exponential. They did not know that such problems existed, and in a sense AI is dogged by these sorts of problems. This basically held up, and still holds up, progress in AI. Some computer scientists call this the P versus NP problem, which, if somebody really wants to know about it, I'll bore you about at great length, but I'll leave it at that.

If you look at textbooks you'll find that AI is very siloed; people tend to work in particular areas. If you were to go to any university in Ireland and tap somebody on the shoulder and ask, are you working in AI, they would say yes, and you would ask, well, what area, and they would probably give you one of these areas. So the field is very, very siloed, and that's to be expected: people focus on very specific aspects. Very few people work on the big problems in AI, on actually building a machine that replicates human thought. This idea of strong AI is the thing that people like Stephen Hawking and others would say is an existential threat to humanity. Essentially nobody is working on it, which is comforting, because Hawking claimed that AI was the last big technology that human beings would ever develop. Fortunately, nobody's actually working on it, so we have nothing to worry about.

But the big thing that's happened in the last 10 or 15 years is the advent of data. Three things have happened in AI. One is the availability of massive amounts of data, which is why we have centres like ADAPT and Insight and others in Ireland, for example, exploiting different aspects of it. You often hear statistics like: in the last year, more data has been created than in all the previous history of humanity, since time immemorial. And that's probably true, but as I always say, Sturgeon's law applies, which is that 95% of everything is crap. So most of the data we have isn't actually particularly useful, but there is a vast amount of it.
The second development is that, obviously, machines have become very, very fast. If you're a software company and you've done nothing for the last ten years, your software product is now a million times faster than it was in 2005, because the software you're using is about a thousand times faster than it was in 2005 and the hardware itself is about a thousand times faster. So just by drinking coffee and staring at your belly button, your technology has become a million times faster. And the third is that, of course, algorithms that were developed in the 60s finally now have an opportunity to work.

But all of this work is basically around a perception problem. The whole deep learning revolution that we hear about is really focused on perception. And while I'm an AI researcher, I'm a deep learning cynic. I own a very cute Labrador, and her ability is basically equivalent to what deep learning systems do. I can train her to recognise a piece of meat from a carrot, and she can do that extremely accurately. But you can sit down with her and ask her, well, why do you think that's meat and why do you think that's carrot, and she's not able to tell you. And guess what? Deep learning systems aren't able to tell you either. So we should be very, very careful about how we consider AI and the progress that has been made, because while massive breakthroughs have been made in terms of applications, we in fact know nothing more about artificial intelligence than we did 50 years ago. So it's a little bit of a misnomer, and we need to be very, very careful about these sorts of things.

So, artificial intelligence is not machine learning, which is what you often hear; machine learning is a subfield of AI, and deep learning, which you also hear about all the time, is a subfield of that again. Just to close the loop in some sense: George Boole is the father of logical AI and Geoff Hinton is the father of deep learning, and there's a family resemblance, I think you'll admit, because George Boole is in fact his great-great-grandfather. So if you're ever in Canada, you can claim to come from the country where Geoff Hinton's great-great-grandfather comes from. But don't make a big deal about that to Geoff, because Geoff believes that Boole got it completely wrong, that AI is essentially pattern matching and is not in any way logical. Although I think in recent times he has actually admitted that that's maybe not true.

Just in terms of progress: in 1997 you might remember Deep Blue, the big IBM chess-playing program. It was a search method. Then people moved on, and a lot of progress in AI has been around games, in fact. In 2007 people started looking at Texas Hold'em and games like it, because they had an interesting dimension to them, which was bluffing. Could you fool a human being? Could an AI system actually learn how to be dishonest in some sense, and be strategic about convincing somebody that it knew or held something that it didn't really know or have? In 2007 AI systems were pretty much on a par with human beings; by 2017 these systems were far superior to human beings at playing Texas Hold'em. Tuomas Sandholm from Carnegie Mellon is the father of Libratus, which is a system for playing Texas Hold'em.
And what Tuomas will tell you if you ask him, though you have to know the question to ask, is that the electricity required to power Libratus for three weeks came to about 50 million dollars. It played against four kids, basically, four 19-, 20- and 21-year-olds, whose noodle in their skull runs on about seven watts, and if you try to read a book under a seven-watt light you don't get very far. But this thing was consuming about 50 million dollars' worth of power, which is more power than a human brain consumes in about 5,000 lifetimes. So this machine does have superhuman capabilities, which it should have, because it has superhuman power to work with.

You've heard of Jeopardy, which is a game that "fell to AI", as the expression goes. But I would caution about that as well, because obviously we don't have the power of the internet in our heads at any moment in time. As my nine-year-old son says when he asks me a question and I give him an answer: well, Dad, do you know that, or do you just Google-know it? And of course the Jeopardy system basically just Google-knows things; it doesn't really know things at all.

Go fell to AI in 2016, and in fact, when you look at the details, it is a tremendous technical achievement, there's no doubt about it, these are fantastic technical achievements, but basically AlphaGo, in order to play against this rather sad-looking person here, Lee Sedol, played every game of Go that has ever been played in history, and more Go than you could possibly play in about 6,000 lifetimes if you did nothing other than play Go. So the damn thing should be pretty good at playing Go. And while it beat him, it didn't beat him as convincingly as you might think, given all of those resources.

So we need to look very cautiously at AI in terms of the progress. You might find it odd that an AI person is saying that, but one of the problems is that if we don't check the hype then as scientists we're not doing the right thing, and there might also be another AI winter as a consequence of over-promising. Of course this has moved into other areas, like biomedical science, for example. You'll often hear things like: radiology is now dominated by AI, cancer diagnosis is dominated by AI. That's technically true in some sense. Self-driving cars: we've seen the evidence on self-driving cars, and lots of issues come up with them, like questions of liability. If a car kills you, who is liable for that? Nobody knows; that's basically the short version. Nobody knows who you get to sue, and nobody knows who is at fault. Self-driving cars also highlight an interesting question, which I was discussing with some medical people yesterday when I was speaking at a health management conference: if a self-driving car kills you, even though it is much safer at driving than a human being, we seem to feel very hard done by. So, interestingly, we hold technology to a higher level of account than we do human beings. At this point it is true that self-driving cars are statistically safer at driving than human beings, but we still want human beings to drive cars, which is kind of curious.

So AI is all over the place; you can't help but encounter it. I was involved in a company called Four Impacts. If you ever walk through the Ilac Centre, you'll see the big advertising pods we've developed; stand close to one of them.
You'll see Four Impacts at the bottom and a camera at the top, and the camera is predicting your gender, your age and your race with extraordinarily high accuracy. It also knows whether you like the thing that you see in front of you, and if we were allowed to do so, all of the advertising displayed in the Ilac Centre would be personalised to you. The mannequin in the centre is an EyeSee mannequin: her eyes are cameras, and using gaze detection she knows your gender, your race and your age with very, very high accuracy, and she also knows whether you like the thing you're looking at on the mannequin. So, for example, you like the shoes, you like the trousers, you like the shirt, but you only buy the shirt; you go to the cash register and take the shirt with you, and the person there gets an opportunity to upsell. And you carry a phone in your pocket, so they know who you are. That raises lots and lots of ethical issues, right?

If you're really seriously interested in where AI is going, I'd encourage you to look at the One Hundred Year Study on AI at Stanford. It's a very long report, but as with everything you can summarise it in two slides. There are technical trends in AI, so vision, reinforcement learning, deep learning, which you hear about all the time, are what they're looking at, and you'll also find that different domains have different focus areas; for example, in health, people are interested in elder care and all these sorts of things.

Moving on, if you look at the current big opportunities in AI, you'll come across the Sustainable Development Goals. There's a unit within the office of the UN Secretary-General, UN Global Pulse, which looks at AI and big data solutions for achieving the Sustainable Development Goals. You might say, well, what's that about: how can you deal with zero hunger, for example, using AI? What UN Global Pulse do is publish examples of projects they're working on to achieve that; for example, on zero hunger, crowdsourcing the tracking of food prices online to help monitor food security in near real time. So they're very, very specific things, but they show the power of data in some sense. And of course you'll hear of lots of prizes; at our conferences IBM has an XPRIZE, which I'm a judge on, and you hear this expression "AI for good", which is hard to object to. It's like quality: nobody can object to quality, but nobody really knows what it is. So good for whom is the question.

I suppose one of the big challenges is bias in AI. This is another issue: we've made massive technological advances, but there are serious issues, and one of the serious issues is bias. People normally think, well, if I think really carefully I can sort of eliminate my biases, and of course you can't. There are lots of psychological experiments that ask you to complete a form where you tick all the boxes saying how unbiased you are, followed by a subsequent test of that bias, and they demonstrate that we're massively biased. There are some fantastic books written on this sort of stuff: there's Cathy O'Neil's Weapons of Math Destruction, there are other things like Artificial Unintelligence; there are many, many books now highlighting examples where bias in algorithms is causing all sorts of problems. And bias takes many forms; here are some examples of algorithmic bias that you hear about all the time.
So this is a very scientific experiment I ran in my office: I typed "cute babies" into Google, and these are indeed very cute babies, and of course if the slide were maybe three times longer you would see many more babies exactly like this. But of course they're all pasty white and pink, and that's because Google thinks that I will find babies that are pasty pink cuter than babies who are not pasty pink, which of course is not true.

Apart from algorithmic bias, you'll find things like interaction bias: AI systems that learn by observing humans, where the data sets are usually gathered online, and of course how we behave online is very different to how we behave in the real world. Tay was basically a chatbot that learned by observing how people interacted, and you can imagine that if your data set on how human beings interact is how people communicate with Donald Trump on Twitter, it isn't representative of how people interact in daily life. So it's a very biased data set. You often hear of personalisation; AI is often used for personalisation, and this is a bias, because obviously we get to see more of the things that we've seen before. And when you talk to the people building lots of recommender systems, they are oblivious to this: that they are creating technology that creates bias.

There are other things that are very, very difficult. For most of these you could imagine technological solutions; you can imagine just being careful about them. But there are things you can't be careful about, and one is language. If you look at how languages have evolved, particularly languages that have gender-based pronouns, you find there's a lot of built-in bias that we simply can't get rid of, because it's baked into the language. So if you send an AI system off to read the contents of the national library, it's going to be biased, and we can't get rid of that.

The other thing that AI systems can't do is explain. That isn't generally true, but it's true of most of the cutting-edge machine learning systems. This is a funny one; I love New Yorker cartoons, it's worth buying a New Yorker just for the cartoons, actually. I made this funny remark about my Labrador, but I mean it seriously: deep learning systems cannot explain to you in any way what it is that they understand about a problem, and in that sense they are no different from my Labrador. My Labrador can't tell you why she thinks one thing is a carrot and another thing is meat. And when you think of deploying AI systems, for example, in diagnosing cancer, you can imagine an interaction where a patient comes in, sits down, and the doctor says, well, you have stage 4 cancer of whatever kind, and the patient is likely to say, well, why do you think that? AI systems can't explain.

In fact, not only that, but at this health conference yesterday a number of people were talking about using deep learning for various tasks, and I was making the point that you can actually force a machine learning system to misclassify examples in all sorts of curious ways. There's an area called adversarial examples, where people develop examples specifically to fool deep learning systems, in ways that you would not expect. These are just images from a paper on arXiv, and they're all examples of images that have been modified in a way that fools a learning system into thinking it's seeing something else. So, for example, these are not recognised as stop signs; they are interpreted as 45-mile-an-hour speed limit signs. This is obviously not an African grey, that's not an Indian elephant, and that's not an elephant either. But there are ways of modifying examples that ensure an AI system will misclassify them, and the reason we're interested in this is because it allows us to test the robustness of AI systems.
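To make the adversarial-example idea concrete, here is a minimal sketch in the spirit of the fast gradient sign method. It is a hypothetical toy, not the method from the paper whose images are on the slide: the "model" is a hand-made logistic regression with random weights rather than a real traffic-sign network, and the epsilon value is arbitrary.

```python
# A minimal, FGSM-style adversarial perturbation against a toy model.
# Hypothetical illustration only: the "model" is a hand-made logistic
# regression with random weights, not a real traffic-sign classifier.
import numpy as np

rng = np.random.default_rng(0)

x = rng.random(100)            # toy "image": 100 pixel intensities in [0, 1)
w = rng.normal(size=100)       # fixed random weights of the toy classifier
b = 0.0

def predict_prob(pixels):
    """Probability the input belongs to class 1 (say, 'stop sign')."""
    return 1.0 / (1.0 + np.exp(-(w @ pixels + b)))

# For a real network you would backpropagate the loss to the input pixels;
# for this linear toy the gradient of the score is simply the weight vector.
grad = w

# Fast-gradient-sign step: move every pixel a tiny amount (epsilon) in the
# direction that lowers the class-1 score, then clip back to the valid range.
epsilon = 0.05
x_adv = np.clip(x - epsilon * np.sign(grad), 0.0, 1.0)

print("original probability :", predict_prob(x))
print("perturbed probability:", predict_prob(x_adv))
print("largest pixel change :", np.abs(x_adv - x).max())  # at most epsilon
```

The point of the sketch is only that a change bounded by a tiny epsilon per pixel, invisible to a person, can still move the model's output a long way.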
So there are all sorts of ethical issues, including issues around job loss. I don't believe there's going to be massive job loss as a consequence of AI, but jobs will change, and technology has always done that. If you were a stagecoach driver in the early 1900s, your job was on the line; your job was going to disappear. But as for AI having a catastrophic impact on jobs, I don't believe it's going to happen.

Moving along quickly, what we're concerned about now is the development of killer robots, as you hear them called: lethal autonomous weapons systems. By and large the consensus in the artificial intelligence community is that we should not be developing AI technology that's going to be deployed in weapons systems, and one of the reasons is that nobody wants to be killed by an AI. Of course nobody wants to be killed, full stop, but the real issue is that we don't want to give responsibility and authority to something that is not human for making decisions about whether somebody should be killed or not. In fact there is an ongoing academic boycott: there's a university in South Korea, for example, which has introduced a lab focused on lethal autonomous weapons systems, and there is an international academic boycott of it. I don't believe in academic boycotts, because the last thing you want to do is stop talking to each other, but I'm just saying that it exists.

If you look at the ethics of AI, two weeks ago the AI4People initiative published a policy document on the ethics of artificial intelligence, and basically it's around these five issues: we don't want AI enabling human wrongdoing; we don't want to devalue human skills; we don't want to erode human self-determination; we don't want to reduce human control; and we don't want to remove human responsibility. When you start thinking about these, you might ask what they mean, and some of them are actually quite far-reaching. For example, if you think of the Cambridge Analytica scandal, people's voting preferences were manipulated through social media, and there's an ethical issue there, an ethical issue around trying to sell you what you want.

The removal of human responsibility is an interesting one, because there are countries that have now started developing policies around giving legal status to artificial intelligence systems. Curiously, for example, Saudi Arabia has given citizenship to Sophia the robot, which I think is hilarious, because first of all this robot has a female name, and it's probably the only female in Saudi Arabia that gets to appear in public without being dressed in a particular way. I think there's actually a sort of misogyny associated with the fact that lots of AI systems are given female names: if these damn things are going to fail, well, let it be a woman who does it. I have no evidence for that, apart from the fact that I don't know very many AI systems that have male names, which is interesting. But coming back to Sophia: the fact that she's got citizenship is a dangerous issue, because the reason some camps believe this is a good thing is so that we have something to sue if AI goes wrong.
But of course the real problem is that if these things have identities, then there's a removal of human responsibility for their actions. Hawking often said things like AI is an existential threat to humanity. I had the pleasure of arguing with him, and who am I to argue with Stephen Hawking, but I didn't think this; I thought it was interesting as a thought experiment but not practical.

When you look at AI, people talk about whether it's going to replace, whether it's going to augment, or automate in the middle, and I think AI technology has always been, by and large, augmenting rather than automating or replacing. The challenge with AI, and the challenge in policy making around AI, is separating what is real from what is not real. For example, you'll often hear people in the media say that the solution to fake news is AI. No, it isn't, because AI has absolutely no hope against fake news. If I point you at a story and ask you whether that story is true or false, the amount of work that you would have to do in order to prove it true or false is vast: the understanding of language, the understanding of context, the understanding of facts, the understanding of spin. AI is nowhere close to that.

So let's talk about policy for a second. One of the things we're concerned about is all of these issues: massive technological advancement and ethical issues that we need to deal with. And the people in Brussels are very concerned that Europe seems to be lagging behind in AI. Now, the curious thing is that, in the process of working with the Commission, I've met lots of government officials, and the most common thing you'll find when you speak to government officials about artificial intelligence is that every government feels that its country is lagging behind in AI. Even the Chinese think they're lagging behind in AI, when the rest of the world thinks they're running away with it. And of course this is the tyranny of metrics to some extent, because how do you measure these things?

Some reports show that China is completely dominating on scientific publication, which is true. About two years ago, the American Association for AI's annual conference had about 50% of its submissions and papers from China; in fact there were sessions that were entirely attended by Chinese people, with every paper presented by Chinese authors, and apart from the fact that people like me and a couple of others were sitting in the room, the whole session was conducted in English. We left, just in the hope that people could maybe speak in their own language and not give such poor talks. But China has completely dominated, and there are a couple of things to say about that. One is that the ethical approach to AI in China is very different to how we deal with AI. For example, one of the red lines that we're proposing to the Commission from the High-Level Expert Group is the notion of citizen scoring, and citizen scoring is rife in China: there is a social credit system in China that's been developed, and your personality, your worthiness, your patriotism are being scored automatically. The question is to what extent we should allow social scoring of any kind to be used against human beings.

Facebook are now using personal scoring to deal with content moderation. So if I complain constantly that stories are inappropriate, and Joyce regularly does so too, then we are being scored on the basis of which of those reports were appealed. If I'm considered to be a crank, whereas Joyce is considered to be reliable, so that when she flags something it's important, then they score us based on that; the next time Joyce reports something they'll take notice, and the next time I report something they'll ignore me. And the question is whether that is appropriate. Of course all sorts of agencies do scoring, but the problem is that they're not transparent.
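As a rough illustration of the kind of reliability-weighted reporting just described, and emphatically not a description of Facebook's actual system, the following hypothetical sketch scores each reporter by how often their past flags were borne out and weights new reports accordingly; all names, numbers and thresholds are invented.

```python
# Hypothetical sketch of reliability-weighted content reporting. This is not
# Facebook's actual system; it only illustrates the idea: reporters whose
# past flags were borne out carry more weight the next time they report.
from dataclasses import dataclass

@dataclass
class Reporter:
    upheld: int = 1    # prior pseudo-count of reports that were borne out
    rejected: int = 1  # prior pseudo-count of reports that were not

    @property
    def reliability(self) -> float:
        # Simple Beta-style estimate of how often this reporter is right.
        return self.upheld / (self.upheld + self.rejected)

    def record_outcome(self, was_upheld: bool) -> None:
        if was_upheld:
            self.upheld += 1
        else:
            self.rejected += 1

def should_escalate(total_reporter_reliability: float, threshold: float = 0.7) -> bool:
    # A flagged item goes to human review once the combined reliability of
    # the people flagging it crosses an (arbitrary, invented) threshold.
    return total_reporter_reliability >= threshold

joyce = Reporter(upheld=9, rejected=1)   # usually right: reliability 0.9
crank = Reporter(upheld=1, rejected=9)   # usually wrong: reliability 0.1

print(should_escalate(joyce.reliability))  # True: one reliable report suffices
print(should_escalate(crank.reliability))  # False: effectively ignored
```

The transparency worry raised above is visible even in this toy: the threshold and the prior counts silently decide whose complaints are ever seen.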
In terms of companies, Europe is lagging behind; we're lagging behind in investment, and we've probably lost the business-to-consumer AI market. I think when you speak to the Commission they probably don't even want to compete on that. They want to compete on business-to-business and on business-to-government, and with very good reason: Europe is a leader in business-to-business, in things like robotics, industrial robotics and supply chain management. If you think of companies like SAP, for example, there's no real equivalent or competitor in the US. So these are the areas they want us to focus on.

Over the last number of years (Ken is also on the slide) we had this workshop in January, just to give you a sense of what's going on. From that we produced a report, a very short report, but it gives you a summary of what was going on in AI in Europe at that point. We invited all European member states, with a representative of government, a representative of industry and a representative of academia from each, and most countries, though not all, came along. So you can read a sort of potted history of what's going on in these various countries, taking a very broad definition of what Europe is; Israel is included there, for example.

If you look at what's been happening since then, in March there was a document published, The Age of Artificial Intelligence, which I'd highly recommend you read. It gives you a sense of what's going on in AI and where AI is, and it articulates, in some sense for the first time, the European priorities for AI. These are things like creating an environment where companies can succeed, and it's very interesting when you look at what happens behind the scenes in the market. KUKA Robotics, which was one of the jewels in the European crown in artificial intelligence, was recently bought by a Chinese firm. When you speak to policymakers in Brussels, they refer to that as the Chinese government buying KUKA, which is probably true, and they paid twice the market cap for it. In fact, when this occurred, there was huge concern about the acquisition, to the extent that the European Commission tried to buy KUKA ahead of the Chinese. So the Chinese were actually in a bidding war for KUKA against the European Commission, and the European Commission were essentially trying to get large European companies to buy KUKA to ensure that it remained a European company, which is kind of daft when you think about it; we live in a globalised world. Unfortunately the Chinese have very deep pockets.

So we want to strengthen the AI talent base. For example, in the last number of months there have been calls from Science Foundation Ireland for centres for research training, and you see these sorts of things arising right across Europe. We have a shortage of talent, as they say, but everybody has a shortage of talent.
The idea that there's a surplus of talent somewhere that we just need to attract to Europe is wrong; there's a fundamental lack of it everywhere.

The big thing that we want to create in Europe is this human-centred approach to AI, and that means that AI should be developed for the human being, for the citizen, and you will see lots of red lines in the recommendations from this High-Level Expert Group that try to counter any attempts to develop AI technologies that are not human-centred. There's lots of pushback on that, because obviously the High-Level Expert Group has academics involved, it has NGOs involved, and it also has corporates involved, so it's an interesting dynamic. I could tell you all sorts of stories about the funny games that happen when you're working on ethics: for example, big corporations, which will be nameless unless I'm asked who they are, in which case I'll tell you, saying come and tell us what you're doing in the High-Level Expert Group and we'll show you everything we're doing on ethics and artificial intelligence, but by the way, here's the NDA that you need to sign before you come. It's the classic shoot-them-before-they-arrive, because of course if you sign that thing you can't actually participate in the High-Level Expert Group at all. You'd be surprised how many corporations actually try that silly trick.

So the race is on in AI. What's happening? If you look at strategies, you'll find that the East is way ahead. Korea, Singapore and Japan have very well-defined principles; Japan has a Society 5.0 strategy which is about three pages long, and you pick it up and say, well, is that all I'm getting as a strategy, but it's actually a very, very well-thought-out strategy. The Chinese government are putting enormous amounts of money into AI, I mean absolutely enormous amounts of money; there is no bottom to the amount of money that the Chinese are prepared to put into AI. And interestingly, Chinese AI companies are all over Europe; I won't mention any company names, but there are Chinese companies that have essentially, by stealth, created a massive presence in Europe, which is interesting. China's development plan is interesting when you look at it, because it basically says that by 2030 they want to be the world leader in artificial intelligence, and they're on time; they're doing incredible things.

In the West, Canada is, I suppose, the centre of excellence in AI. On the High-Level Expert Group we have Jean-François Gagné, who's the CEO of Element AI and also an advisor to the Canadian government on AI, and some of the things the Canadian government are doing sound completely mad. There's actually a KPI in government departments in Canada which requires that a certain percentage of their funding, I think something of the order of 10%, is put into projects that are so high-risk that the money is associated with failure. So a KPI in the Canadian government is that you invest in things that are so bloody high-risk that they do actually fail, and if they haven't failed, then you haven't tried hard enough. Which is interesting, because in the Commission in Europe, if a European project fails, it's actually considered fraud; there's a fraud department that kicks into play, because either the reviewers got it wrong or the people who promised the science got it wrong. So that's interesting. The US is basically adopting the strategy of build it and we will fix it later.
In Europe, I suppose the only countries that have really formally adopted strategies are Finland and the UK. The chair of the High-Level Expert Group was also the chair of the Finnish AI task force. And the French government have established Mission Villani: if you're familiar with French mathematics, Cédric Villani is considered the Lady Gaga of French science. He wears a purple shirt, a purple jacket, he wears a spider brooch, he's got the big blocky thing; he's quite a character, but brilliant. In terms of France, there are lots of examples I can share with you of particular strategies, and they basically all look at the same kinds of things: what's happening with data, ownership, privacy; skills, which is a big thing, it's ubiquitous; innovation; and infrastructure, so what kind of infrastructure we should have. These are all the big issues in Europe. In fact, there's a document that I'd also really recommend you read, called the Communication on AI, which came out in April. It's really worth reading, because it is basically the structure for the European strategy on AI.

And out of all of that came the High-Level Expert Group, which, as I said at the beginning, does these three things. It's chaired by Pekka Ala-Pietilä, who is the former president of Nokia Mobile Phones and was the chair of the Finnish AI strategy. Nozha Boujemaa, a director of research at Inria, is responsible for the ethics dimension, and I'm responsible for the policy and investment side. And it's interesting: I would have expected there to be greater heat around policy and investment, because that's where the money is, but in fact there isn't. The real challenge is actually in defining the ethical guidelines, and part of the reason is that it's very philosophical, so trying to agree the ideas is very difficult.

So there's an AI Alliance, which I would encourage you to join. It's a place where you can have your say; it's a place where you can read all about European strategies on AI, international strategies on AI, what the Canadian government are doing on AI: that's the place to go. Canada is regarded as the hub of AI today, in no small part because of Geoff Hinton, the great-great-grandson of George Boole. This is the place to go, and if you can't be bothered googling it, you can just take down this URL and get involved in the discussions, because it's very, very important.

There are two big initiatives to mention just before I finish. As well as all this stuff, which you could regard as top-down, there's an initiative called CLAIRE, which is a bottom-up research strategy for AI, and what they want to do is develop essentially a CERN for AI, which personally I think is a very bad idea, because we don't need a CERN-type infrastructure for artificial intelligence. And of course there's also the other question, what I call the UN question. There's a very famous Irish diplomat that I think very highly of, Tim Ma, and any time I've ever come up with a good idea and spoken to him about it, he has asked the following question: yes, but who's going to own it? Which is a very smart question. And of course the problem with CLAIRE is that if they build a CERN for AI, the big question is, well, who's going to own it? Is it going to be France, is it going to be Germany? It's certainly not going to exist in Ireland. So that's what's interesting about it. There's also an initiative called ELLIS, which is another bottom-up approach, and it focuses on research excellence in artificial intelligence. And of course there's nothing to argue about
in terms of academic excellence: we all want academic excellence. But again, when you talk to people at the Commission, they think of these things as the investment that's going to be made in basic research, in research at all, in artificial intelligence, and that is going to be a very tiny proportion of the wedge. The large proportion of the wedge is going to come from governments and from industry, and so while these initiatives are important, they're fighting over a proportion of the funding that isn't actually the largest portion by any manner of means.

One of the things I've learned from doing the High-Level Expert Group is that politics is far more difficult than science. I'm a trained mathematician, and it's interesting doing political science and policy; you have to learn very, very quickly, because there are also funny tricks that people pull on you that I did not expect, and I'm happy to talk about that too. But it's fun, actually; I think I enjoy this more than I enjoy sitting in the lab now, which is kind of fun. So join the AI Alliance and participate in the debate, and I'm happy to take any questions you might have. Thanks.