Welcome everyone. This is an entirely different thing than the previous two presentations, so let's get up to speed.

We're told that facial recognition allows systems to recognize and identify people in images, and AI has certainly made big strides in this domain. But a few steps further down the road, and we can no longer enter stores or access public places anonymously. We're told that Google is very smart at guessing what we're looking for, and indeed it is frighteningly so, but with that it also propagates the most awful stereotypes. We're told that self-driving vehicles will take us everywhere without us having to worry about steering the car, and the AI car will know how to navigate empty streets. But at some point the car will have to decide whether to hit super-Fido or stock-photo guy, who appear out of nowhere, or crash our car into the lamppost. We're told that our children no longer have to go to the battlefields, because autonomous weapon systems, killer drones, can and will do the killing for us. But that may gamify warfare to the point of indifference about human life.

Do we want all of this? Well, I don't think so. So what to do about it? Given that I'm a regulation scholar, or regulatory scholar, I would say: regulate, as in control the whole thing. We have to set boundaries for AI development and use. Now of course I wasn't born yesterday, literally, and I understand that we're told a lot in AI land too. Not all the AI promises are met; not all the positive or negative effects materialize. But still, serious issues are materializing and will keep materializing.

So what to do? That is not entirely straightforward. AI is a disruptive technology on many accounts: relations, work, society, life, everything will change. We're facing an unknown future, positive and negative, and with that comes a need to carefully balance interests, which sometimes means calling in ethics frameworks to the rescue.
Abstract principles, the kind that ethics frameworks typically provide, offer flexibility and room for actions appropriate to the context, and such ethics frameworks for AI have indeed been enacted in the European Union, in China, in the US, in India, everywhere. Now, our problem with these frameworks is that they're hardly enforceable and may lead to ethics washing. I, and more importantly others, think we need to do more. We need to take careful but bolder steps, and that is where the law comes in, in all its variety and subtlety: hard law, soft law, self-regulation, experimental regulation, everything will be put on the table. Law may incorporate sanctions, which makes it harder than ethics, harder in the sense of more enforceable, and sanctions being in place means that people will move in certain directions. That may also be the reason why industry is not too fond of law and regulation.

Now, resorting to law is an approach; it's not the answer, because law itself may become disrupted by AI. In some cases, problems simply aren't problems, at least not from a legal perspective. There are many rules out there that apply to AI just as they do to other things. If that's the case: bring out the whip and enforce. Sometimes the simple legal answers are unsatisfactory, but a simple fix may bring the topic back into known territory. In the past we've had issues with windsurf boards and Segways because they didn't fit the existing categories we had in law. Resolving this was fairly simple: decide where you want Segways to ride, be it on the road or on the pavement, and classify them accordingly. Sometimes the issues are bigger and we need more constructive, or reconstructive, work. We know, for instance, that the classical distinction that holds in many legal domains between agents (humans) and objects (things) is breaking down, because AI is increasingly taking the middle ground.
It's not exactly an object, but it's also not exactly an agent, and this may require significant legal overhaul; but that's for another occasion.

Now, yesterday we actually entered another domain, the domain of what I've called "harder regulation" on this slide, meaning more significant than simple, easy fixes, but less than major legal overhaul. So what happened? Last week a draft of the European Commission proposal for an EU AI regulation was leaked, and really, that hit me hard. I didn't see it coming; my mistake, but that happens. The EU AI regulation may have the same impact as the GDPR had a couple of years ago.

The draft was leaked last week, I had to teach last Monday, and I had to do this talk today, so I scrambled over the weekend to get my head around the leaked draft. On Sunday I came up with this applicability diagram, which covers the first eight or so articles in the leaked draft; this simple diagram helps you determine whether this thing is for you. If it looks complicated to you, that's because it is. Then yesterday at 12 o'clock the official proposal was published, and we're talking about 107 pages of dense text containing 85 provisions, plus 17 pages of very relevant annexes that you need to incorporate when trying to understand it. And yes, to my dismay, the actual proposal is different from the one I studied over the weekend. Significantly different, even. Obviously I haven't had time to go through the entire thing and get my head completely around it, but I did my best, and I'll try to give you a flavor of the whole thing. Remember, I first saw the actual text 29 hours ago.

So what does it try to do? The regulation aims to make sure that AI systems that are developed and deployed treat us humans as humans, and that they fit in with established fundamental rights in the European Union.
Much of it is about preventing us from being subjected to computers that simply say no, and about only leaving things to be handled by AIs if we're damn sure that they're reliable and robust. In that sense, Leica's presentation provided an interesting twist here that we may come back to.

Okay, what does the regulation do? Basically, it provides a risk-based approach to AI regulation, meaning that more serious measures are taken when systems are more dangerous. At the core of the whole regulation is the concept of high-risk AI systems. High-risk systems are defined, and I'll come back to that a little later, but for now note the relatively simple definition of AI at the top right. It talks about AI techniques, and basically it incorporates everything we have out there: the connectionist approach, the traditional symbolic, logic-based approach, but also statistical approaches. The AIs have to take human-defined objectives as a starting point, and they generate output, which could be content, predictions, or decisions. So things we see online, such as deepfakes, which are content, also fall under this definition, and they are clearly addressed in the regulation.

Also noteworthy is that certain systems are completely prohibited; there is a ban on certain AI systems. Let's zoom in a little. Prohibited systems are systems that manipulate or exploit vulnerable people or groups, for instance through deepfakes, but also in a number of other ways. Also prohibited is the type of Chinese social-scoring application where people's behavior online and offline is tracked and where people are assessed as trustworthy or not based on their activities. And prohibited as well is real-time remote biometric identification by law enforcement agencies; real-time remote biometric identification is basically things like facial recognition.
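The risk-based tiering just described can be sketched as a small triage function. This is purely illustrative and is not legal advice: the flag names, domain names, and tier labels below are my own shorthand, not the proposal's legal criteria, which are far more detailed.

```python
from __future__ import annotations
from enum import Enum


class RiskTier(Enum):
    PROHIBITED = "prohibited"   # banned practices (Title II of the proposal)
    HIGH_RISK = "high-risk"     # pre-designated domains (Annex-style list)
    MINIMAL = "minimal"         # everything else


# Illustrative shorthand for the banned practices mentioned in the talk.
PROHIBITED_PRACTICES = {
    "manipulates_or_exploits_vulnerable_groups",
    "general_purpose_social_scoring",
    "realtime_remote_biometric_id_by_law_enforcement",
}

# Condensed, hypothetical labels for the eight high-risk domains on the slide.
HIGH_RISK_DOMAINS = {
    "biometric_identification", "critical_infrastructure",
    "education", "employment", "essential_services",
    "law_enforcement", "migration_asylum_border", "justice",
}


def classify(practices: set, domain=None) -> RiskTier:
    """First-pass triage mirroring the proposal's tiers: bans trump
    everything, then domain-based high-risk designation, else minimal."""
    if practices & PROHIBITED_PRACTICES:
        return RiskTier.PROHIBITED
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH_RISK
    return RiskTier.MINIMAL
```

For example, a social-scoring system is banned outright regardless of domain, while an HR recruitment tool lands in the high-risk tier via the employment domain.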
There are a lot of exceptions to that ban that are deeply troubling to me, but maybe we'll get to those in the Q&A.

Okay, the main course deals with high-risk systems. These are defined in an annex; basically there are eight domains in which they are pre-designated, and they're listed here on the slide (I've really condensed the annexes). They primarily address public-sector matters, but let me pick out one: employment. Clearly within the scope of the regulation are HR-type applications for recruitment, assessing employees, and what have you.

Now, if you have, are deploying, or are building a high-risk AI system, then you must meet the requirements and obligations outlined in the regulation, and these center around two core concepts. One is risk and data-quality management, which helps you, but also requires you, to build systems that do what they actually advertise. The second mechanism is accountability and transparency. An interesting thing here is the central role that conformity assessment plays: systems will have to be assessed against hard standards, to be drawn up by standard-setting bodies, leading to the CE marks that we see on products and services everywhere, and you have to register your system.

[Moderator:] Sorry to interrupt. I know your time is running out; could you kindly wrap up?

Okay, so this is what the new scheme looks like. If you're confused by the whole thing, understand that I am too. But we have some time; we're still at the beginning. The proposal was launched yesterday, and now there will be a lot of opposition from industry, from the European Parliament, and from the member states, which will lead to changes, maybe significant ones. My bet is that it will take until 2025 or 2026 for this to turn into real legislation, if we get to that point at all. Now, if you're interested in keeping up with my adventures in trying to frame the regulation, follow me on Twitter.
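As a closing recap, the provider duties for high-risk systems, as condensed in this talk, can be pictured as a simple checklist. Again a sketch only: the field names are my own labels for the obligations mentioned (risk and data-quality management, accountability and transparency, conformity assessment leading to a CE mark, and registration), not the regulation's legal terms.

```python
from dataclasses import dataclass


@dataclass
class HighRiskChecklist:
    # Duties condensed from the talk; names are illustrative, not legal terms.
    risk_management: bool = False          # risk management system in place
    data_quality_management: bool = False  # data governance and quality
    technical_documentation: bool = False  # accountability paper trail
    transparency_to_users: bool = False    # users know what the system does
    conformity_assessment: bool = False    # assessed against standards -> CE mark
    registration: bool = False             # system registered in the EU database

    def missing(self) -> list:
        """List the obligations not yet met."""
        return [name for name, done in vars(self).items() if not done]

    def ready_for_market(self) -> bool:
        """A high-risk system may only go to market with every duty met."""
        return not self.missing()
```

The point of the two-concept structure is visible here: the first two fields make the system do what it advertises, and the remaining four make that verifiable by others.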