Good afternoon and a very warm welcome to the IIEA's webinar on the Artificial Intelligence Act: a balanced approach. My name is Joyce O'Connor and I chair the digital group here at the Institute of International and European Affairs. It's my great pleasure to welcome our distinguished speaker today, Yordanka Ivanova, a legal and policy officer in the European Commission, DG Connect. You are very welcome, Yordanka, and thank you for taking the time with us today. We appreciate it very much. As a member of the legal team who drafted the Commission's proposal for the regulation of AI, you are in an excellent position to discuss the Artificial Intelligence Act with us today, and we look forward to your presentation. Yordanka will speak to us for about 20 minutes, and I look forward to receiving questions for Yordanka from you, our audience, through the Q&A function at the bottom of your screen. Today's webinar will conclude at 1.50. A reminder also that today's presentation and Q&A are on the record, and please join our discussion on Twitter using the handle @IIEA. The European Commission's draft Artificial Intelligence Act is the first comprehensive attempt at a global level to regulate specific issues of AI systems. It was about a year ago, in April 2021, that the Commission unveiled the proposed AI Act. Following an extensive consultation process with all stakeholders, it is now going through the EU legislative process. The proposed AI Act takes a risk-based approach, classifying AI systems with different requirements and obligations according to their intended purpose and level of risk. Of course, a number of questions arise. Is this framework based on the right principles and approach to foster innovation and trustworthiness? What impact can we expect on AI producers and AI users? The EU Commission believes that the proposed AI Act should become the global standard if it is to be fully effective. 
Will the AI Act boost uptake of AI and guarantee a human-centric approach? Yordanka will outline the thinking underpinning the AI Act proposal, and she will discuss the latest developments in relation to the proposed Act. As I've said, Yordanka is a legal and policy officer in the European Commission's DG Connect, which is responsible for AI policy development and coordination, and she is a member of the team who drafted the Commission's proposal for the regulation on AI. Before joining the Commission, Yordanka worked as a researcher and attorney at law, advising companies on EU regulations in a wide range of areas, including data protection, consumer rights, digital services, cybersecurity and copyright. Over to you, Yordanka, and we really look forward to your presentation today. And thank you very much for being with us. Thank you so much, Joyce, for having me here with you and letting me present the European Commission's approach to artificial intelligence and how we try to promote trust in this technology. I'm just trying to share my presentation. Apologies for the hiccup. I hope you can see it well now. So, it's great to be here with you and to engage in this discussion. As a starting point, I would say that indeed we have the Commission's proposal package from last year, but it has been a long journey to reach this point. Artificial intelligence is one of the key priorities for the European Commission, on which we have worked over the past almost four years, and all this started already in 2018 when we had the first strategy. We have also had, as mentioned by Joyce, a very extensive process and involvement of key experts, including from Ireland, from research and academia, in our High-Level Expert Group, which helped develop the guidelines for ethical and trustworthy artificial intelligence, and all this also led us to the consultation process, which started with the White Paper on Artificial Intelligence. 
This is an exceptional step in the EU legislative and policymaking process, because what we really wanted to do with this initiative was to engage broadly with all stakeholders and think together about how we can achieve our twin objectives: having the right ecosystem of excellence in Europe, which can help us develop this key technology for our digital sovereignty and bring a lot of benefits for the public, for consumers and for businesses, while also addressing the risks, ensuring the right level of trust among consumers, and guaranteeing that its use is aligned with our values. With the White Paper, we received extensive feedback from all kinds of stakeholders, more than 1,200 papers, and had a lot of discussions with the broader AI Alliance of experts we organised. This helped us put forward the two key deliverables last year: the Coordinated Plan on AI, which is the review of the set of actions we had already agreed with Member States on how we can develop AI in Europe and make Europe a real world-class hub for the development of AI, and also the first proposal for a regulation on artificial intelligence. As mentioned by Joyce, this is the first comprehensive attempt to regulate artificial intelligence, although we see that at the international level there is an ongoing discussion and cooperation on setting common frameworks and standards, so we are also working very much together with our partners while at the same time agreeing those actions internally in Europe. I will briefly present the Coordinated Plan, because it is very important. 
The first deliverable actually sets out the joint commitment between the Commission and all Member States to develop together a very concrete set of actions on how we can develop artificial intelligence and make it accessible to companies and users all around Europe. Here we have a broad set of objectives: what we need is to set the right conditions, including greater access to and sharing of good-quality data, fostering computing capacities, collaboration with broad stakeholders, increasing our research capacities, and creating the right testing and experimentation infrastructure with specific facilities we fund. There are also the Digital Innovation Hubs, which you are probably familiar with in every Member State; with the new Digital Europe Programme, one of those hubs will even focus specifically on artificial intelligence. With this we aim to provide broad services to all companies and public authorities who are interested in testing and investing in artificial intelligence. We also have a lot of actions in the area of skills, because human resources are key: we need to keep our talents in Europe and upgrade their skills, to cooperate internationally with partners, and to focus on some specific high-impact areas, as you see here, where we see the highest added value, including mobility, agriculture, climate, health, the public sector and others. So this is really how we try to encourage and invest in artificial intelligence in Europe. But as I said before, we have also been mindful that certain uses of these artificial intelligence technologies affect society; because of the specific characteristics of AI, like bias, opacity, dependency and unpredictability, they can also cause specific risks. Our objective is to ensure that this technology is trustworthy, and that it can deliver the benefits it promises. 
So that's why, together with the whole set of actions to promote AI in Europe, we also put forward the first regulation, where we try to set a single European law on how artificial intelligence can be developed and promoted on the European single market. We know that we already have a lot of existing legislation in place, but as I said before, because of the specific characteristics of AI, the legal uncertainty, and the difficulty of effectively enforcing those existing rules, we have seen the need to complement this legislation and create a single framework for the internal market, with common rules on how companies can place those systems on the market and use them. This is a horizontal piece of legislation; its objective is to apply across all sectors, public and private, because we think this is the best way to create a consistent and clear legal framework that also builds trust in the market across the board. Our two main objectives with this legislation are, indeed, through these specific requirements, to address the risks to safety and fundamental rights, given the specific characteristics of certain AI applications, and also to create common rules and a single market in all European countries, so that companies know how they can develop and market those systems in a legally sound way in the Union. We have really made an attempt to complement, not to overlap with, existing legislation, so this should not be conceived as a lex specialis for data protection or other areas. We are really creating market-based rules for how those systems are produced and marketed in the Union, but we have also made sure that we are complementing, and are consistent with, this existing legislation. And where relevant, we have also tried to integrate specific procedures into other sectoral legislation that already exists. 
So we are building very much on the product safety legislation framework, and for products that are now going to embed artificial intelligence, for example, we are integrating and following the same New Legislative Framework model that we have already successfully applied in the common European market for goods. With those rules, as mentioned by Joyce, we try to be really proportionate and regulate only what is strictly necessary to address those risks, because we think this is also important for innovation: to keep the rules clear but also limited to what is necessary, to give certainty to operators, and to build trust in the market. We have already seen some pushback, and if those technologies are not properly designed and used, this could negatively affect and actually discourage consumers, but also users, from using the technology. To keep our rules proportionate, we are indeed following the risk-based approach, and we are trying to set a common level playing field for actors who are designing and placing those systems on the market, irrespective of whether they are based in Europe or outside: to the extent those systems are offered to users in Europe and produce their effects on people, we apply the same rules, which we think is very important for the level playing field. This slide is very important because it shows the risk-based approach we are following: we don't try to regulate AI in general; we see that different use cases and applications pose different risks. And to really keep the regulatory burden to a minimum, we follow this risk pyramid. On the bottom is actually the largest category, where we think that the majority of existing applications pose minimal or insignificant risks to fundamental rights and safety, and the existing legislation is sufficient to regulate those systems, so we don't need to impose additional rules. 
But of course, if there is an interest, they can also voluntarily follow similar ethical standards. Then in the yellow category you see certain AI applications like chatbots or deepfakes, which carry risks of manipulation. It is important that we have transparency towards users, and that they are informed if they are communicating with a machine or viewing a deepfake. We think this is more and more important in a world where we are engaged in a lot of digital interactions, where chatbots increasingly resemble humans, as do deepfakes, and this is important to build trust in the technology. In the orange category we have the core of our proposal, which covers high-risk AI systems: very specific cases we have identified where those applications could have a very serious impact on fundamental rights or safety if they are not properly designed. That's why we propose specific requirements for those systems, which I'll talk more about on the next slide, and also specific procedures before those systems can be placed on the market, so we are sure they are compliant. At the top of the pyramid, we also have a very limited set of use cases where we think these are really uses of artificial intelligence that are incompatible with our fundamental values, and that's why it is important to clearly say that we don't want them in Europe. Among those are, for example, manipulative and exploitative AI applications, social scoring by public administrations, and remote biometric identification for law enforcement purposes. Here I would like very briefly to touch upon the high-risk categories again because, as Joyce mentioned the impact of the regulation, this is the core impact of the regulation: providers of systems that fall into those categories will have to comply with the new requirements we propose. 
They will then also undergo these conformity assessment procedures. Here we are trying to give operators a lot of legal certainty as to whether they will be in scope or not. That's why we have identified certain AI systems that could be safety components of already regulated products, like medical devices or machinery, which are already subject to third-party conformity assessment, and also some other, so-called stand-alone AI systems that are broadly grouped in eight areas where we see mainly implications for our fundamental rights, because those use cases have an important impact on us as a society and as citizens. For example, here you see the broad categories of biometric identification, critical infrastructure, certain use cases in education, employment, access to and enjoyment of public services, law enforcement, migration, as well as the administration of justice. But it's important to say that it's not the broad categories as such; we have really looked at and identified specific use cases within those broad categories that alone will be subject to the rules, and we think it's important to start with a small set of use cases but be able to build progressively and add more if we see that there are more risks. So let me give you an example here. In education, it's not all AI applications that can be used in education, but only two use cases: those that are used to determine the eligibility of people to access education, and those used to assess them during tests and examinations, because we think these are really the most important ones. The same goes, for example, for employment, where we have only two limited cases. So we have really tried to be more proportionate in our approach, but also to ensure that this is a future-proof regulation and we can add more along the line. 
And here, for those systems, we try to propose some very good baseline requirements that are built on the best practices that already exist for addressing the risks of artificial intelligence, those specific characteristics I mentioned: opacity, unpredictability, bias. We build very much on the requirements of the High-Level Expert Group I mentioned before, and we propose that it is important to have a risk management process where those risks are identified; to have good-quality data, so we ensure the system is not biased and is accurate; to have documentation and logging capabilities, so we ensure traceability; transparency and sufficient information for users; human oversight measures; as well as robustness, accuracy and cybersecurity. Our objective is to support these high-level requirements with harmonised European standards, which will be developed by the European standardisation organisations. So our objective is really to give companies the right technical solutions for how to design their systems so they overcome these problems and really ensure the safety and trustworthiness of the application. Then there are also a number of obligations for the providers, who are mainly responsible for designing, developing and marketing their systems, and who must also check them through those conformity assessment procedures before placing them on the market. They will also have to affix the CE marking, so this can build trust, and it's a sort of certification scheme, and also register part of those systems in a publicly accessible EU-wide database. For users who rely on and buy those systems from the market, we have tried to impose only the limited obligations needed to ensure they exercise human oversight. And this is of course without prejudice to other existing legislation, like the GDPR, that normally applies to them. 
And we also have specific measures to support innovation, with innovative tools like regulatory sandboxes, where we try to encourage regulators and companies to cooperate, work together and experiment in a safe and controlled environment on how those AI systems are built, and also specific support measures for SMEs and startups, because it is really important that the burden on those companies does not dissuade them from developing and investing in those technologies, and that's why they need the special support which the AI regulation and the Coordinated Plan aim to give them. And very briefly, this is the governance system, because it's also important to have good cooperation between the national level, where we envisage mainly the enforcement of the system, with national authorities who will also have to cooperate with other responsible authorities, for example those linked to risks to fundamental rights, and a coordination mechanism at European level, with an Artificial Intelligence Board we are going to create, and also an expert group with broad representation of stakeholders, similar to the High-Level Expert Group we had before, because it's really important, in developing and implementing this legislation, to have the involvement of all relevant parties and stakeholders. I'm going to finish by giving you this timeline, which shows that we are now indeed in the process of negotiations; there are important changes being discussed by the co-legislators, the Parliament and the Council, who are making specific proposals. So it's a very interesting process, with a lot of engagement from stakeholders. After the adoption, which we hope can happen next year, we will also have a transitional period of three years, so companies can prepare. And in the meantime, we also plan to have these harmonised standards and the technical solutions, so we can actually help companies demonstrate compliance. 
So I'll stop here, and I'm very much looking forward to your feedback and an engaging discussion with you on the proposal, but also on how to build together in Europe this ecosystem of excellence and trust.