My presentation today is about conformity assessment and its relation to artificial intelligence. I will take a legal perspective, of course, because I am a lawyer and have worked in this field for quite some time now, and I am going to look at the proposal for an AI regulation. I should say upfront that this talk connects closely to the two previous talks by my colleagues: Professor Ronald Leenes, who gave an introduction to the proposal for the Act, and associate professor Esther Keymolen, who introduced the concepts of trust and trustworthiness in AI.

But first things first: what is conformity assessment in the first place? Conformity assessment is, to put it simply, a demonstration that specified requirements are fulfilled or not. So we need an object to be assessed; this could be a product, a process, a system, a service, an organization, or a combination of those. We also need pre-specified requirements. Those can be found in technical standards, think of ISO, or the EN standards, the European standards from CEN, CENELEC, or ETSI, the European standardization organizations. The requirements can also be found in legislation, and this is what I am going to refer to in this presentation. I should also mention that things can become a bit more complicated, because there are technical standards that are referenced in legislation and carry all kinds of legal effects, but we will not go into that much detail today.

When it comes to conformity assessment, it is very important to remember that there are three types. First, there is first-party conformity assessment, where, let's say, a manufacturer of a product assesses it themselves; I am going to refer to products from now on because it is simpler.
In first-party conformity assessment, the manufacturer of a product assesses, before the product is placed on the market, whether it will be safe or will fulfil some other requirements prescribed in the law or in a standard. Second-party conformity assessment is when a business partner or an end user does the assessment: think of the police wanting to use a facial recognition system and wanting to be sure that this system will not be discriminatory, or that it will fulfil certain requirements. Then we have third-party conformity assessment. This is carried out by third-party organizations that have been found to be experts in the field, to have integrity, and to be independent from the manufacturer of the product. First-party and third-party conformity assessment are the ones most relevant for the proposed regulation I am referring to today, but that is a spoiler.

What does this have to do with the public interest and public goals? Well, the public regulators, the lawmakers in the European Union and elsewhere (though I focus on the EU today), have actually been drawing on and relying quite heavily on this private-law activity, because conformity assessment is essentially done by private organizations, private bodies, and lawmakers have been relying on the expertise of these private experts for quite some time now. I am referring here to the New Approach, which actually dates back to the 1980s and is not so new anymore, and the relevant New Legislative Framework, as it is called. So what has been happening for quite some time now is that EU lawmakers have been relying on private regulators to draft the nitty-gritty details of the requirements: what is, for example, a safe toy, or a product that fulfils certain health requirements, or that is friendly to the environment.
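The three assessment types just described can be captured in a small sketch. All names here are illustrative labels of my own, not terms taken from the proposal or any standard:

```python
from enum import Enum

class AssessmentType(Enum):
    """Illustrative labels for the three conformity assessment types
    described in the talk (my own naming, not from any legal text)."""
    FIRST_PARTY = "manufacturer assesses its own product"
    SECOND_PARTY = "business partner or end user assesses the product"
    THIRD_PARTY = "independent expert body assesses the product"

# As noted above, the AI regulation proposal relies mainly on
# first-party (self-) and third-party conformity assessment:
RELEVANT_TO_PROPOSAL = {AssessmentType.FIRST_PARTY, AssessmentType.THIRD_PARTY}
```

The enum is just a mnemonic for who performs the assessment in each case; the key distinction carried forward in the talk is self-assessment versus assessment by an independent body.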
What this entails in practice is that the law provides only the essential requirements, as we call them: the law says that a product needs to be safe before being placed on the market, and then technical standardization organizations and conformity assessment bodies work together with the Commission and other public regulators to develop the standards that specify what it means for a product, a toy for example, to be safe for a child of zero to two years old.

So what do toy safety and all these issues have to do with artificial intelligence? As my colleagues also mentioned in their previous talks, in April this year, a couple of months ago, the Commission published this proposal for an AI regulation. Thinking about it some years ago, I would not have expected the Commission to propose a regulation for AI, but it did happen, and one big part of this regulation is about conformity assessment; conformity assessment has a very central role in it. The proposal contains ex ante rules, applying before an AI system is placed on the market, but also what we call ex post enforcement. This essentially means that there are measures in place to monitor that an AI system that has been placed on the market continues to fulfil certain legal requirements. Those requirements are mandatory requirements prescribed in the text of the proposal, and they concern a range of things, such as transparency, but also avoiding discriminatory effects from the use of the AI system, and other relevant matters. So, let's say, a manufacturer or a designer of an AI system needs to fulfil these legal requirements, but they also need to make sure that this is verified in a process that is subject to conformity assessment, which is what I referred to before.
The AI regulation proposal adopts a risk-based approach, and this is quite an important notion that is also connected to conformity assessment. The proposal distinguishes three types of risk. First, what it calls unacceptable risk, which is connected to prohibited uses of AI systems; this has nothing to do with conformity assessment. Then there are AI systems presenting high risk for the safety and the fundamental rights of individuals; these are very much connected to this talk, because conformity assessment concerns exactly these types of systems. Finally, low-risk AI systems are mostly connected to voluntary measures, like codes of conduct. What counts as high risk or unacceptable risk is determined on the basis of several factors, such as the sector, or the impact on the rights and safety of individuals.

Further, there are two categories of high-risk AI systems. First, the proposed regulation distinguishes AI systems that are safety components of products. These systems have to be subject to third-party conformity assessment; as I said before, they have to be assessed by a private organization that is independent from the manufacturer, the producer of the AI system. Then we also have what are called stand-alone AI systems, and this is where it becomes trickier, because these are systems that have implications for the fundamental rights of individuals. Think, for example, of what I mentioned before: a live facial recognition system used for law enforcement purposes by the police, which can turn out to be discriminatory if the proper requirements are not met. So these are systems that pose a high risk not to safety, which is the first category, but to the fundamental rights of individuals.
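The mapping from risk tier to assessment route described above can be sketched as a small decision function. This is a simplified illustration under my own naming; the tier labels and route strings are mine, not wording from the proposal:

```python
from enum import Enum, auto

class RiskTier(Enum):
    """The three risk types distinguished in the talk (illustrative names)."""
    UNACCEPTABLE = auto()  # prohibited uses; no conformity assessment at all
    HIGH = auto()          # subject to conformity assessment
    LOW = auto()           # voluntary measures, e.g. codes of conduct

def assessment_route(tier: RiskTier, is_safety_component: bool) -> str:
    """Return the conformity-assessment route sketched in the talk.

    `is_safety_component` separates the two high-risk categories:
    AI systems that are safety components of products (third-party
    assessment) versus stand-alone systems affecting fundamental
    rights (mostly self-assessment). Simplified illustration only.
    """
    if tier is RiskTier.UNACCEPTABLE:
        return "prohibited: may not be placed on the market"
    if tier is RiskTier.HIGH:
        return ("third-party conformity assessment" if is_safety_component
                else "self-assessment (internal control)")
    return "voluntary measures (codes of conduct)"
```

For example, a live facial recognition system used by the police would be a stand-alone high-risk system, so `assessment_route(RiskTier.HIGH, False)` routes it to self-assessment, which is exactly the tension the talk turns to next.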
What the proposed regulation does is mostly connect these kinds of systems to self-assessment, so that the manufacturers themselves assess whether the product fulfils the legal requirements and is okay to be placed on the market.

This is how the process looks, in a simplified manner. If you want a more complicated graph, Ronald Leenes has posted on his Twitter account his marvelous, very complicated graph; I thought not to scare you today, but if you want more information and detail you can always refer to that graph. So this is what happens in practice, if the regulation is adopted as proposed: the provider of the high-risk AI system goes through either third-party conformity assessment or self-assessment; an assessor, an internal auditor in the case of self-assessment, or an external person otherwise, assesses whether the legal requirements are fulfilled; and then the system either receives a CE marking, which means it is okay to be placed on the market, or it needs to go through some more fine-tuning.
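The simplified assess-then-mark loop just described can be written out as a short sketch. Everything here, the function names, the "transparency score", and the pass threshold, is a made-up illustration of the flow, not anything from the proposal:

```python
def place_on_market(system, assess, fine_tune, max_rounds=3):
    """Sketch of the simplified process above: the provider's high-risk AI
    system is assessed (by an internal auditor in self-assessment, or an
    external assessor in third-party assessment); if the legal requirements
    are fulfilled it receives the CE marking, otherwise it goes through
    fine-tuning and is reassessed. Illustrative names only."""
    for _ in range(max_rounds):
        if assess(system):
            return "CE marking: may be placed on the market"
        system = fine_tune(system)
    return "requirements still not fulfilled"

# Toy usage: a "system" passes once its (made-up) transparency score
# reaches 0.8; each fine-tuning round improves it a little.
result = place_on_market(
    {"transparency": 0.5},
    assess=lambda s: s["transparency"] >= 0.8,
    fine_tune=lambda s: {"transparency": s["transparency"] + 0.2},
)
```

The point of the loop is simply that conformity assessment is a gate, not a one-shot event: a system that fails goes back for rework and is assessed again before it can carry the CE marking.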
Using this general system of technical standards and conformity assessment can help organizations comply with legal requirements. It provides tools and processes, risk assessment methodologies, and even common vocabularies for experts and engineers to communicate with each other while actually meaning the same thing. Conformity assessment also offers an assurance to organizations, in the case of third-party assessment, that some external party has actually confirmed that they fulfil the requirements. It also, of course, offers some information to consumers about the qualities of the AI system, without this on its own being enough for consumers to say they are completely informed, but that is for another discussion. And, as I also discussed in my PhD dissertation, which I defended a couple of weeks ago, these instruments provide meta-rules: they make sure that the rules that apply to AI systems in these very fast-developing fields stay up to date with the latest state of the art.

But of course there are also important legitimacy issues. It is no secret that these are private organizations, and they do not follow what we know as democratically legitimate procedures: not all members have an equal say, and it is not always open and transparent who sits around the table to make the standards. There are also issues of judicial accountability: can a certification body or a standardization body be held liable for defective products that are based on their standards? And independence and conflicts of interest are also a tricky area when we are talking about private regulation, because at the end of the day conformity assessment bodies are paid by the manufacturer. Because I see I am running out of time, I will pose these open questions, and I am actually looking forward to getting your input for brainstorming
on answers to these questions. One open question is how suitable this whole system of product conformity assessment is: how can we see an AI system that poses high risks to fundamental rights as a product, and what are we missing that is not in the picture? How reliable can self-assessment be for these high-risk AI systems? And then there are issues of assessment methodology: part of it is that the assessment will take place on the basis of the intended use of the product, not its actual use, so how can we be sure that, despite any measures for post-market surveillance, those rules will actually be implemented and followed? So this is it from my side; thank you very much for your attention.