Hello everyone. I would like to introduce David Barnard-Wills, who is going to be doing a presentation on the EU's proposed Artificial Intelligence Act as an approach to, and driver for, ethical learning technology. Just to let you know, some of you may have watched David's presentation previously. We had a few technical difficulties during that presentation, so this is a re-recording. David very kindly came back to re-record the session, and we've included at the end some of the original questions that were asked during that session. So without further ado, I am going to hand over now to David for his presentation. Thank you very much.

No, thank you very much, and thank you for having me as part of the conference, and thank you for the opportunity to re-record after the massive technical failure on my side. Great. So yes, I'm David, and I'd like to talk to you about this piece of regulation that's being proposed by the European Union and some of the implications it might have for ethical educational technologies. A little bit about Trilateral Research first, if I may. We're a technology company with a very strong background in research projects. Over the last decade or more we've been doing a large number of research projects in the European Union, often around the social, political, ethical or legal implications of new technologies. Our ambition these days is to work on developing ethical AI, particularly for the public sector, to help tackle complex social problems. The services we offer are an interdisciplinary blend of technology and social science expertise in our socio-tech insights work. We do research and policy advice, but we also offer a data protection and cybersecurity service. We do outsourced data protection officer work, and some of the clients for that include organisations you will know, such as the University of Cambridge and the Victoria and Albert Museum.
We're doing a research project at the moment with the Greater Manchester Combined Authority and a number of other European agencies, so we pay close attention to the European policy space in particular. The AI regulation that's come out is of very big interest for us at Trilateral, so we'd like to share some of what we're thinking about it with you. A little bit about me: I'm a senior research manager at Trilateral, and what that means is I put together research projects, try to get them funded, and, if they're successful in getting funding, manage some of them. My team works on projects around things like applied security technologies and political language. My academic background is in politics, but for 14 years or so I've been doing the politics of technology, and that includes things like applied privacy by design and ethics by design. I've done research on how regulatory agencies do privacy and data protection governance. I've been a research ethics manager across a number of these projects, so I've handled the informed consent and data protection work that makes sure these large-scale projects are legal and ethical. One of my areas of interest is technology foresight: looking forward and seeing what's coming down the pipe. My structure for this talk today is primarily about what the EU is proposing, so I'll try to give you a summary or overview of this proposed regulation. I'll talk a little bit about what our perspective on it is at Trilateral, then about what some of the implications for the educational technology world might be, and, if I have time, a little about the relationship between this regulation and the broader field of AI ethics. So firstly, what is the EU proposing?
In April this year they proposed a regulation whose full title is something like the Regulation Laying Down Harmonised Rules on Artificial Intelligence; the short version would be the AI Act, the AI Regulation, or some variant on that. What's interesting about this is that it's one of the first proposals worldwide for a legal framework specifically around artificial intelligence. The aim for the European Union is to make the EU a global leader in artificial intelligence, but also in AI regulation, and ideally to ensure the development of safe, secure, trustworthy AI within the European Union, but also to facilitate a single market. The aim is to prevent market fragmentation, with different member states adopting their own artificial intelligence laws; the idea is that you would have a set of common requirements that apply across the EU. It will be a regulation, so what that means is that when it comes into law it applies in all member states, and they don't have to pass individual legislation to bring it into force. The idea is you have a harmonised framework across the Union. Now, this is a draft, so it's open to change. It's going to be discussed and debated by the European Parliament and by the member states, so the actual text that you can download from the European Commission's website now is subject to change before it comes into force. A little bit about timelines: this process is going to take a couple of years, and then it's very likely that it'll be a couple of years after it's passed before it becomes applicable. So we're looking out to 2025, say. OK, so let's talk a little bit about the scope of application. One of the things to note about the Act is that it starts off with a very broad definition of artificial intelligence.
It's focused on artificial intelligence as applied in systems, as applied in technology being used in the world. But there's a variety of things here which could encompass almost anything you might want to call AI, and some things you might just want to call statistics. The idea for the Commission was that this definition should be as technologically neutral and future-proof as possible, because of the vast pace of change in this field. But it brings a lot of things within the ambit of artificial intelligence, and therefore within the scope of the regulation. Where does this apply? It's going to apply in the EU, but it also applies to any AI system that is being brought onto the European Union market, being used in the EU, or whose use affects people in the EU. So it has the same sort of extraterritorial jurisdiction as the GDPR, which said that if you're processing personal data about European citizens then the law applies. It's going to apply to both public and private actors. It's going to have implications for providers, developers and creators of AI systems, for people who import it into the EU, for people who distribute it within the EU, and for people who use it. It doesn't apply to private, non-professional uses, so if you're building a toy AI system for your own purposes then it doesn't apply. It also doesn't explicitly apply to research; its trigger point is bringing things onto the market. It's also not going to apply in the UK. The UK is taking its own path on the regulation of artificial intelligence; it is setting out roadmaps and plans for that which are going to be applicable in a similar timeframe, so it will be interesting to see how they compare. One of the key aspects of the regulation is that it chunks AI systems up into four categories based on their risk to fundamental rights and safety.
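The scope rules described here can be compressed into a few lines of logic. This is a purely illustrative sketch of the applicability tests as described in the talk, not legal advice; the function and parameter names are my own invention.

```python
# Illustrative sketch of the draft AI Act's territorial scope as described
# above. Names are invented for illustration; this is a reading aid,
# not legal advice.
def act_applies(placed_on_eu_market: bool,
                used_in_eu: bool,
                output_affects_people_in_eu: bool,
                private_non_professional_use: bool) -> bool:
    """Rough applicability test for the draft AI Act."""
    if private_non_professional_use:
        return False  # personal, non-professional projects are out of scope
    # GDPR-style extraterritorial reach: any one trigger is enough.
    return placed_on_eu_market or used_in_eu or output_affects_people_in_eu

# A system built outside the EU whose outputs affect people in the EU
# is still in scope:
print(act_applies(False, False, True, False))  # True
```

The notable design point, as with the GDPR, is that the triggers are disjunctive: being outside the EU does not put a provider out of scope if the system's outputs reach people inside it.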
This is based on the idea that, from what we know about certain systems, what we already know about their impacts in certain places, and the things we think are particularly socially important, we can make some guesses about the risks posed by different types of systems. This is something we've been doing in the impact assessment field for a while, but I'll take the categories in reverse order, starting from minimal risk. Minimal-risk systems are those which the authors of the Act envisage as having essentially no risk to rights and to safety. These are things like the automated spam filter on your email account, or the AI moving a character around inside a video game to give it a semblance of intelligence. For these low-risk categories, developers can choose to apply the requirements from the Act, or not, or they could adopt voluntary codes of conduct. The next category up are those that are seen as posing low or limited risks. The AI systems in this category are things like chatbots, or a system that recognises your emotions and changes its behaviour in response to those emotions. The only requirement the Act would place on these is that they are transparent about the fact that there's an AI system here, so you would need to know you're interacting with a machine. If you think of the Google bot that would phone up a hairdresser and book an appointment for you: within the EU, after passage of this regulation, that would have to say something like "I'm an AI assistant trying to book an appointment". The Act doesn't make any specific requirements on how it would do that, only that it is transparent. Now, the next big category of risk, and this is where most of the meat of the regulation sits, I think, are those AI systems that are seen as posing high risks to fundamental rights and safety.
There is a specific list: Annex III presents a list of these topics, and they're kind of where all the interesting things are. They're the sort of critical, fundamental infrastructure of society: things like education, employment, access to justice, immigration and border control, law enforcement, critical infrastructure, democracy, biometric ID. It's a big list, but it's where all the interesting stuff is. These are the areas which are seen as so important to social functioning that we need some proper protections in place. There are also some additional cases: if AI is embedded in a product that already needs third-party assessment, or if you're using AI as part of a safety system, then it's also in here, because it's just dangerous if it goes wrong. For AI systems in this category, you get a big set of mandatory requirements before they can be put on the market or placed into service. And finally, we have a set of AI systems or uses which are seen as posing an unacceptable risk to fundamental rights and safety, and these are therefore prohibited. It's not a very big list, but it includes things like large-scale social scoring by public authorities, the kind of social credit systems where everybody gets a rating. It includes AI subliminally manipulating human behaviour, and it includes real-time remote biometric ID, apart from some edge cases and defined uses by law enforcement. So I think this is an important point to focus on: if you are developing or marketing an AI system that fits in that high-risk area, then there are some requirements that are going to be set on you. Each of these gets quite a lot of detail in the regulation, so you could look at that if these seem important to you.
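The four risk tiers just described can be summarised as a simple lookup table. This is an illustrative paraphrase of the draft Act's categories as presented in the talk, not legal text; the tier names, examples and obligation strings are simplifications of my own.

```python
# Illustrative sketch only: the draft AI Act's four risk tiers, paraphrased.
# Category names and example use cases are simplifications, not legal text.
RISK_TIERS = {
    "unacceptable": {  # prohibited outright
        "examples": ["large-scale social scoring by public authorities",
                     "subliminal manipulation of behaviour",
                     "real-time remote biometric ID (narrow exceptions)"],
        "obligation": "prohibited",
    },
    "high": {  # Annex III areas: education, employment, justice, etc.
        "examples": ["student assessment systems",
                     "CV screening for recruitment"],
        "obligation": "mandatory requirements before market placement",
    },
    "limited": {
        "examples": ["chatbots", "emotion recognition"],
        "obligation": "transparency: disclose that an AI system is in use",
    },
    "minimal": {
        "examples": ["spam filters", "video game NPC AI"],
        "obligation": "none; voluntary codes of conduct",
    },
}

def obligation_for(tier: str) -> str:
    """Look up the headline obligation for a given risk tier."""
    return RISK_TIERS[tier]["obligation"]

print(obligation_for("limited"))
# transparency: disclose that an AI system is in use
```

The structure makes the regulatory design visible: obligations attach to the tier, not to the underlying technique, which is why the same generative model can land in different tiers depending on its use.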
There are things like having adequate risk assessment systems; having good enough quality data feeding and training the system, in order to minimise risks, including the risk of discrimination; logging, to make sure that you know what the system is doing and to allow people to go back and look at what it's done; and very detailed technical documentation requirements, which are supposed to provide all the information someone would need to assess its compliance with this law, and that would include registration of these AI systems on a centralised database that the EU is going to create. You'll need to provide clear and adequate information to the user. You will likely have to go through a conformity assessment procedure, probably with a third party doing some kind of assessment work, and there are other standards around sharing information when things go wrong. There's quite a big set of things here. If, on the other hand, you are a user of a high-risk AI system, if you bought one in from somewhere else, and I think many people in education might end up in this bracket, for example if you're purchasing an AI-based educational technology or learning tool, there are some obligations on you. They are: to follow the instructions provided with the AI system; to ensure that your input data is relevant to the purposes you're going to use the system for; and to have adequate monitoring of the operation of the system, so you need to know if it goes wrong, you need to inform the provider of what went wrong, and so keep logs. And all that information the provider should be giving you, you now need to use to support your GDPR obligation to do a data protection impact assessment. There's some good stuff, though, that comes with this.
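The provider-side requirements just listed could be imagined as a compliance checklist. This is a hypothetical sketch for illustration only; the field names are invented, not taken from the regulation.

```python
# Hypothetical checklist for a high-risk system provider, paraphrasing the
# draft Act's requirements as described above. Field names are invented.
from dataclasses import dataclass, fields

@dataclass
class HighRiskProviderChecklist:
    risk_management_system: bool = False       # adequate risk assessment
    training_data_quality_checked: bool = False  # data vetted against bias
    activity_logging_enabled: bool = False     # traceability of decisions
    technical_documentation_complete: bool = False  # enough to assess compliance
    registered_in_eu_database: bool = False    # entry in the central database
    user_information_provided: bool = False    # clear instructions for users
    conformity_assessment_passed: bool = False  # likely via a third party

def outstanding_items(c: HighRiskProviderChecklist) -> list[str]:
    """Return the names of requirements not yet satisfied."""
    return [f.name for f in fields(c) if not getattr(c, f.name)]

c = HighRiskProviderChecklist(risk_management_system=True)
print(outstanding_items(c))  # lists the six remaining requirements
```

Every item has to be satisfied before the system can be placed on the market, which is why the compliance burden on small providers is a live concern later in the talk.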
I think this is one of the most interesting bits of the regulation for users of AI systems, institutional users rather than individual citizens perhaps. This regulation is going to drive providers to create all this detailed technical documentation, which should give you all the information you need to tell whether the thing is in compliance with the law. That will include instructions on how you're supposed to use the AI system, but also some really useful material, like explicitly what its intended purpose is, what the manufacturers know about its accuracy and its robustness, and what they know about the foreseeable circumstances that might lead to risks to health or safety, discrimination, privacy invasion, things like that. That last one is really interesting, because one of the things we're learning about AI systems as they get deployed in the real world is that they can be quite brittle: they can fail when circumstances change, or when the population they're being used with changes. So this requirement puts in place a need to provide some information on what the expected lifetime of an AI system is, and what kind of maintenance or care measures it's going to need to keep it functional. The regulation also envisages creating new governance arrangements. There will be national supervisory authorities who will have a responsibility for understanding what's going on in the AI system market. These might be existing authorities entrusted with new tasks, or they might be entirely new organisations: you might envisage some countries giving these powers to their information commissioner, and others giving them to a consumer markets authority. It will be interesting to see what different countries do in this space. They will have certain powers, including the ability to do investigations, and they'll be able to
remove offending systems from the marketplace; they'll be able to demand access to documentation and data necessary to do their enforcement activity; and they are being tasked with running what are called regulatory sandboxes. These are experimentation spaces where a developer can bring some data to train a system, with oversight from the regulator, and potentially be allowed to use data they might not otherwise have been able to, because there are sufficient safeguards in place. At the European level this will be coordinated by a new Artificial Intelligence Board, chaired by the European Commission, with representatives from each of these supervisory authorities and the European Data Protection Board. The financial penalties for non-compliance haven't quite been finalised, but they look to be around the same level as GDPR fines, so pretty substantial. OK, so what do we make of this? At Trilateral, we are glad that there's regulatory weight behind secure, trustworthy and ethical AI. The last few years, and various scandals, have shown us that there are potential harms from the unregulated use of AI. I think the world is becoming more cognisant of this, of things like discrimination and bias, and of being unable to challenge the results of something coming out of an algorithmic process. These are becoming more top of mind, and it's good to see some regulation responding. For us, regulation is part of the AI field growing up and becoming more mature. It's one thing to have artificial intelligence used in entertaining toys or apps or games, but as it becomes part of the fundamental structure of society, as it becomes part of educational systems, transport systems and health care, it's just going to need to meet more rigorous standards, and it will inevitably be regulated. What we are glad to see are the various things that the regulation puts
forward: transparency, better documentation, explainability, the quality of data sources, developers knowing what's in their data and how that might affect the accuracy of the system. Those are things we've been recommending to our commercial clients and to our partners in research projects over the past few years, so it's good to see them in place. Now, the red lines on prohibited AI could be stronger, and that will be an interesting place to watch as the regulation goes through the European Parliament. But fundamentally, we don't think this is going to strangle AI innovation: there's such a demand for the use of AI systems that that's going to be a strong, persistent driver. What this will do is put some controls around it, some quality controls really. Meeting those requirements might be a challenge for small and medium enterprises; that is something we acknowledge. There are a lot of things there, and it's going to take a lot of effort to put them in place. What will be fundamental to how this pans out is what the capacities and abilities of the regulators are. This has been something we've noticed with the GDPR, where some regulators in some countries are very well resourced, have good technological knowledge and have a willingness to do enforcement activities, while in other countries they are understaffed and under-resourced, and that has had an impact on GDPR enforcement. You could equally see something like that emerging here. It will also be interesting to see the balance those regulators adopt between enforcement, going after people who break this regulation, versus support and guidance, trying to encourage people to comply well, telling people how to set up a post-market monitoring system, for example. And the shape of those regulatory sandboxes will also be quite interesting, I think. So, if I can talk about
the implications for educational technology now: I think you can roughly break them down into categories, depending on whether someone is a provider, a user or a researcher, and whether they're inside or outside the EU. If you're using or producing educational technology with no AI component, then this doesn't have any impact on you, but I remind you of that very broad definition of AI that I showed you early on in the presentation: lots of things will come under it. If you're an educational technology provider that does use AI in the EU, you'll need to be in full compliance with this regulation. If you're trading into the EU, or giving systems away, you know, if you're doing open source, or providing it for free or as a social benefit, then you also need to be in compliance with the regulation; it doesn't matter whether you're getting paid or not. If you are an ed-tech user in the EU, you'll have those Article 29 obligations that we talked about earlier, and access to all that potential information. If you're outside the EU, legally, formally, there's no effect, but as with the GDPR we might start to see the so-called Brussels effect, where other people start to use this regulation as a model. Large companies might just start complying with the EU standard because it's an important market, and then that standard might become de facto influential around the rest of the world. You might start to see more "made in the EU" AI as a sort of promotional tool, and you will be able to get more information about AI products because of the demands this regulation creates for additional information. If you're an ed-tech AI researcher, and you're either in the EU or interested in participating in the EU's funding programmes: the regulation doesn't explicitly apply to research; it's concerned with deployment and with
putting things into use. But Horizon Europe does have cross-cutting requirements for trustworthy AI, which largely map onto this regulation, so it's similar, and if you're doing innovation work that is closer to market or closer to exploitation, then you will need to be ready to comply with these things towards the end of those projects, perhaps. If you're an ed-tech AI researcher outside the EU, I'm not sure what the implications will be, but this will be an area which might preoccupy some of your European peers. I think it's worth stepping back a bit to look at the relationship between this Act and ethical AI more broadly. Law isn't ethics, and ethics isn't law; they can't be substituted for each other, they're doing different things. The EU has been active in the area of AI ethics: there are the High-Level Expert Group guidelines on trustworthy AI, and an assessment list for trying to understand whether the AI system you're developing is ethical or not. And, interestingly for this space, there is an expert group on ethical AI in education, teaching and learning which should be putting out some guidelines in September next year; those might be quite useful. I know that the framework for ethical learning technologies is more focused on practitioners, on the ethics of being a learning technologist, but I think there are elements of this regulation which will support some parts of that practice: the requirements for accountable practice and compliance with relevant laws will obviously help you if you're trying to be accountable and explain your decision-making, and the whole purpose of this regulation is to minimise risks from AI. It doesn't, of course, capture the whole world of AI ethics; it's very focused. You could easily think of this not as an AI ethics regulation but
as a trustworthy AI or even an AI safety regulation. It's very much about safety, risk assessment, reliability, and having transparency mechanisms in place that allow people to do governance, be accountable, and trace where things go wrong. It doesn't touch on the broader world of AI ethics: when is it appropriate to use an AI system at all? What are the implications of a massive industry built around extracting data from people and using it to train AI? It doesn't have anything to say about the cultural and social impacts of artificial intelligence, nor really about the professional values of AI practitioners, nor about the composition of the AI industry, who's able to be part of it, and how that might influence how the industry develops. So it's limited, but perhaps we wouldn't want to regulate those things. OK, that is about everything I want to say about the AI regulation. If you would like to stay informed about our work on this, we're very interested in it and we're going to keep tracking it as it develops; you can find out more about us and our work at trilateralresearch.com. If you'd like to ask me a question in particular, you can find me by email or on Twitter. We have a newsletter you can sign up to on our website, and as we learn more about the AI regulation I'm sure we'll be producing content on this topic for it.

Thank you so much, David, and thank you again for re-recording this session for us. I've made a note of some of the questions that were asked during the previous session, so I'll ask those again, but I actually think you may have covered some of them in your presentation. The first question was: when, or if, do you think other regions will introduce
similar legislation? Yeah, I imagine other regions will. The UK is starting to think about how it's going to regulate AI; I think it's perhaps a little bit behind the EU on this, although it's taking an explicitly divergent approach. I think if I were in another country around the world, I'd be watching to see how this goes. The GDPR became quite influential, and in part that's a dynamic where a large organisation has to comply with it, so it builds the processes to comply, which means it's not actually averse to complying with something similar in another jurisdiction; it actually makes things simpler. And countries saw that the European data economy did not collapse under the weight of the regulation, so there's less aversion there. Whereas GDPR's influence on other countries' regulations is something we're starting to see now, for this it might take another five years or so. Wait and see, yeah, I think so. We'll pay attention to it, and we can come back in a few years' time and see how it's gone.

Another question that came in during the live session was about deep fakes and the fact that they appeared in the low-risk area. There was a lot of surprise around that; people thought that was maybe one of the more high-risk things that AI is capable of. Do you have any comments on that? Yeah, so the EU does see deep fakes as a problem, but I think it's less concerned with the technology itself as the problem than with the proliferation of deep fakes across social media, and with having people informed enough to understand that an image might be manipulated in certain ways. That category is around artificially
created images. Sometimes you'll use the same technology that makes a deep fake to, you know, just fill in some background on a film, or to create a bunch of AI-generated faces to put on a website, just as stock imagery or something like that. There are lots of uses for the technology that are not the manipulative propaganda use case we envisage when we talk about deep fakes. So I think that's where they're sitting with this: it's not so much about the broader social problem of deep fakes, but about how we regulate algorithmically generated images and content.

OK, thank you very much. The final question is sort of a combination of a statement and a question. One of the attendees commented that there's a lot of hype around AI, especially in education, at the moment, and their question was: is the Act realistic? Yeah, I think it is realistic. For example, we're not trying to regulate artificial general intelligence here; we're not envisaging a type of AI that's smarter than humans or things like that. It's very much about AI systems being deployed in real-world contexts. So if we're using a system to evaluate students' work, if we're doing something to automate a marking or assessment process, that's not Skynet, but it is a thing that might actually work in practice. I mean, there are practicalities of how this works in practice, but in terms of the type of AI it's focused on, it's the type of AI that's being put into use and deployed, so in that sense it is realistic, I guess. Well, thank you very much, that's all the questions
that I had made a note of from the live session. But, as you say, you've got your contact details there for anyone who wants to get in touch about anything further. Thank you once again, David; we really appreciate you taking the time for the ALT Winter Conference. Thank you very much.