Good afternoon, and welcome back to day two of the ALT Online Winter Conference. I'm very pleased to introduce our next speaker. He is David Barnard-Wills from Trilateral Research, and he's going to be talking to you about the EU's proposed Artificial Intelligence Act. At the end we'll have time for questions, so please use the YouTube chat window to post any comments or questions for David there. Without any further ado, I'm going to hand over to David. Thank you very much.

Thank you very much, Emma. It's a pleasure to be here, and thanks for having me. So I'm going to talk to you about this big piece of EU legislation that is coming down the pipe over the next few years, and hopefully give you a bit of an overview if you're unfamiliar with it, or dig into some of the connections between the regulation and AI ethics if we have time.

A little bit about Trilateral Research. We're a company based in the UK and Ireland, and we have a strong background in research projects. We've worked on many European-funded research projects, often in the areas of data protection, privacy by design, and security technology, but we've got a growing area in AI ethics, AI governance and AI regulation. We offer commercial services around data protection and cyber security, and an interdisciplinary blending of sociological and technical insights into teams. Our aspiration as a company is to find ways of developing ethical AI that the public sector can use to tackle complex social problems. You can find out more about us at trilateralresearch.com. We have clients including the University of Cambridge.
We're their data protection officer. We have clients like the V&A and the Greater Manchester Combined Authority, with whom we're doing a project on joining up data to combat modern slavery and human trafficking, and we've done work for organisations like the Information Commissioner's Office and the European Medicines Agency. So that's the company.

As for me, I'm a senior research manager at Trilateral. What that means is I build research projects: I put them together and I try and get them funded, and if we're lucky enough to get them funded, I manage some of them. My academic background is in politics, so I worked on the politics of identity systems, and since then I've been doing various types of applied politics-of-technology research: learning how to do privacy by design, and looking at how institutions do data governance. I've been a research ethics manager across many of these projects, dealing with their informed consent processes and data management processes, and I do technology foresight work: I look at upcoming technological trends and try to understand their social, political and economic impacts.

What I'd like to do with you today is give an overview of what the EU is proposing in terms of AI regulation, and our view of this from Trilateral Research; have a think about what some of the implications for educational technology might be; and, like I said, if I have time, think about the relation between this regulation and the broader field of AI ethics.

So in mid-April this year the European Commission published its proposal for a regulation laying down harmonised rules on artificial intelligence, otherwise known as the Artificial Intelligence Act, the draft AI Act, or the draft AI regulation, so you might hear it called by any of these terms. It's a little bit innovative, because it's potentially the first proposal for a dedicated AI regulation worldwide.
This comes out of a series of policy initiatives that the European Commission has been pursuing over the past few years: setting up the High-Level Expert Group on trustworthy AI, various strategies, and so on. This is the key moment in that progression.

The text we have at the moment is subject to change before it becomes law. It will be discussed and debated by the European Parliament and by the member states, so the actual text will evolve, and one of the things we're doing at Trilateral is tracking that and trying to understand how it actually comes out. It's a regulation, so it's like the GDPR in that respect: once it comes into force, it's applicable in all EU member states without having to be transposed into national law.

In terms of timeframe, it'll be a couple of years before this becomes law, and then there's a planned two-year implementation period before it comes into force. So before any of this becomes applicable, we're looking out to around 2025.

I think the EU's goal with this is, as with most things, to create a kind of single market in AI, avoiding fragmentation across the different member states with different regulations and different policies, and then to position the EU as a global leader in artificial intelligence, but also a global leader in artificial intelligence regulation.

What does this apply to? It uses a very broad definition of AI systems.
If you look at the three types of definition taken from the text of the regulation, they will cover most things you would think of if you were thinking of AI, and some things which might sit outside the boundaries of artificial intelligence if you're looking at them from a particular perspective. It's a broad definition, and the reason the EU has gone for a broad definition is to try to make it future-proof: it's not going to make a bet on which technical direction is going to become the one widely powering AI systems, it's just going to try to catch everything.

Where does this apply? Obviously it won't apply in the UK, but it's going to apply to both public and private actors inside and outside the EU, as long as the AI system is placed on the market in the EU or its use affects people in the EU. So if you're providing an AI system and you're selling it into the European market, or giving it to the European market (it doesn't really matter whether you're making a profit on it or not), if you're supplying it into the EU, this regulation will apply.
It's like the GDPR again in that respect, in that it makes these kinds of extraterritorial claims about jurisdiction, and that means we suspect it will have some impact outside the EU. Now, the UK is also doing its own AI regulation work: there's a recent national AI strategy, and the CDEI has been doing work on roadmaps to AI assurance. So it will be interesting to see how the two approaches diverge and what commonalities they share, but it's all a very emergent space.

The regulation is going to concern you if you're a provider, a developer, an importer, a distributor or a user of AI systems. It doesn't apply to private users, so if you build your own AI to do some process for you, that won't count. And it doesn't cover research, at least not explicitly; it's very much about putting things onto the marketplace.

One of the key things the regulation proposes is to chunk AI systems up into four different categories based on the risk they pose to fundamental rights and to safety. I think the thinking here is that previous machine learning systems, data systems and statistical processing have shown us the flaws when they've gone wrong. The high-profile mistakes have shown us that certain areas are just inherently more risky than others and require more attention from a regulator.

If I start at the opposite end, with minimal-risk AI: these are things that basically fall out of scope of the regulation. They're things like artificial intelligence used in computer games, or spam filters: areas where there's really no risk to rights and safety. In the case of those, the developers can choose to apply requirements from the regulation or adopt voluntary codes of conduct,
or they could just not.

Stepping up from those are AI systems seen as having low or limited risk. These are things like chatbots, artificial image or video generation (so deepfakes), or systems that use emotion recognition when they're interacting with a person. The only requirement the regulation would place on these is transparency: you'd have to know that you're talking to a chatbot. So that Google example of an AI assistant that can book you a hairdresser's appointment using natural language would have to say it was an AI assistant.

The next category, and this is the one the regulation deals with the most, is AI systems that pose high risks to fundamental rights and safety. For these there will be a set of mandatory requirements the systems have to meet before they can be put onto the market or into service. There's a general catch-all category of high risk, but there's also a list provided in Annex III of the regulation which specifically spells out about ten or eleven categories that are going to be inherently high risk. For our purposes here, I think it's worth saying that education is one of those categories, because of the potential impact on people's life progression and the effects it can have on them, and the sensitivity of the data sources involved. So essentially any machine learning system used in education is going to count as high risk. The list also includes AI that forms part of a safety system, but I think we can move on from that.

And then finally there's the category of artificial intelligence systems that are seen as posing such a fundamental and unacceptable risk to fundamental rights that they are prohibited.
The examples in this category are social scoring by public authorities (the kind of social credit systems), subliminal manipulation of people by AI techniques, and most real-time remote biometric identification, with some exceptions carved out for law enforcement. It's not a very big list, but I think the idea was to highlight some of the biggest problems.

Hi David, we're having some problems with your sound; it seems to have frozen. I don't know if you can hear me at the moment. Hello, David? Hi everyone, sorry, we seem to have lost David for a little while. I think he might have been having problems with his connection. I'm going to email him and just make sure that he can get back in. Please bear with us; I know we've got a couple of really interesting comments, so hopefully David can join us again and answer those. Thank you, stay with us.

Hi everyone, just hoping we can get David back. Hopefully it isn't artificial intelligence gone rogue that has removed him from the session. Hang on for a moment or two longer; hopefully we can get him back. Okay, thank you so much for your patience. Hi everyone, just trying to get David online; bear with us for a moment or two more. Hopefully we should be able to sort this out soon. Thank you.

Hi everyone. I haven't heard back from David, unfortunately. I'm just going to give it one more moment; hopefully he'll be able to rejoin us, but if not, I'll come back on in a minute or two and we can talk about alternatives. I'm very sorry, we haven't heard back from David.
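The four risk tiers set out before the interruption can be sketched as a simple lookup. This is purely illustrative: the tier names follow the talk, but the example use cases and the `risk_tier` helper are hypothetical, not drawn from the regulation's text, and a real classification would need a case-by-case legal assessment.

```python
# Illustrative sketch of the draft AI Act's four risk tiers, as described
# in the talk. The EXAMPLE_TIERS mapping and risk_tier() helper are
# hypothetical, for illustration only.
from enum import Enum


class RiskTier(Enum):
    MINIMAL = "minimal"        # out of scope, e.g. games, spam filters
    LIMITED = "limited"        # transparency duties, e.g. chatbots, deepfakes
    HIGH = "high"              # mandatory requirements, e.g. education
    PROHIBITED = "prohibited"  # banned, e.g. social scoring by public bodies


# Hypothetical mapping of example use cases to tiers, following the talk.
EXAMPLE_TIERS = {
    "spam_filter": RiskTier.MINIMAL,
    "video_game_npc": RiskTier.MINIMAL,
    "chatbot": RiskTier.LIMITED,
    "deepfake_generator": RiskTier.LIMITED,
    "automated_exam_grading": RiskTier.HIGH,
    "public_social_scoring": RiskTier.PROHIBITED,
}


def risk_tier(use_case: str) -> RiskTier:
    """Look up a use case's tier; unknown cases need a proper assessment."""
    try:
        return EXAMPLE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"{use_case!r} needs a case-by-case risk assessment")


print(risk_tier("automated_exam_grading").value)  # high
```

Note how context of use, not the underlying technique, drives the tier: the same image-generation technology that sits in the limited tier would become high risk if deployed in one of the Annex III areas, a point David returns to in the Q&A.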
It's really unfortunate, because that sounded like it was going to be a really interesting talk about the Artificial Intelligence Act. The recording of the first part of the talk will still be available, and I'm going to make a note of your questions so we can pass those on to David. Hopefully we may be able to post up the rest of his talk at a later point, but unfortunately, folks, I think we're going to have to end this session here. Thank you so much. I'll let you get back to your bread making, and the virtual bread making that seems to be going on at the moment. Thank you very much.

Wait, wait, wait, wait. We've got someone back. Hi David! Hello. I've had a horrible computer crash, so I'm on my phone, but I can probably try and wing it if you want. Do you want to send me your slides? I could maybe share them. I don't think I'm going to be able to do anything with this computer for a little while, so I will just talk freestyle. Okay, I think I'd got up to the point where I was talking about the four different categories of AI. Yeah, and that generated a lot of interest, so I'm going to let you carry on. Thank you so much for joining us again. Thank you; we'll see how it goes.

Okay. So what the regulation does for that high-risk AI is create a list of obligations on providers of those systems, and this is where a slide would be useful, because there are quite a lot of them. They're things like an obligation to have sufficiently high-quality data to train the AI for the purposes it's going to be used for, and an obligation to have a post-market monitoring system,
so that you as a provider know how your tool is working in the real world.

There are obligations to provide adequate documentation to potential users, and there's a whole annex which sets out the requirements for that documentation, which are quite substantial. They include quite a lot of information on the training data that's been used to train the AI, where it's likely to go wrong, its accuracy criteria, how you've tested it. So there's quite a lot of potential information in those requirements, and I think they're going to be quite useful for people who are users or procurers of AI systems, because when this regulation comes into force, you'll be able to go to a provider and say, "I want to see this, and I want to see this", and they should have a legal duty to show it to you.

There are also some obligations on users of AI systems in this regulation. They're not as onerous as the obligations on providers, but they are things like using the AI system for the purposes for which it was intended, following the instructions, reporting any malfunctions or serious errors back to the provider, and using the information provided by the provider to do a data protection impact assessment, as under the GDPR.

I think I'll skip ahead to our take on this, which is that, as a company that's involved in ethical tech development, we're quite glad to see these issues being put on a regulatory footing. We've been recommending some of the things in that list of requirements for high-risk AI to our partners and our clients for some years now: things around being as transparent as possible, providing adequate documentation, being quite clear about the accuracy and the confidence of your system,
and knowing when it fails and how. So we're quite glad to see those kinds of things. We are conscious that it's quite a large list of requirements, and that might be quite burdensome for a small business. But we don't think this is going to strangle innovation in AI; rather, it's a piece of regulation where the intent is: it's one thing to have AI when it's a fun tool or a fun app, an experiment or something social, but when you're building those technologies into the fundamental fabric of society, when you're putting it into critical infrastructure or into educational institutions, then it has to meet a certain standard of quality and reliability.

It's worth saying the red lines could have been harder. The red lines around the types of AI that are prohibited: there aren't very many types of AI that are prohibited, and the exceptions for law enforcement activity are quite broad.

I think it's an interesting thing that we shouldn't really talk about this in terms of AI ethics. It's got a relationship to AI ethics, but it is a regulation.
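The documentation obligations discussed above (training data, accuracy criteria, failure modes, testing) could be sketched as a simple checklist that a procurer might run against what a provider hands over. This is a purely illustrative sketch: the class and field names are hypothetical paraphrases of the talk, not the wording of the regulation's annexes.

```python
# Illustrative sketch of the kind of technical documentation a provider of a
# high-risk system might be asked for. Field names are hypothetical
# paraphrases of the talk, not the regulation's actual annex wording.
from dataclasses import dataclass, field


@dataclass
class HighRiskSystemDocs:
    intended_purpose: str
    training_data_description: str  # provenance and quality of training data
    accuracy_criteria: str          # how accuracy is defined and measured
    known_failure_modes: list[str] = field(default_factory=list)
    testing_summary: str = ""

    def missing_fields(self) -> list[str]:
        """Return documentation a procurer could ask for that is still empty."""
        missing = []
        if not self.known_failure_modes:
            missing.append("known_failure_modes")
        if not self.testing_summary:
            missing.append("testing_summary")
        return missing


# A procurer checking an incomplete documentation pack (hypothetical values).
docs = HighRiskSystemDocs(
    intended_purpose="automated essay scoring",
    training_data_description="anonymised past submissions, 2015-2020",
    accuracy_criteria="agreement with human markers",
)
print(docs.missing_fields())  # ['known_failure_modes', 'testing_summary']
```

The point of the sketch is the procurement dynamic David describes: once the regulation is in force, gaps like these are things a buyer can demand the provider fill, because the provider has a legal duty to supply them.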
It is law. They do different things, we have different purposes for them, and neither can replace the other. This is very much not an AI ethics regulation but an AI safety regulation, and that's the language the European Union is using: language around safety and trustworthiness. The regulation is going to be very strongly linked to standardisation processes, so it's about this idea of reliability and rigour rather than about AI ethics.

There's overlap, of course, but it doesn't touch on issues like: what are appropriate problems to use AI on? What are the broader social and cultural impacts of using AI in different critical social infrastructures? It doesn't touch on the ethics of being an AI developer, or of being a professional person deploying AI. And it doesn't touch on anything about the makeup of the industry developing AI, who's included in that and who has access to it. It doesn't touch on any of those areas. It's got this focus on reliability and trustworthiness: if you buy an AI product to do something, it should do that thing, and you should have confidence that it does, largely through the mechanism of transparency.

A couple of other important innovations: it proposes some new regulatory authorities. There will be these national supervisory authorities. They could be existing organisations with new powers, or they could be entirely new organisations, so it will be interesting to see how those emerge. They will have a market surveillance role, to try to understand how AI is being used in their country, and they will have powers to take AI systems off the market.
That's if a system is egregiously breaking these rules. The act gives these organisations the power to access the documentation and data necessary to do their enforcement role, so that could be quite impactful. And the fines, which are probably of interest to people, haven't been confirmed yet for the different violations of the law, but they're going to be in the GDPR ballpark.

Without my slides I don't know what else I can say, so maybe it would be good to go to some questions, if there are any.

Mm-hmm. Oh, David, we've lost you again. Oh no. Hopefully you can join us again. The first one I want to come to... it's gone, it's frozen again. Let's see. Hopefully we'll get David back to answer some of the questions. It seems to have frozen at the moment. Hi, David! I know how to use computers, honest. You know, while you were away, there was some chat in the comments about this being some kind of malicious AI attack that was taking you offline. I'm getting basilisked from the future, yeah.

Okay, we had loads of questions and comments, actually, and we lost some viewers because I nearly cancelled the session before you came back, so we probably would have had more if I hadn't done that. The first question is from Matt, and he's wondering (I think you addressed it in the early part of your presentation) how long do you think it will be before a similar act is adopted in other regions?

So, if I was to use the GDPR as an example, that did become a model for various different data protection regulations around the world, so it got some traction. In the United States, for example, the legislation progressing through California has quite a strong parallel. It will be interesting to see how it works.
The UK is sort of doing its own thing; it very definitely has ambitions to do its own thing and does not want to do this, so that's definitely a big area of politics. So, five years in the EU, I guess, and then people would watch and see how it works, watch and see what its impacts were. And then I think the channel would be: if people are having to meet this standard, there's quite a lot who have already met that standard, so they're okay to push that into other parts of the world. If you're outside the EU and you're buying a made-in-the-EU AI product, you can say, "Well, I know you've got this documentation on its accuracy, and I know you've got this documentation on where it fails, so can I have it?" So there's a sort of extraterritorial impact even where there's not a law.

And just on that topic, have you noticed much of a reaction from industry? Is there any kickback?

So, from industry: the EU is running a consultation exercise, so all the people you would expect to be responding to that consultation are, and their comments, their perspectives and their positions are available on the EU consultation website. Who knows what the background discussions are. I think the problem that some small and medium enterprises faced with the GDPR was that they were saying this is an instrument for large industry: a regulation that large firms will be able to meet because they've got compliance departments, because they've got dedicated lawyers who can work on this. So there might be the same kind of situation here. If you're a professional enterprise building enterprise-grade or industry-grade machine learning systems, you'll probably be able to meet these requirements.
I think the challenge will be for the smaller companies, or university departments and things like that.

We'll take these next questions quickly, because we lost a bit of time. This was more of a comment from Sarney, I think on your slide about the four areas of risk: deepfakes were in the low-risk category, and she was quite surprised by that. I was surprised by that as well.

So I think I would see that not as the social problem of deepfakes, but deepfakes as a category, you know, artificial image generation as a technology, which will have multiple different uses. You can use it to impersonate somebody on the internet, or to fake politicians saying something they shouldn't, but you could also use it if you just needed thirty new faces for an advertising campaign or something; there are other uses of it that are not socially problematic. And I think if you were using that technology in any of the high-risk areas, then it becomes high risk. So there is some sensitivity to context of use there. I was surprised when I first saw that as well, actually, but I think that's the justification for it.

And our next question is from Matt. So thanks for fighting through, David. Yep, absolutely, well done. He wants to know: there's a lot of hype around AI, so is the act realistic?

Yeah, I think there is a lot of hype around AI.
I think there are a lot of things that are built or sold as AI that turn out to be either some variant on the mechanical Turk situation, where there's somebody sat under the table playing chess, where there's a lot of human work going into making something that looks like an AI; or areas where something is basically some variant of statistics, used to generate some kind of insight which is useful and works, but is not quite what people imagine when they say AI. I think the act is quite grounded, in that it's not banning artificial general intelligence or anything; it doesn't have anything to say about those things. It's very much focused on applied artificial intelligence systems put in place in important areas of society, which is already happening.

I accidentally clicked on another question there; sorry for confusing you if that flashed up. This is going to be the final question, and it's from Vicki, and it's just asking why education is in the high-risk area.

Yes. I wish I had my computer, because there's a recital to the regulation which explicitly says why
education is high risk. The elements of education that are pulled out as particularly high risk are automated assessment and application processes, and I think they give a good example of why this could be potentially problematic. Those are the areas that are going to have a lot of impact on people's progression through education, but also on what comes afterwards: whether you can get onto a course or not because an AI has decided, whether you can get access to a particular educational institution or not because an AI has decided. That's obviously a very contemporary, live issue. And obviously, with pressures to scale student numbers and things like that, to use automated systems for assessment, there's going to be a lot of push to use these things. So I think it's just because it's an important area: it's socially important, and high stakes for people's lives. Yeah, absolutely.

Okay, so I'm going to thank David for battling through some really difficult circumstances there and joining us again. If you hang on for a moment, I just want to talk to you about the presentation. But I'm going to say thank you to everyone who has joined us today, thank you for your questions and comments, and for bearing with us. See you at the rest of the event. Thank you very much.