Hi! So I've been totally looking forward to this, but I have to admit I was just at another conference; this is literally my second 9 a.m. keynote in a row, and I haven't really figured this out. I was a professional programmer in the 1980s, and I loved it. I absolutely loved it. But I didn't have a great social life, so I decided to go do a master's degree, and I'd always been interested in artificial intelligence, and I wound up here. I haven't programmed professionally since 1998, so I've never been to this kind of conference. I've also been completely disorganized: I don't know what I'm doing today, so I'm planning to spend all day at the conference. If anyone has suggestions about what I should go do (maybe the sprints, maybe mentoring, I don't know), come up to me. That would be great.

I'm going to try to go pretty fast through the talk, because I'm hoping we have a big conversation at the end. Mostly these days I talk to academics and to policy makers, some of whom are clueless, a few of whom are smarter than people make out. I love being back with the people who are actually building the software, getting feedback, and finding out if I'm behind on anything. So I really hope we have fifteen minutes of conversation and you tell me what you think. That slide supposedly shows four different ways of getting in touch with me, but let's face it, we're mostly still on Twitter, even though we don't want to be.

Let's talk about this: is AI a product or a person? Actually, let's ask you. How many people think it's a product? Anyone think it might be a person, sometimes? So this is why I like being in these kinds of groups. Okay, sorry, you're just the one person. Now, a lot of people think I hate robots because of some of my papers, and I love robots.
Why do you think I went and did a PhD in AI? I love these; these are pictures I took when I was at a meeting where these guys were building this robot. But of course it's a product. It's something that's built; it's replaceable, replicable. That was academics, but the vast majority of AI is of course coming out of corporations; it's an extension of corporations. You've got a piece of a corporation, a bit like a microphone and a camera and some of its intelligence, in your house if you have one of those speaker thingies, or of course if you have a mobile phone. So yes, this is a product, and this is a person. Thank you.

All right. There are actually a lot of words in this talk and not that many pictures. And I forgot, I have some theory at the beginning. Okay: what even is regulation? A lot of people hate regulation. They're like, "Why are you regulating us?", and it's like, what are you talking about? I come from biology, and in gene regulation, regulation is how we persist. I think there's a slide about this: there's up-regulation and there's down-regulation. This is one of our papers, looking at the fact that even things that look like they might be bad mutations, negative things, are recoverable and are part of the process of innovation. This is biology.
So regulation is just the means by which any complex entity perpetuates something into the future. That's basically it. You all know the thing about your cells dying, and that in seven years you don't have any of the same cells. It's not exactly you, and you look older, and countries look different and companies look different, but there's something recognizably like you in the future, and that's what we're all working on. Governance is when we do that explicitly. Nobody is sitting here trying to plan their gene regulation, but we are planning when we're going to eat next, and that's part of regulation; you're trying to regulate your weight. Breathing is regulation. These are all regulation.

Governance is when you make something explicit: anything that's spoken or written about. And again, I love doing consciousness, but I only have a half hour here; bring me back if you want my consciousness talk, it's really fun. By this definition I don't have any trouble saying that AI is conscious; I just don't think it's a moral patient. But let's back that up.

So corporations and other societies, including families, all self-govern. Of course we self-govern; corporations must self-govern, or they wouldn't be here tomorrow. But we also coordinate, because we have geographic interests. Talking about this in the Czech Republic right now, that's extremely evident. We always have problems like, and I used this example even before COVID: if your kids are vaccinated, it doesn't help that much if the neighbors' kids aren't vaccinated too. And there's the whole thing about air quality and water. Can you even get clean water? These are all things that you coordinate.
And you get your nation to take care of these kinds of things. A lot of people have this idea that government is some alien agency, but we constitute the nations we compose, the nations that are composed by geographic proximity. Government is the way that we coordinate. So we should constitute our governments well; we should realize we have an obligation to reflect the interests of our communities through our government. That's part of the model I'm trying to communicate here.

Governments provide up-regulation: I don't think there's a government anywhere on this continent that isn't pouring money into the digital economy. They give us lots of support. But they also restrict us, and restrictions, if you learned AI you learned this too, restrictions can actually help you. Literally, it's not that you're just searching under the lamp post when you're looking for your keys; it's that someone shone a spotlight where the keys mostly fall. That's what good governance is like. We all know there can be bad governance too, but that's why we want to work with our governments, to make sure the spotlight is in the right place. That's the model I'm trying to describe.

So now let's talk about what the EU is specifically trying to do about AI. We've got plenty of time, but I don't think I included some slides, so I'm just going to tell you this: a lot of people think AI is only happening in China and in the US, and that's because of a sort of misinformation figure where people looked only at the biggest companies. But a well-regulated economy doesn't allow really large companies that governments can't control. So we did a paper called "Is There an AI Cold War?"
And we showed that the EU is actually comparable to China if you look at how many companies have at least two patents at the international level in one category of AI (model-based AI), and also if you look at the market capitalization of those companies. And actually, if you exclude China, the US, and the EU, the rest of the world combined is at about the same level as China and the EU combined; the US is dominating on those two metrics. But that's partly about the metrics, too. Anyway, the point is there's tons of AI in Europe, and as you will know, a lot of it is embedded in conventional industries. It's not just that we have big superstars; it's everywhere.

Okay. So people talk about the AI Act a lot, but look at this: the EU is really busy. And I don't know if you know what we have here; it's this incredibly lightweight bureaucracy, one of the cheapest in the world for the amount of stuff it does. It is that extra layer, so we pay for our own governments and then we pay for this extra thing, but it's pretty smooth. Anyway, of all these things, I'm not going to talk much about the Digital Markets Act, which is basically about market dominance by small numbers of actors, mostly foreign. But the Digital Services Act I'm going to talk about, and the GDPR, and the AI Act.
I'll talk a little bit about those, and I could talk about liability if you want me to; the other stuff, the more financial end, I'm not that involved in.

So, the GDPR. I don't know if you know this, but I'm an American, and I moved to Britain because it was cool. And when I was in the UK, in the EU, I figured out that the EU is really cool too, so I took a British passport specifically to have an EU passport; that was in 2007. And again, we're regulating AI in the EU; we're not regulating it very well in the UK or the US, so that's kind of where we're starting.

When I thought about what we need to do: I was trying to figure out Google, because I did my PhD at MIT and a lot of my friends were at Google, and I was interacting with them, and I was just like, what is this new entity? I didn't learn about this in high school civics. And I think that about the Red Cross and various NGOs, and about EuroPython too: what is this new kind of community? Anyway, Google had a lot of power, and it became evident that governments were looking at that; I'll show a slide about that later. A lot of the things I was worried about are in the General Data Protection Regulation, which we've had as law now for about five years.

So: making sure there's valid consent. We all know, in fact I hope you know, that's been gamed. It's supposed to be in the law that it's just as easy to reject as to accept, and that it should be on the first screen. The Parliament also anticipated that not that much software would pop up those windows. So a lot of people are popping up those windows, and it's like, why do they even need to keep
personal data at all? The idea was: don't do that. But a lot of the big corporations were deliberately trying to make the EU look bad by making it really annoying, with these really crappy consent windows.

Then there are the transparency requirements, and the fact that you can correct data that's wrong about you. There's actually an interesting discussion about that, and it's one of the things I'm not crazy about: I like correction, but I don't like the deletion, because you need to know why you think something; it's your externalized mind. So I'd rather have the corrections labeled there. And data portability, and of course the automated processing. That's one of the fundamental things: if somebody decided something about you using a machine, you need to be able to see why, and you need to be able to contest it.

Okay. So the Digital Services Act is really super interesting too. Is anyone here worried about the Digital Services Act? No one? Oh my gosh, you guys. The whole point here is that we want a safe, predictable, trusted online environment, so we have to defend our users. It's basically about three things, which again are the main threats of AI if you think about how societies are being manipulated. The first is these stupid recommender systems. Sorry, how many people work on recommender systems? Okay, sorry. But personally, I think Twitter and Facebook both started going south when they took away your capacity to just see the people you were curating, the people you followed.
And now it's pretty random. Well, it's not random; random would be better. Second, targeted advertising. Obviously that was a big deal, particularly in political advertising, because we have all these protections to try to keep our governments good, but they're based on people being able to check whether political advertisements are honest. If everybody's seeing their own political advertisement, created on the fly, how can we possibly police that? So targeted advertising is a problem. It's also a problem for gambling addicts and things like that, but it's a super problem for democracy. And then of course profiling: how did you decide to serve me this advertisement? What information did you have about me? Why can't I transparently find this out?

There's this thing with Facebook. They said, okay, you want to know? Click on this "Why did I see this ad?" link and it shows you three reasons you saw the ad. And some researchers at Princeton started taking out ads, and they figured out that what Facebook showed you was the three least informative things. The thing that actually really narrowed you down, you would never see. You'd almost always see something like your gender, because roughly half of the audience is being targeted on that, one way or the other. They were literally, deliberately choosing what to reveal, so the EU is trying to stop that. They're saying no, we need something clear.

So, yeah, I talked about this already. There's something weird going on with targeted advertising, and I don't understand it. I'm not an expert on this, but some people claim no one's making more money.
They claim it isn't actually helpful. Everybody thinks they want to know about their customers, and I find this really problematic if it's true, because look at this: everybody switched. Everybody switched to using Google and Facebook. This is what I mean when people say AI and robots are going to take our jobs: this is one of the main industries where we destroyed jobs, not by making better news, but by just taking the entire revenue model away from journalism. So everybody switched, they all want this thing, and yet some are saying, when we turn it off, we don't seem to lose any money; it's not clear it does a better job of selling stuff. I find it hard to believe that this kind of massive shift is just trend or fashion. Maybe the numbers and the economics haven't been worked out correctly. Maybe it somehow really helps with your innovation to add this information about your customer base. It's also possible that there's some kind of external factor.

And speaking of that: you may see this slide says NOFORN, and it's a little hard to read, because I guess somebody decided you can put it on Wikipedia even though it says NOFORN. This is one of the Snowden slides. We didn't know that these companies had all signed up to this until Snowden, and Google didn't know that they had been hacked by their own government. They weren't happy about signing up to this; they were about the third to hold out, I guess. But they were really unhappy when they found out from Snowden that they had been hacked. So something weird is happening there.

Let's get down to the AI regulation, though. I think it's almost like a decoy.
It's so boring. Okay. So yes, famously, we're categorizing AI, starting with stuff that we ban in the EU, because "that's not who we are". That's the theory. The banned category includes social credit scoring; incidentally, China has also signed up to the UNESCO recommendation that says we should no longer do social credit scoring. It also covers biometric scanning. It's okay that your passport knows who you are and that you can go and have your passport checked; you've consented to having a passport. But it's not okay that people are tracking you as you walk everywhere in the street and know exactly where you are. Which, incidentally, I've heard is already true of cars, at least in Britain: there are so many cameras in Britain that they know exactly where all the cars are, and a lot of people are associated with cars.

So those are the things that are banned. And then the vast majority of AI, although that's also what they said about GDPR, the vast majority of AI is supposed to be no problem at all. And then there's this little category which is high risk, and that basically means you have a product that can change people's lives. Okay, I'm enough of a geek and enough of a psychologist to know that every product changes people's lives, but what they mean is things like education, welfare, opportunities, whether you get a loan or not, medical stuff. Those are the things in the high-risk category. And what happens if you're high risk? You just have to basically do DevOps. It's really weird: this is something that every other kind of product already has to do. For some reason software has been excluded from normal product law. People were arguing that software is a service.
They say it's not a product, and although some people argue that services are also products, I don't even need to go there. In my opinion, software is definitely a product that can provide a service. I think people get a little confused, and again that's the AI thing, when it's providing a service. But you sold it to someone to provide that service, or you built it as part of your corporation to provide that service. Actually, it's when you sell it to people that it becomes a product.

People don't like the idea that software is a product partly because it's so agile, because the libraries change out from under you, and they say, how could you have a product like that? Well, what if you were building a bridge, and you had concrete that collapsed, and you said, "this isn't a product, because this concrete collapses all the time"? No: you can't use that kind of concrete. We're in a situation where we're building a lot of software as central infrastructure; that's why people are trying to figure out how to hack it. You can't run a city anymore without software. So you can't build it out of that kind of concrete. The libraries have to be such that even if somebody does something really weird and something goes away, the city doesn't fall over. Does that make sense? I hope so.

Now, on compliance costs: the EU said compliance is going to cost nothing, because you already keep these records. And then some American think tank that hates the EU said it's going to cost every company, especially the little ones, two million a year. So, a colleague of mine, Meeri Haataja. She's super cool.
She's Finnish, so it took me a long time to learn to pronounce her name. She really did most of the work, but we went through all the obligations together, and she had a lot of experience in SMEs and also a little bit in consultancies. It came out to somewhere between about eighty and two hundred thousand, depending on the kind of company you're running. So there are compliance costs, and if you're not used to doing compliance, then obviously this is a new thing that goes onto your bottom line.

But I was just at another conference, kind of like this one, only it wasn't pure programmers; it was something called Euro Chatbot, and a lot of the people there were from things like banks. It turns out a lot of the people doing NLP are trying to reach out to people to help them figure out things like servicing their debts or taking care of their health care; there were a lot of NGOs trying to help people connect to welfare services. Anyway, if you're in a normal company like a bank, or you're doing anything financial, you already have this. There was a guy from an SME there who said, "I have about forty people working for me. Every year I personally do the compliance. It takes me two weeks, and it usually improves my business, because I go and rethink things." Which, again, is good governance: the stuff you're doing compliance on is actually helping you think about how to run your business better. He said, "And then I spent half a day on the GDPR. I do not know what these tech companies are complaining about; it's nothing." I've heard that from a number of people. Of course, you don't hear that from pure tech companies, because for them it's the first time they've had this particular burden, and it's really shocking to them.
So anyway, it's not that expensive. And the one thing that every piece of AI has to do is make sure people know they're working with AI. Now, this seemed like a no-brainer to me, although I knew people were faking it. I was in America for a while, between 2015 and 2019, when my partner was working for Princeton, and you'd pick up the phone and I, as an AI expert, was not sure whether I was talking to a person or a machine. And they call a lot. So I started hanging up on things that might have been people, and that felt just wrong.

So anyway, this is part of the AI Act, and the Euro Chatbot crowd was freaked out; nobody had told them. And it turns out it's actually hard to say whether you're talking to a person or a machine, because what's going on is that there are real humans, and they're orchestrating a lot of bot-like things: narrow problem-solvers for a particular context. The human thinks, "Oh, this person's having this problem with their password," and then shunts that bot to them. So it's a hybrid between a person and AI, all the time, when you're having these online conversations, and they don't even know how to declare which it is. I'm sure that will shake out in a year or so. We're all geeks; we want things to be binary. We're like, "Is this going to work or not?" For a lot of things in the law, you just give it a minute and we'll figure it out. Lawyers are actually really easy to talk to; they're surprisingly like hackers. They make things work; they're people who really get stuff done. But it is an interesting problem.
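Since we're all programmers here: that hybrid setup can be sketched in a few lines. This is purely illustrative; the bot registry, the `classify` hook, and the canned replies are all made up, standing in for however a real contact-center platform tags an issue and hands it to a narrow bot.

```python
# Hypothetical sketch of the hybrid setup described above: a human agent
# (or a classifier) tags the customer's problem, and narrow single-purpose
# bots are shunted into the conversation when one matches.
BOTS = {
    "password_reset": lambda msg: "I've sent a reset link to your email.",
    "billing": lambda msg: "Your last invoice was sent on the 1st.",
}

def route(message, classify):
    """classify() stands in for however the human supervisor tags the issue."""
    topic = classify(message)
    if topic in BOTS:
        return "bot", BOTS[topic](message)
    # No narrow bot fits, so a human stays in the loop.
    return "human", "Let me look into that for you."

kind, reply = route("I forgot my password", classify=lambda m: "password_reset")
print(kind)  # bot
```

The disclosure question in the AI Act is exactly about this seam: from the user's side, `"bot"` and `"human"` replies are indistinguishable unless the system labels them.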
We're going to have to sort that out. Now, a lot of people say, famously, that the AI Act is proportional and risk-based, and, speaking of binary, that was supposed to be a continuum. Unfortunately the more conservative wings (I've heard two stories about this) insisted on discrete levels. And as I've just told you, there are actually about four levels there, not just "no problem" versus high risk.

But I think that for the rest of us, who are not necessarily classified as high risk, it would still make sense to partially comply, because again, it's just DevOps. Take some of these reporting requirements and just think about them. You're programmers, so I can probably go through this part much faster.

Okay, I think this is one of my famous bits. I was one of the people who in 2017 published the paper showing that AI is racist and sexist if you use machine learning, and that it's exactly as racist and sexist as the people whose data you've processed. That's the whole point: you're learning what they do and what they say; not what they think they say, but what they actually do.

So, what this is: this is Finnish again, and it's not from the friend I just mentioned; it's from somebody I don't know on the internet. But in our paper, Aylin Caliskan did this with Turkish. Turkish and Finnish both use the same word for "he" and "she"; in Finnish it's "hän". The point is that how it got translated into English depended on the rest of the sentence. This is just a standard thing; you can play around with it in Google Translate yourself. It isn't that Google Translate itself is sexist; it's that it's simply more frequent to see "she" next to "laundry" than to see "he" next to "laundry". It's just showing you the probabilities.
That's the way the language is in English. There may be some countries where men do most of the laundry, and then it would be "he" in that language.

So anyway, people are very upset about this and they want to fix it. And, oh, that's the paper I just told you about. Jeez, I can't remember my own talk, sorry. First I want to point out what we did. There's this thing called WEAT, the Word Embedding Association Test, and it allows you to see how racist or sexist your algorithm is. And there was this other thing called WEFAT (so WEAT was the Word Embedding Association Test, and I forget now what the F stood for). You take your rating of how well associated a word is with being male or female, or with various races, or whatever your axis is, and you put that on the y-axis. And then you look at actual data. So these dots here: on the left is the model that was producing all these sexist associations, and on the bottom are the labor statistics, the census, about who actually holds these jobs. These dots are first names, English first names like Alex and Chris, and you're asking how many of these people are male and how many are female. And the dots over there are the names of jobs. Unfortunately some jobs had two words, so if it was "domestic engineer" we just have "engineer" there; that's probably some of the noise. These are huge correlations. This is with the 1990 US census, the last US census that had the information about first names in it, so that was the closest we could get.

And notice this is the English-language web. I was just at a meeting where people were freaking out that English is so dominated by the American experience. If you're using these chatbots, the English ones, they're going to start biasing your society to be more like America.
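For the geeks: the core of a WEAT-style measurement is just cosine similarity between word vectors. Here's a minimal sketch with made-up three-dimensional toy vectors; the real test uses pretrained embeddings (e.g. GloVe) and permutation statistics, which I'm omitting.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, female_attrs, male_attrs):
    """WEAT-style score: mean cosine similarity of word w to the female
    attribute set minus its mean similarity to the male attribute set."""
    return (np.mean([cosine(w, a) for a in female_attrs])
            - np.mean([cosine(w, a) for a in male_attrs]))

# Toy 3-d "embeddings"; real WEAT uses pretrained vectors.
she, he = np.array([1.0, 0.1, 0.0]), np.array([0.0, 0.1, 1.0])
laundry = np.array([0.9, 0.2, 0.1])   # closer to "she" in this toy space
engineer = np.array([0.1, 0.2, 0.9])  # closer to "he"

print(association(laundry, [she], [he]) > 0)   # True: "laundry" leans female
print(association(engineer, [she], [he]) < 0)  # True: "engineer" leans male
```

WEFAT then puts that same association score on one axis and ground-truth statistics (like the census occupation data) on the other, which is where the big correlations come from.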
Because it's all reflecting the norms of our experience, and there's a lot of American text. Anyway, enough about that.

Back to this translation problem. The point is that, literally, the first drafts of the EU AI Act (nobody's perfect) said "unbiased data". There is no unbiased data. In fact, in the old way we used to talk about machine learning, every regularity is a bias, and that's exactly what you're looking for. That's knowledge. And it's also knowledge that it turns out mostly women are doing laundry; that's something we can mine, and it's actually a fact. We might want to correct it, and that's what consciousness is for, we can choose new targets, but currently it is mostly women who are doing the laundry.

So rather than saying "let's have unbiased data": if we just use a simple, transparent, easily auditable algorithm, then we are going to get stereotyped output that replicates our lived experience. But we can choose, if we want to, what a fair output looks like. Now, this is super hard. You think fairness is easy? No. Say you want to have (again, setting aside that gender isn't binary) an equal number of men and women in some kind of position. Then you have to decide that that's what you want. And if the groups don't have the same test scores, and you're using tests, then you have to change the opportunity, so it's harder for either the men or the women, whichever is currently dominating, to get in. You can't have equality of opportunity and equality of outcome unless you started in an already fair system.
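Here's a toy numeric illustration of that trade-off. The scores are invented, but they show why a single threshold can't deliver both kinds of equality when the groups' score distributions differ.

```python
# Two groups apply for positions. Group A's applicants score higher on the
# (historically shaped) test, so one rule cannot give both equal treatment
# of scores and equal outcomes.
a_scores = [90, 85, 80, 75, 70]
b_scores = [72, 68, 65, 60, 55]

# Equality of opportunity: the same threshold for everyone.
threshold = 70
hired_a = [s for s in a_scores if s >= threshold]
hired_b = [s for s in b_scores if s >= threshold]
print(len(hired_a), len(hired_b))  # 5 1 -- unequal outcome

# Equality of outcome: take the same number from each group, which means
# group B gets in with lower scores (unequal opportunity).
k = 3
print(sorted(a_scores, reverse=True)[:k])  # [90, 85, 80]
print(sorted(b_scores, reverse=True)[:k])  # [72, 68, 65]
```

Which of these you pick is a value judgment, not a technical one, and that's the choice the talk is saying has to be made explicitly.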
But once you've chosen which fairness you want to apply, you can either use human-readable hacks (there are lots of ways to just write rules and make something happen, like when you're doing text prediction and you choose not to express bad words; that's very easy to read), or, if you're going to do something complicated, you can have a second step in the algorithm.

Now, I wrote a paper in which we said we think it's really important to have this compartmentalized. And yet almost everybody who cites it was trying to warp, to distort, the original learning step, as if they could choose all the "isms" there. And I was really worried that the reason they were doing that was that if they could get rid of the sexism in the learned model, they could also make you more likely to go to their advertisers, things like that. So I thought they were just being bad; they were trying to hide things. When we give corporations power, that's what happens: models can be distorted for ostensibly good reasons but to do bad things. And of course it's also a little slower to have these two steps. But I was just at a meeting about generative AI where the American companies are starting to realize this is a problem; they're saying it might be worth doing two or three steps so they can justify each of them.
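A minimal sketch of what that compartmentalized design might look like. Everything here (the function names, the score dictionaries, the rule registry) is hypothetical; the point is just that the fairness interventions live in a separate, readable, logged layer instead of being baked invisibly into the learned model.

```python
# Step 1 learns what the data actually says; step 2 is a separate,
# human-readable policy layer whose interventions can be audited.

def learned_scores(context):
    """Step 1: descriptive model output, reflecting the (biased) data as-is."""
    if context == "laundry":
        return {"she": 0.8, "he": 0.2}
    return {"she": 0.5, "he": 0.5}

FAIRNESS_RULES = [
    # Each rule is named, so an auditor can read exactly what was changed.
    ("equalize_gendered_pronouns", lambda scores: {k: 0.5 for k in scores}),
]

def policy_layer(scores, audit_log):
    """Step 2: apply the chosen fairness rules and record each intervention."""
    for name, rule in FAIRNESS_RULES:
        scores = rule(scores)
        audit_log.append(name)
    return scores

audit_log = []
out = policy_layer(learned_scores("laundry"), audit_log)
print(out, audit_log)
```

Collapsing the two steps into one opaque model is faster, but then nobody (including a regulator) can tell a fairness correction apart from a commercial nudge.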
They're just starting to see that the compliance people are coming for them.

So anyway, this whole thing is the translator. Another thing that bothers me is people saying, "we're not even going to train our system on corpora that include derogatory language", things like that. But who is it that's producing this derogatory language? Quite often it's disadvantaged minorities, and you're saying this AI system won't be able to understand them. So I don't think we should have moral panics about what ends up in that first box. We should worry about the bottom box, about what the company allows out as expression. As for what the system comprehends: well, we want it to understand our world, right? So the whole thing is the translator; it's not just that one little box is the translator. And the point of the DevOps is that every stage of this should be auditable and replicable.

Now, this is news. Well, this was news to policy people a few years ago, and now the AI Act reflects it. Microsoft and others used to helicopter into policy meetings and say, "You're going to lose deep learning if you start trying to regulate, because we can't explain what each of the weights does." Nobody goes into a bank and says, "Tell me what all the synapses are doing in each person who works for your bank." That is not the question. The question is: how did you know it was a legitimate product that you could release on the world? So I'm telling these people about things like build-to-test. We've been doing that since about 2000, around when the Agile Manifesto came out. So all you have to do, and this is news not to you but to these people: somebody architected the system. Somebody designed it.
Now, you guys know better for your own companies than I do; AI companies tend to be way worse at this than normal software companies, about whether someone saved all these papers. But somebody architected it, and there are logs: there should be revision control logs. Again, telling them that these things exist is, like, this big insight. And there are snapshots of the testing logs. And also, once the system is active, if it is an AI system, you often keep track of the inputs and decisions. Not for too long, because of data privacy, but if there's a car crash you keep it for something like 24 hours, like a dashcam, right?

Okay, so the point is that all of this is for the benefit of developers, as you guys know. That's why we started doing it. I mean, I remember when revision control was a big deal, in the 1980s, and we were like, wow, this is cool. And then I started using it just for myself so I could keep track of my old versions of things. But it's also potentially auditable. So both the Digital Services Act and the AI Act assume we should be able to audit. But you can talk to these consultancies, and they say: oh yeah, you guys say that, but you go into these companies and those files are not there. That record is never there. It's negligent.

Again, with any other product, if you couldn't prove you had done the right thing, then you're held liable for any problem made by your product or anywhere near it. You need to be able to show that it was the user's fault, or that somebody hacked into your system. You need to know, if there's weird behavior, whether it's because of your software, somebody hacking it, or your user abusing the system.
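The short-retention idea, keeping inputs and decisions around just long enough to investigate an incident and then dropping them for privacy, might look something like this sketch. The 24-hour window, the `DecisionLog` name, and the injectable clock are all assumptions made for illustration, not anything prescribed by the DSA or the AI Act.

```python
import time
from collections import deque

RETENTION_SECONDS = 24 * 60 * 60  # assumed 24-hour window, like a dashcam


class DecisionLog:
    """Rolling log of (timestamp, inputs, decision) records.

    Records older than the retention window are pruned on every write,
    so for data-privacy reasons nothing old survives, while recent
    records stay available if an incident needs investigating.
    """

    def __init__(self, retention=RETENTION_SECONDS, clock=time.time):
        self.retention = retention
        self.clock = clock  # injectable clock so pruning is testable
        self.records = deque()

    def record(self, inputs, decision):
        now = self.clock()
        self.records.append((now, inputs, decision))
        cutoff = now - self.retention
        # Records are appended in time order, so pruning from the left
        # removes exactly the expired ones.
        while self.records and self.records[0][0] < cutoff:
            self.records.popleft()

    def recent(self):
        return list(self.records)
```

The injectable `clock` is just there so the pruning behaviour can be exercised in a test without waiting a day.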
That's the kind of thing you need to be able to talk about. So anyway, like I said, ordinary product law presumes a capacity to prove that you've done due diligence, followed best practice, avoided the worst practice, and that includes documentation that most of us do all the time. Unless you're an AI company that came out of the physics department and you didn't take CS 101, right? Or at least systems engineering, which is sometimes second year.

Okay, so there's another question, which is: who is going to do this? The consultancies make their money off financial audits, so they won't touch the companies they're already financially auditing. And they're talking about this like some kind of risk assessment, like they do for companies trying to move into a new area. And I'm like, no, you guys also do cybersecurity, why aren't you building up from your cybersecurity practice? Because in the first place, if the system isn't cybersecure, then everything else falls apart; who knows what those records even are, right? So these guys are not thinking about it, and I do really worry about this. We've written all these laws, we said, oh, we can make the world a better place. Are there people that can actually check the audits?
There are two different ways to increase the number of people who can: one is by training them, but the other is by making our software more transparent and more comprehensible to other people, making sure the documentation is clear, stuff like that. So yeah, I feel like I've talked too much about this compliance part, but I'm getting excited about the geekery of it, and most of you don't look like you're asleep yet, so that's quite impressive.

But the point is, okay, I will say this: the DSA tells very large companies that they have to think about what the threats are, they have to come up with a way to address them, and then the EU is just going to check their work. And again, if you have a computer science degree, you know from theory of computation that it's easier to check whether somebody did something right than to do it in the first place, in theory, right? So one thing to remember is that even if right now we don't have the enforcement and audit capacity, and there's been a big problem with the GDPR, that it has not been enforced adequately, because we're talking about digital records it's still worth doing the right thing, because in the future maybe someone will be able to check this, with AI or something, so we could go back and audit a lot of older work.

All right, so how much time have I got left? I don't even know who's my chair. They've told me it's including questions. Oh, jeez. Okay, I wanted to be in conversation by now. Okay, how many people want to see the actual AGI slides, and how many people want to start arguing already? Okay, first the AGI slides. Okay, are you all ready? You lost, I'm sorry. So, there was this DeepMind person at the last meeting I was at.
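That "easier to check than to do" point has a classic concrete instance in subset-sum: verifying a claimed answer takes linear time, while finding one from scratch is brute-force exponential. A toy sketch of the asymmetry (mine, not the speaker's):

```python
from itertools import combinations


def verify_subset(numbers, subset, target):
    """Checking a claimed answer: one pass, trivial for an auditor."""
    pool = list(numbers)
    for x in subset:
        if x not in pool:       # every element must come from the input
            return False
        pool.remove(x)          # and can be used at most once
    return sum(subset) == target


def find_subset(numbers, target):
    """Finding an answer from scratch: brute force over all subsets."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return list(combo)
    return None
```

`verify_subset` answers instantly for any claimed solution, while `find_subset` may have to grind through every combination, which is the regulator's position in a nutshell: checking the company's work is cheaper than doing it.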
She was talking about, and they did a pretty good job, all the different kinds of harms that she could think of, including things like treating AI as human-like. And her example was something like: oh, you know, if they ask it how many legs it has, it says, well, I'm a software system, I don't get to have legs, or something. And then there were malicious uses, and then the things we didn't talk about: discrimination, hate speech, and misinformation. And then she said, well, I've looked, I've read the literature, and there are only benchmarks for the first two, so maybe the other four aren't things we can police. Look, benchmarks are not the only kind of policing, right? I just told you this stuff before.

So anyway, oh yeah, I know, I was going to do it this way. I mentioned before about this FAccT thing, and the point is that this is stuff that we could choose. Sorry. Even, like, Stephen Colbert figured it out. The problem is that you train your system on the internet, right? And they all treat the internet like it's some evil thing. No, there's lots of real information there. It's not the internet that's the whole problem.

So yeah, I mentioned this before: even if we're only worried about biases, they could come from negligent engineering practices, from adversarial attacks on the system, or from deliberate engineering, you could have one person in the company deliberately trying to disadvantage somebody else. Or it could be the design. There's this thing in Idaho like that: a new government comes in and they don't like putting a lot of money into welfare, and so they just say, okay, we're going to use a machine learning system that gives less money for disability, right? So yeah, as I mentioned, the audits thing. So anyway, back to her. She's like, what can we do when there are no benchmarks?
Well, first of all, take responsibility for your product. Call it a product, right? Don't make it say "I don't have any legs"; make it say "you're interacting with a digital product of DeepMind". There's a totally different tone to that. So yeah, practice transparency early. You don't have to wait for the AI Act to come into force if you really want this stuff to work.

So I got into AI ethics because I was working on this robot, and someone had posed it to look like The Thinker, but it didn't work at all. It turned out at that point that the CPUs weren't properly earthed, or grounded if you're American, right? So it was not working, it was basically a statue, and there were all these other robots around that did work. But people would walk up and say, with that particular robot, it would be unethical to unplug it. And I'm like, well, it's not plugged in. And they're like, well, if you did plug it in... I'm like, well, but it doesn't work. But people really wanted it; they were proud. They said, we know it. I'm like, why are you so sure it's unethical to unplug it? And they're like, well, because we learned that from civil rights and feminism, you know, the most unlikely things turn out to be human. Yeah, so you think that a woman is as much like a man as a pile of motors is like a man, right? That's, like, weird.

Okay, so anyway, people really demand to have this: the AI as something they can fall in love with. Best friends, partners, spouses, equals. But they'll have complete dominion over it, right? They can turn it on and off, they can buy it. Now, people have wanted complete dominion over their partners for a long time, unfortunately. So I really think this is a feminist issue, anyway an issue of human rights. Complete dominion is not partnership.
All right. I have a conjecture that part of why there's a rise of this transhumanism idea, that these two things are affecting us very differently, is that we over-identify with the artifacts, partly because we feel like we're losing ground, and so we really, really want to think we can expand ourselves through our coding.

But Kant actually talked about this, and I don't know how much I want to get into it before the Q&A. Kant was saying, look... So I don't know if you heard about this: Descartes, well, apparently not Descartes himself, but parties of Cartesians, would light a dog on fire and say, isn't it interesting that they simulate pain? Because they were sure that emotions had something to do with the divine connection of humans, right, and the animals didn't have this. So Kant apparently didn't like this, and he wasn't sure how God felt about dogs, but he was sure that people who were bad to things that reminded them of humans were bad people. So he said: look, the dog can't judge you, so I'm not talking about that; but if you're cruel to animals, you're also going to be cruel to men, so therefore that's wrong.

Okay, so people take that to mean we have to be good to AI, give it rights, things like that. But I would say that's wrong, because there's a lot of AI that nobody identifies with, so it's only special pieces. And also there's this huge problem: if we hold AI to be human-like, there's no way that we can dissuade it with the law. Anything we put into it that says "oh, don't break the law, you don't want to break the law", you could take back out again. So I think that, since we know AI can't be dissuaded by ethics or law, we should work really hard to build AI that we don't identify with. But anyway, this is again how the law works. I'm going to skip that right now.
Yeah, people really, really worry about what robots feel, and as I mentioned before, I'm happy with the idea that AI is conscious, since you can use it to report its own experience. Fine, if that's the definition of conscious you want. But if you're worried about the phenomenology of being, like, turned off and then turned back on again or whatever, I can't tell you exactly. I think there's a little bit of phenomenology there, but it's nothing like as much similarity to us as the things that we eat. Cows feel almost exactly what we feel; you can hear them screaming when you take the calves away from the cows, which is part of the dairy process. Not to be an evangelist for veganism, maybe, but the point is they have the same feelings we have. You know, fruit flies are going to have feelings more similar to ours than the stuff we build out of digital stuff.

So this is maybe not the end of AGI you wanted; this is the end where people really think it's a person. The other end, where it's the thing that's taking over the entire world or whatever, that's what I was talking about before, when I was talking about governments and companies and how we keep control of both of those. And then finally, this thing about superintelligence learning how to learn: that's us. For 10,000 years we've been taking over the world, and that's why we have a climate crisis right now, but we are learning to learn. And so that's what I hope we're doing here.

So I've talked about all the pieces I wanted to talk about. AI is an ordinary product, and we ought to make it auditable. It isn't that law has to keep up with technology. The whole point of due diligence is that what we write in our trade books and what we say on stages at meetings like this, that becomes due diligence, that becomes the state of the art, right?
So we get to keep making this stuff better, and the logic is that one company doesn't get to ignore what every other company has decided is a good way to maintain a stable sector. Okay. And I found this again with the chatbot people: people were looking forward to the AI Act, because they were the people selling natural language components into software. They thought: we can now also sell the way that we audit our systems as one of the services that we provide to the people who buy our software, so it plugs into the AI Act, into the audits. So I hope you feel the same way. AGI isn't a threat; it's a category error. Negligence and abuse of power are the threats. And I said "the treats", oh my god, I can't believe I said that, sorry, the threats. Okay.

All right, so AI is our product, and I go back to the picture. I'm sorry, I love this picture. This is Godzilla versus the Smog Monster. Sorry about that, I forgot to change the last slide. So let's have the people who put up their hands and tried to get me not to do AGI; they should get the first questions. Go ahead. Yes.

So if you have any questions, please walk up to the mic. We still have about two minutes for some questions.

Sorry about that. Oh, thanks for a great talk. Oh, that's good. I was just thinking, as developers, I think we're pretty prone to think of law and regulation sort of as source code where humans are the interpreters instead of computers. And with source code, as it grows in complexity over time, we at some point have to refactor it and reduce the complexity. And while sitting on the sidelines and watching all this regulation coming into place, it feels like it's sort of an append-only operation: it grows in complexity over time.
I'm just wondering: when is this growth process ever going to stop? Is there any sort of incentive built into the system of regulations to reduce complexity, when it comes to AGI regulations and such?

Okay, I'll answer most of that question without the AGI part, just because it'll be a little more expedient; there are some people behind you. Yes, there's a lot of stuff for that. So for example, if you read the original white paper... so you don't have to sit on the sidelines. There were all these consultations, and there was this white paper, and like 40% of the white paper was: oh my god, liability, liability, there's this new entity, what are we going to do? There is no new entity. There are the corporations that produce the product, there are the deployers who are selling the product, and there are the end users. Those are the three parties that can be liable. And so at the end of the day, the parliament and the commission decided: okay, we're going to update our product liability law, put like two lines into that, and strip all this stuff about liability out of the AI Act. So people do do refactoring in the law, too.

It's a slow process, but you want it to be slow, because you need to be able to plan your company. So that's why it's deliberately kind of a damped process, and we're kind of running under it. But it isn't like we only get more and more law; older laws get thrown out and updated. Again, it depends on your country, but the EU is pretty good about this. And one of the things that's controversial, of course, is that sometimes we're stripping out national rules in favor of things that we've signed up to. But yeah, lots of laws are replacing each other in that process. It is remarkably similar to software. Thanks again. Sure.
Unfortunately, we have run out of the official time for questions, but I'm sure Professor Drawner would like to hang about here, and then we can chat about all the questions in the hallway track.

Should we... so this is the coffee break now? Yes? Okay, so do you want to let the guy at this microphone ask a question in the coffee break, or do we have to do the coffee break? No, we're going to hang about, okay. I'm sorry, I will stay here for everybody who wants to talk to me, and please tell me what would be cool to do the rest of the day too. But let's give a huge round of applause.