Good afternoon. I am super excited to be here in Dublin for my first EuroPython keynote, and hopefully the first of many. And I'm especially excited to talk with you about artificial intelligence and personal responsibility. In one of my roles in Berlin, I spend a lot of time speaking with product people, developers, data scientists and other investors about what AI means for the future, particularly in light of how many areas of our lives are currently being optimized by algorithmic decision-making products, in the sense that everything can be optimized for better efficiency.

To start off this talk, I came up with a slogan, and I want to ask generally: who knows where the slogan is from? "You can't save the world alone." Anyone? No one? Come again? So, I'm a big comics person; that's a hint and a half there. The slogan is actually from the movie that came out recently, Justice League. I thought it was a really fitting slogan because it's really relevant to where we are today if you look around and see what's going on in the world. If we were in a movie, I would wager that today's landscape is just about the point where we would feel the need for some kind of superhero, some heroic savior-type character, to come in and change things. We are dealing with larger-than-life digital threats. We are dealing with slow, inefficient governments that are desperately trying to keep up with many of those threats. And we're also dealing with rich and/or power-hungry industrialists who are super eager to monetize every aspect of human activity that they can. So my question is: above and beyond the salvation that might be needed at a broader scale, where do people working in the tech industry fit in? And are we the people who might actually need, if not saving, then some kind of wake-up call?

Thinking about the integration of technology into our daily lives, I thought of this image from The Matrix, because it's probably the closest description of how many of us experience AI throughout our working life and in our social time. It's the big distraction. It's the woman in the red dress that you can't take your eyes off of, that everyone is talking about, whether or not everyone understands the implications of artificial intelligence. We are hearing about AI as inevitable. We're hearing about it as something greater than human intelligence, something that everyone will eventually have to engage with. And we're also dealing, at least in Europe, with the sense that we have to engage and invest and compete to get the most innovative aspects of AI to the cutting edge, because we're in a race. We're competing against the Americans. We're competing against the Chinese. And somehow there's either a prize to be won or some overall benefit in being first to implement all manner of automation and automated decision-making systems. I think this is more of a distraction than anything else. And I think it's a really dangerous distraction, and I'll tell you why.
We, as people working in the tech industry, more so than civilians who are hearing about all the cool things technology is meant to bring, are the ones who are meant to be skeptical, who are meant to evaluate a little more thoroughly the claims and promises of any new technology we may have to implement. But what we're not often thinking about, talking about, or integrating into our processes is the downstream impact of artificial intelligence. By impact I mean the people whose data is being collected, categorized and utilized, for financial purposes, obviously, but primarily to optimize algorithms that can then make predictions and recommendations more effectively. How much do the people whose data is used for these purposes understand about what's happening? Then, on the flip side, those of us who work in technology but are not data scientists, who are not in the details of exactly what type of data is being optimized, vetted for ethical capture, tested and modeled: how much do any of us understand about the impacts of the artificial intelligence that so many companies are boasting about putting into production? We are meant to be the ones kicking the tires, trying to understand exactly what is puffery and what is real, based on the understanding that most things are built in a series of steps, by a series of different contributors, and that there's no magic behind the computer screen. But what usually happens with AI projects, I've found, is the opposite. It's not the skepticism we're used to displaying in other areas; it's something else.

Agile methodologies are awesome, as most of us who've worked in the industry for years will tell you, because they have created a way to optimize productivity, efficiency and functional effectiveness across a variety of different media and time zones. It's a way of organizing work that gets you a multifaceted team focused on one result, with visibility and flexibility into how that result is achieved. The problem is that in all of these efficient processes, the point of the project, and particularly the impact on the people behind the data that's creating these algorithms and systems, often gets lost. It gets lost because, to cite the well-known Amazon AI problem, you can be so focused on fixing the technical problem, in Amazon's case, recruiting candidates more effectively using models meant to sift out the best ones, that you move away from the ethics of the kind of sorting and classification you're doing. And depending on which area of the company you're working in, you may not have the responsibility for thinking about, reflecting on, or responding to any of the problems and surprises that crop up in the course of implementing your algorithm. For anyone who doesn't know the Amazon case I'm referring to: it was a piloted hiring algorithm, intended to sift out the best candidates, that was pulled out of its pilot phase because it was shown to be discriminating against the candidates it had treated as outliers, namely female candidates, because the historical data used for the algorithm's training was heavily skewed toward the male applicants who had been the most successful at the company before the algorithm was created.
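To make that failure mode concrete, here's a minimal sketch, entirely synthetic and hypothetical, not Amazon's actual system, of how a model trained on skewed historical hiring decisions learns to penalize a gendered proxy feature:

```python
# Minimal, hypothetical sketch of learned hiring bias (not Amazon's real model).
# A proxy feature correlated with gender (e.g. "women's chess club" on a CV)
# gets a negative weight purely because of who was hired historically.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 10_000
skill = rng.normal(size=n)                 # true ability, same for everyone
is_woman = rng.integers(0, 2, size=n)      # 1 = proxy feature present
# Historical labels: skilled men were hired; 80% of equally skilled women weren't.
hired = ((skill > 0) & ~((is_woman == 1) & (rng.random(n) < 0.8))).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, is_woman]), hired)

# Two candidates with identical skill, differing only in the proxy feature:
probs = model.predict_proba([[1.0, 0], [1.0, 1]])[:, 1]
print(f"predicted hire probability, proxy absent:  {probs[0]:.2f}")
print(f"predicted hire probability, proxy present: {probs[1]:.2f}")
# The model reproduces the historical skew: same skill, much lower score.
```

The point is not the model class; any learner fed those labels will find the same shortcut.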
So despite the functional areas that I show on this slide, the area in the lower left is the one that, in the case of artificial intelligence, is of paramount importance. It covers exactly the questions that are not the responsibility of any of these functional areas, and exactly the kinds of problems that are listed: personally identifiable data, data scraped from the web, consent for data usage and the areas in which that consent is given. Those are exactly the areas that tend to cause problems further downstream.

I mentioned that many of us got into the tech industry, and are working as builders and coders, in order to change or improve something about the world we're living in. That's not to say we're not dreamers; quite the opposite. I think many of us would love to believe that some of the amazing technologies that have come out recently, and specifically the products and services that have been created, are actually doing something cool, useful, or at least adding to the convenience of human beings. When we're talking about cool AI things, outside of the medical industry, we're not talking about identifying cancerous tumors earlier or better than humans; we're talking about slightly less weighty subjects. We tend to be the people talking about how incredibly fast and how incredibly big the change is: algorithms that ten or even five years ago would not have been able to make the recommendations Netflix makes, or home concierge systems that let you do everything, start everything, engage with anyone, from your car, your refrigerator, various appliances and areas of your home or office, things that previously existed only on science-fiction television shows.

But sometimes, and this is the point I'm getting to with this highlight on the convenience aspect of many AI technologies, pretty harmless data and a pretty harmless optimization can lead to a not-so-harmless implementation, based on how the commercialization of the data changes when you incorporate government or local agencies that don't have any particular financial interest, but also don't necessarily respect the idea of consent, or the idea that data is attributed to and connected to individuals who have some ownership of the information they're sharing. Has anybody heard about the ICCL case from just a month or two ago that has been publicized all over? If you haven't, it's a super interesting case, because it's an example of the difference between the kind of data anybody can find online and that data being used by quasi-governmental and private-sector interests to monetize. The complaint was submitted in May by the ICCL, an Irish civil liberties organization, after an investigation into the use of Irish census and postal data and how it was being sold to third parties, multinationals like Experian, for the purpose of, and I quote, "geographic lifestyle profiling."
That geographic lifestyle profiling algorithm was also investigated by the researchers who filed the complaint, and what they found was a range of labels being used to assess credit applications and various other financial services that were being offered to, or refused to, Irish residents based on information pulled from the census records and the postal service. The really interesting aspect of this information flow was that no one in Ireland knew it was happening. Until the investigation and the complaint were filed, no one knew what kinds of labels and classifications were being applied without their knowledge, on a widespread basis, to essentially everyone resident in Ireland. No one knew they were being classified in the ways described: affluent, deprived; their marital status, cohabitation status, cultural background and labor market skills all being incorporated into one or more algorithms that private-sector companies were using to determine who was eligible for which financial and other benefits. So the no-one-the-wiser aspect of what was going on in Ireland is really worth keeping in mind, because that's a key part of how this geographic profiling algorithm went wrong. What we're talking about is data exchange versus data extraction, right? Because AI requires massive amounts of data to create models and then train them to be more reliable, consent becomes paramount, even if we're talking about millions and millions of data points, because those data points represent people, and they may represent very sensitive aspects of people's lives and experiences.

In comics and in movies, where I started off this talk, you usually get to a turning point where things get so terrible that it's clear something has to give. In real life, unfortunately, that's not how it works. In real life, you slowly start to realize, based on the experiences of other people, that bad stuff is happening, and slowly you might start to clue in that the people those bad things are happening to may be closer to you than you thought, may be more like you than you thought. That's where I'm going with the next aspect of AI: classification, and ultimately dystopia. Most of you, I'm assuming, have heard that the US Supreme Court recently overturned the Roe versus Wade judgment, and what that has meant in the many states that immediately moved to ban abortion. What you might not be aware of is that Google has since declared that it would remove any location data of its users that involves clinics providing abortion services: it would sift out the data from people's phones and web browsers showing that they've visited, checked the distance to, or otherwise been in contact with abortion clinics in the United States, anywhere that could be searched from their history. When I was discussing this with a female developer friend, she asked: why? It's great that Google has done the right thing and decided not to hand this type of information about its millions of users in the United States over to US local authorities. What's not great is why they have so much data, in so much detail, in the first place.
And that makes me think about what it means for any company, any government, any entity whatsoever to hold on to, let alone actively collect, mass amounts of data without your explicit consent: not just your consent for the data to be collected and retained, but your consent for the data to be passed to local authorities, international authorities, or anybody else. Where I'm going with this is what the mass collection of data in and of itself, regardless of who's doing it, tends to suggest. Now that Black Mirror has been on the air and has come and gone, no one will be surprised that China is already working on gait recognition AI, and has been optimizing it, essentially perfecting it, to the point that it is already in use in many cities, running undetectably on CCTV footage that can be operated on pretty much at will, without the subjects knowing. Again, the operative phrase here is "without the subjects' knowledge." If you're able to take video footage and run gait recognition software intended to identify people at 50 meters' distance, what are you really saying about how you want to treat people as individuals? Because guilty until proven innocent seems to be the direction this is going in. That kind of use of mass-collected biometric data, with a presumption of guilt unless there's a match proving otherwise, essentially turns the algorithm identifying people by their gait into the decider, right? It's no longer a human decision; it's a tool being trusted to recommend a match or confirm that there is none. This would be bad enough if it were only happening in China. Of course, it's not only happening in China, and there are many, many examples, among them a Czech company here in the EU, not the only one, but one of several, that has received EU funding to optimize gait recognition software, again to push the likelihood of a match up to and above 90, 96 percent, all of it to be done at a distance, for purposes you can only guess at.

The other thing you can ask yourself is: what happens when these tools, these algorithms we've decided are so much better than humans at mapping data, at figuring out and confirming identities, are wrong? We have examples all around us. In the UK, no longer an EU member, there are facial recognition trials ongoing in London this very week. This is despite the fact that everybody's seen Coded Bias, and everybody's heard, I hope, about the Gender Shades project, Joy Buolamwini, Timnit Gebru, Deborah Raji, showing that this facial recognition technology is not reliable and does not work equally well on everyone, which led IBM, Microsoft and Amazon to decide they weren't going to deal with all of the negative press anymore and to halt or limit the sale of their facial recognition tools to law enforcement, because clearly there's a discrepancy. Notwithstanding all of that, this was just this week in London.
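It's worth pausing on the mechanics, because the arithmetic explains the wrongful matches. These systems compare a face or gait signature against a watchlist and declare a match when a similarity score crosses a threshold. Here's a minimal back-of-the-envelope sketch, with numbers I've made up purely for illustration, of what a headline accuracy figure means at city scale:

```python
# Back-of-the-envelope sketch (illustrative numbers, no real system) of why
# a "96% accurate" matcher still produces mostly false matches at city scale.
scans_per_day = 1_000_000     # faces/gaits passing the cameras in a day
wanted_present = 10           # watchlisted people actually in the crowd
true_positive_rate = 0.96     # the headline accuracy figure
false_positive_rate = 0.01    # even a 1% error rate on everyone else...

true_alerts = wanted_present * true_positive_rate
false_alerts = (scans_per_day - wanted_present) * false_positive_rate

print(f"genuine alerts: ~{true_alerts:.0f}")    # ~10
print(f"false alerts:   ~{false_alerts:.0f}")   # ~10,000
share_correct = true_alerts / (true_alerts + false_alerts)
print(f"share of alerts that are correct: {share_correct:.1%}")  # about 0.1%
```

So when an operator sees an alert, the odds are overwhelmingly that it's wrong, which is exactly the situation where trusting the algorithm over the person in front of you does the most damage.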
When the technology doesn't work, or doesn't work as it's meant to, the human tendency is to trust the algorithm, and this is why you see posts like this one, and can expect to see more: everyone around could see that this was wrong, the error was clear, and yet it was the algorithm that was trusted, not the subject, and the woman cited in this case got arrested. Behaving as if is a huge problem with relying on algorithms and AI, which is not so intelligent, because behaving as if means assuming the algorithm is correct.

In terms of the different types of algorithmic software on the market, the worst is not facial recognition, and unfortunately the worst is not gait recognition from afar. We have plenty of contenders, in and outside the EU, working on voice behavioral analytics. What are voice behavioral analytics about? In theory, you may see the bright, shiny version: optimized algorithms that make your call centers work much more effectively, that may even replace the humans in your call centers because they're so effective, and because people supposedly like engaging with these synthesized algorithmic chatbots so much more than with real humans. In reality, voice behavioral analytics AIs tend to be applied to a range of spurious uses: identifying mental disorders from someone's voice, determining the best candidates for a job or a candidate's success rate from their voice, building prediction algorithms that determine the likelihood of credit default from a caller's voice. These are real things, real algorithms. Whether they work is in question, but they are for sale on the market today, and although most of the ones I've been able to find are produced outside the EU, they are by no means being sold only by far-away, non-EU countries and companies.

If anybody knows Sorry to Bother You, an indie movie from a few years back, you'll remember that a key point of that movie is that faking your vocal pitch, tone and cadence is already a way of life for anybody who's ever faced discrimination on the basis of their accent, their dialect, or even having too much vibrato in their voice. And yet voice analysis for character and trustworthiness is still being sold at an incredible clip. But that's not all. As anyone paying attention to the headlines knows, Zoom has just recently decided, alongside Faception and Uniphore, market leaders in their class, that emotion tracking is the next horizon now that the pandemic has moved everyone online. So despite the fact that everyone who's ever sat in back-to-back meetings understands that your facial expressions may have little to nothing to do with your actual emotional state, this is still a thing. And it's likely to remain a thing once Zoom recalibrates and either decides to overrule the complaints and protests put forward by Fight for the Future and other human rights organizations, or cedes the ground to other companies that are making great income from rolling out this kind of software, purporting to identify age, emotional state, and more.
There are some questions as to whether cultural background affects the validity of these AIs, but that's certainly not slowing the pace at which they're being sold, which is why Fight for the Future, for example, called emotion-tracking AI, and I quote, "discriminatory, manipulative and based on the flawed assumption that markers such as voice pattern, facial expressions and body language are uniform for all people." Just a comment here: if you for any reason can't imagine what it would be like for your looks and your facial expressions to determine how you're treated, talk with a woman, or an ethnic minority, or a trans person; we'd be really happy to educate you on what that's like. It's not great.

So what can we do? All of the examples I gave, and hopefully all the ones you can come up with on your own, suggest one thing about the direction artificial intelligence is going in Europe and around the world: we're looking at a future of continued mass data capture, and it's gonna be very difficult to slow that trend. We're dealing with private and public agencies and companies of all kinds that are trying to create a reality of smart homes, smart watches, smart glasses, smart cars, smart clothing, autonomous "just walk out" stores. And despite the best efforts of digital rights people to band-aid these issues with personal responsibility, use a private browser, get a VPN, only message people on encrypted apps, use data-poisoning tools like Fawkes to cloak your images so you can't be a victim of data scraping from the web, it's not enough. All of these things work to some extent, but you can't put the responsibility on individuals.

Luckily for us, we have one model that works. Does anyone remember working on web development teams four years back, when GDPR came into effect, and all the grumbling and mumbling about how we're gonna have to add cookie consent tools, and why do we need them anyway, and how is it gonna make a difference when we're already so far into the digital age and everybody's data is already out there, and blah, blah, blah? This is pretty much the next stage of that. The EU model that was so trashed four years ago when it came into existence is pretty much working now, at least in the EU, and it's the forerunner of the kind of extended regulation that we can expect, and that we're gonna need, if artificial intelligence, at least in Europe, is going to look different from the examples I've been giving. The EU also has a secret weapon. Whether or not you're legal buffs or have been following the latest European Court of Justice news, everybody should have at least a vague understanding of the Austrian privacy activist Max Schrems, whose two legal cases, from 2015 and 2020, are referred to as Schrems I and Schrems II and have set precedents that mean everything in terms of web data: where it is sent, who has access to it, and what they can do with it. Max Schrems is a real person, and his 2015 case basically challenged Facebook for sending data collected in the course of normal Facebook use by any person in Europe from the company's servers to the United States, where Facebook is headquartered, so that any US authority, local or otherwise, could make use of that data to surveil EU residents.
The positive judgments in Schrems I and, later, Schrems II in 2020 have basically laid the groundwork for what the EU is trying to do, in part, with the Artificial Intelligence Act. In case you're wondering whether it's actually possible to regulate artificial intelligence, given how broad and all-encompassing some applications of the technology are, never fear: the conversations are happening now. The European Parliament has thousands and thousands of amendments and suggestions on the table that may or may not make it into the act in some form, but in 2024 it's happening. It's gonna be law, just like GDPR, and all businesses, governments and individual entities will have to get on board. What's happening at the parliamentary level right now is this pyramid, which you've probably seen in some form before. It tries to broadly characterize different types of AI at different levels of risk, which will involve different degrees of control, or the ability of private and public entities to engage in sandboxed testing and modeling activities if they have some kind of framework for doing so. Ideally, it will remove the possibility for companies to do what they're doing now, which is simply decide to roll out some automated decision algorithm, pull data from wherever they have data available, allow one or several third parties to contribute to labeling and tagging that data, test it somehow, and roll it into production before there's any regulation. That's gonna stop.

One concept that's gonna be really important in the future is imposed versus consensual AI. The difference between the two is agreeing to provide, for example, biometric data so that you can verify your identity on your phone, or in a specific biometrically protected space, versus providing biometric data such as your fingerprint or a photo of your face during a behavioral interview, where that image is then scanned and assessed by an algorithm you know nothing about. The EU AI Act is gonna change many things in terms of what's banned. This is hotly discussed at the moment in many, many policy and government quarters, but it's essentially not being discussed in industry, and that's one of the reasons I wanted to include it here. What's very likely to be banned outright: emotion recognition AI; automated behavioral monitoring of all kinds; health insurance processing algorithms that determine what you are and aren't eligible for based on sensitive private health data; and private biometric databases consisting of photos, videos or any other personally identifiable information scraped from a web interface without your explicit consent, for purposes unknown to the original holder or owner of the data.

So how can non-data tech team members respond to this? First and foremost, I would love it if everyone recognized that the current state of affairs is not hypothetical, not a pipe dream, not a far, far away thing that happens in other countries; it is reality right now. And the situation among the most cutting-edge researchers working on computer vision and emotion recognition is dire.
The headline on this image is taken from a report from last week, I believe, on the big CVPR conference, the Computer Vision and Pattern Recognition conference, and the outcome of that conference was that the vast majority of researchers said: it's not my job to worry about the impact of the cool algorithms I'm trying to implement in computer vision. I'm not responsible; that's for those actually working in industry. I'm not responsible for how this data is used, or whether it's incorporated into a system of surveillance. So we're looking at a situation where we have all this data available, we have the most cutting-edge technologies being applied to that data for analysis, assessment, comparison, projection and recommendation, and the experts are saying: it's not my job.

We need more people to get involved, and maybe not the people who officially hold the title of chief data officer or data scientist. For everybody who's not one of the data-specific team members, we need you to understand that this is rather worse than the cookie-consent-tool problem from the web days of 2018. This is about how we even define meaningful consent when the kinds of data being collected and manipulated are ones that none of us would necessarily consent to if we knew how they were going to be used later. This is a situation that calls for everyone to be a little bit braver and a little bit more curious, because at the end of the day this is brand-new technology, and most executives have very little understanding of its downstream impact, and they're not incentivized in any way to care. The same is true for most tech teams, including the data teams actually working with the models and recommending the algorithms. We know from working in the industry that tech is fallible and automations fail. We know from the headlines that they often fail on the kinds of people who are least likely to hold power in the tech industry, and on the people who are least likely to be listened to when they call out the fact that these optimized algorithms were not optimized for non-binary people, for trans people, for Black people, for people who are somehow not the target audience. When you're working in a team, a company, an area that's talking excitedly about algorithms and about optimizing something for faster decision-making or more money, don't assume that others have asked the questions you would ask. Do assume that if you have a perspective that gives you some insight into what might go wrong, you should at least be able to satisfy yourself that some degree of thought has gone into assessing what might go wrong, and where there might be bias in the data that's being collected and optimized to produce the algorithms going into production.
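One concrete question anyone on the team can ask is: show me the metrics broken out by group, not just the average. Here's a minimal sketch, with made-up column names and toy data, of what that disaggregated check looks like:

```python
# Minimal sketch of disaggregated evaluation: the same metric, per group.
# Toy data and hypothetical column names; any predictions table works the same.
import pandas as pd

df = pd.DataFrame({
    "group":     ["A"] * 6 + ["B"] * 4,
    "label":     [1, 0, 1, 1, 0, 0, 1, 0, 1, 1],
    "predicted": [1, 0, 1, 1, 0, 0, 0, 0, 0, 0],
})

df["correct"] = df["label"] == df["predicted"]
print(f"overall accuracy: {df['correct'].mean():.0%}")   # 70% -- looks OK
print(df.groupby("group")["correct"].mean())             # A: 100%, B: 25%
# The aggregate number hides the fact that group B is failed most of the time.
```

If nobody can produce that breakdown, that in itself is your answer.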
For longevity-minded tech people who want to be in this industry for the long term: you may not have gone through some of the vicissitudes that those of us who've been in the industry a lot longer have, like me, since the last time the dollar was at parity with the euro, the firings, the non-stop fundraising, the pivots to fix product strategy after a scandal. The next year and a half is gonna involve a lot more than hype: a lot more resetting, and investigation into a lot of the AI claims that up until now have pretty much guaranteed a windfall of money and a lot of public interest and marketing attention. As an investor, I can tell you that investors and risk are like oil and water; they don't mix. And a lot of things that were less interesting and important in the past, such as environmental, social and corporate governance (ESG) principles and practices, are gonna become a lot more important now that we're facing an economic downturn. You're gonna get a lot more questions about the environmental impact of your AI-optimized algorithms that are eating up so much energy; about your social impact, if you're potentially affecting the fate of parolees being evaluated on whether their emotions signal remorse to an algorithm, which is a real thing that happens in the United States; and, in terms of corporate governance, about whether you have any kind of ethics, data privacy, or meaningful-consent rules in place at your company, large or small. These are the things investors are gonna care about, which means these are the things companies are gonna care about, more than ever.

Finally, why should everybody working on any kind of team, product, tech, infra, you name it, care about what happens with the AI Act and about the direction artificial intelligence is going in? Everyone should recognize at this point that AI is being deployed in a way that is not limited to one superficial area or use. There are meetup groups meeting every week to talk about who's responsible and what's gonna happen if the technology fails or the algorithm does something objectionable. This is not going to be the same as GDPR, where you had lots of opportunities to kick the can down the road when it came to responsibility. It was very easy to point to the legal officer, or whoever was in the compliance department, and say it's their problem, they have to figure out what to do about this web data privacy issue; or, now that we have a lot of chief data officers, to put it on the chief data officer and let them figure out who's responsible. The reason the meetups I cite on this slide in Berlin are 90, 100 people strong is that a lot of developers are asking themselves: am I gonna be the individual contributor who's held responsible down the road for the algorithm I just coded because I was told that's what the product people and executives wanted? So hopefully everyone understands that this is a very different prospect than consenting to the use of your web traffic information. Even if levels of consent for biometric, personally identifiable, or sensitive gender and sexual identity data could be defined, there are so many different ways that information can be combined, sold and used to create recommendations that a one-click "yes, you can use my data" option is very unlikely to be the model in the future. It's very unlikely to be a situation solved by one set of regulations or by one representative in a company. It's gonna have to be a collective activity, and that brings us to my suggestion for how we think about solving this AI dystopia problem going forward. In my opinion, the idea of accountable technology is really about collective accountability.
It can't be anything else. Going back to the Justice League case I cited at the beginning, "Justice for All," "Unite the League": I would love it if you would take away from this talk the idea that you should be trying to unite all interested, all concerned tech workers, because that's what we're left with. We need more people deciding to take responsibility, and deciding not to contribute to the development of surveillance technologies. And while it's true that you can't save the world alone, no one can, we need more people to step up in lieu of the experts who are busy saying it's not their job. We need more people to step up and say: we're gonna be bold and we're gonna be loud; we're gonna ask questions; and we're gonna reject the kinds of technological futures that we don't wanna live in ourselves. Thank you. Thank you very much.

Thank you for the wonderful keynote. We have time for questions. If you have a question in the room, there's a microphone right here. Okay, we have one, so please ask your question.

Hi, thank you for a wonderful talk. First of all, I completely understand and believe that marginalized groups are facing bias and headwinds, including from police officers and parole officers and bankers and juries and judges and so on. I used to believe that even though it's going to take hundreds or thousands of years to fix human culture, with the right regulation AI could be more ethical than humans. Do you still believe that, or do you really believe that there is no hope for AI either?

Thanks for the question, it was a good one. I don't think it's fair to pose the question of whether AI, which is really a tool, right, is good or evil, can be saved or cannot be saved. I do think that if we look at AI as a tool like any other technological helper, we have to be prepared for AI not only to fail the groups it isn't tested thoroughly on, but also to be misused, to be used in ways its developers did not intend and did not think of. It's like writing an essay: good writing instructors will tell you that anything that can be misunderstood will be misunderstood. That's pretty much what I think about AI. Anything that looks like surveillance and would be really useful for surveillance probably will be used for surveillance, discrimination, elimination of different groups, because if the possibility is there and the tool provides it, someone's gonna use it.

Yeah, thank you for the talk. I want to question a little bit the order of the presentation. I've been working as an AI practitioner for five years, in quite responsible companies with quite responsible teams, and GDPR actually gives a lot of support for developing AI systems. I felt that you introduced GDPR quite late, and that you marginalized it to the web domain; it doesn't really apply only to the web. That's maybe the reference people know best, but if you look at it, it really applies to AI systems, and the AI Act you talked about, which I wrote a little piece about, is an extension of it, not a complete rethink. So why did you wait so long to introduce the regulation angle?

I waited to introduce the regulation angle because I think that most people who are not working at the intersection of tech policy and tech production tend to think of regulation as something that's not for them.
I wanted to ensure that everyone was paying attention and understood that this technology can and does affect everyday people before getting into what this regulation is about, when it's going to take effect, and what you need to know about it if you want to effectively question, and ideally confirm, the kinds of practices your companies are using with regard to their AI.

All right, thanks.

Thank you for the amazing talk. I just want to ask if there are any groups, associations or organizations that you can recommend, so we can join forces and spread awareness of AI misuse. Thank you.

Sure, in the EU context there are quite a number of organizations. The ones that come to mind first are EDRi, European Digital Rights, which is incredibly active, a pan-European organization based in Brussels; and the Digital Freedom Fund, a strategic litigation organization based in Berlin that does the type of strategic litigation the Max Schrems cases represent and is always organizing events for awareness about digital rights and surveillance tech. There's another group organizing an event next month, sorry, in September: the Color of Surveillance, and you can look that up, not the US one, but the EU edition taking place in Amsterdam. Those are just the first three that come to mind; there are organizations based in every European country I can think of, but as an umbrella organization EDRi is probably the best, because they're doing quite a lot on digital rights generally and on artificial intelligence as it relates to them.

So we have time for two more questions, and maybe remote questions; I'm not sure if we have remote questions.

Thank you for your talk. Surprisingly, it gave me a lot of hope for the future, but I have one, I would say, ethical-philosophical problem, because humans are also biased, and humans also make mistakes. Take the example, it might not be the best one, but it's the best I can think of right now, of a parole decision. What if we came up with an algorithm that, well, isn't perfect, but still performs better than humans? What's your opinion on working with those cases?

I think you always have to look at the effectiveness of an algorithm that's making a prediction, usually from historical data points, in terms of putting it into application in your own life. When you say it's possible to come up with an algorithm that works better than human decision-making: would it satisfy you, if you were a parole applicant, to be given the judgment of some algorithm that's the latest of its kind? Or would you actually be interested in some kind of fairness that isn't derived from other parolees who look like you, or who don't look like you, which is what an algorithmic solution to parole applications would involve?

I think this is really hard to answer, and this is why I started with the point that humans are biased too. It's not something I can quite get my mind around, but you've given me a lot to think about. So thank you.

Sure.

Thank you for your talk, I think it was very inspiring and a great talk. On the topic of data collection, and thinking about IoT and more data sources: a lot of the time the data is semi-obscured or anonymized. I was wondering what your thoughts are on that, and whether we're gonna see more restrictions, or maybe taxes, on data collection in the future?

Thanks for the question.
I'm also interested in IoT and how it relates to the upcoming AI Act, and in what kinds of protections may be in place for anonymized data. I think it comes down to the classification of the types of data and what the intended uses are. Consider the difference: collecting data from your car about what drivers typically do in a given circumstance is fine, and it's obviously going to stay fine regardless of, you know, AI Act provisions. What happens with the data about what you're doing while driving your car, or what you're scrolling through on your smartwatch, only becomes problematic when you combine different types of data and the use the data is put to in the end is not the use for which it was collected: when you're combining data sources, leveraging data that a human explicitly consented to for something other than the purpose it was provided for. So I think the AI Act is unlikely to change IoT devices being in use, being useful, or being consented to by individual consumers. It is likely to change the fact that, as we've seen in the United States, period tracker apps, or health apps that collect menstruation data, are suddenly fodder for local law enforcement, or for anyone building apps to collect this kind of data and provide it to law enforcement for monetary reward. That's where the overlap is: where does the data go, and what was consented to? As long as it's point-of-sale, point-of-service transactional data that's been consented to, it's probably not going to be a problem, as long as it's not biometric data that is personally identifiable.

Thank you so much for coming to EuroPython. Thank you so much for this wonderful keynote. Thank you.