talk to us today. So Anna's talk, well, the title, is called "Artificial intelligence at the EU borders: ethical implications of technological and political worlds." I'll let Anna explain more about what her paper is as she presents. Anna is a research associate in computer science at the Department of War Studies here at King's College London. Her research has explored the performance of computational linguistic models and the design of ethical, transparent and fair machine learning classifiers. Concerned about the impact that artificial intelligence can have on vulnerable communities, her interest lies in investigating how governmental actors are implementing it. Within Security Flows, Anna works on the development of digital methods to better understand the production of data and technology for border security, and she also bridges the gap between computer and social science. So: fascinating, really important interdisciplinary research, and I'm looking forward to hearing more about it. Anna has kindly agreed to speak for 40 minutes, so we get to learn in detail what her work is about, because there won't be a discussant for this talk. But please, as audience members, ask those important questions; I'll be taking diligent notes to put to Anna at the end of her presentation as well. So without further ado, Anna, I'm going to pass the virtual floor over to you. Thank you so much.

Yeah, thank you, Amanda, and the School of Security Studies for inviting me to this event. I'm very honored to share with you today my first contribution to Security Flows on artificial intelligence at the EU borders. This work is being developed under the Security Flows project, an ERC Consolidator grant project led by Professor Claudia Aradau.
It develops a novel interdisciplinary research agenda to better understand the process of datafication at the border and its epistemic, political, practical and ethical effects. I've organized this talk as follows. First, I will introduce the context and motivation of this talk. Then I'll explain the main elements related to identification at the EU borders, and after that I'll present the main AI-based solutions implemented nowadays for migration control. Then I'll present the methodology we have developed in Security Flows to understand the datafication of these borders by analyzing public procurement documents of two well-known EU agencies. Finally, I'll reflect on the ethics of this technology and end with the conclusions of today's talk.

So we will start with the introduction. Nowadays we're witnessing an increasing use of artificial intelligence at borders for migration control: multimillion-euro projects to connect databases that contain travelers' information, biometric systems to identify and verify asylum seekers' identities, or even facial recognition technologies to detect deception on travelers' faces. Focusing on the EU context, Frontex and eu-LISA are the main agencies regarding the datafication of the EU borders. On the one hand, Frontex was established in 2004 for the border control of the European Schengen area. It has created the first EU uniformed body, and its budget has been increasing consistently in recent years; the budget for the current year is 543 million euros. On the other hand, eu-LISA was established in 2011, seven years later than Frontex. The goal of eu-LISA is to ensure the operation of large-scale IT systems within the area of freedom, security and justice. Basically, eu-LISA is designing and implementing all these large database systems that contain information about migratory movements.
In 2020, so last year, it announced the most expensive contract in its history: a contract valued at 300 million euros to design a database that is going to track all the border movements of non-EU citizens. It is worth noticing that these agencies operate together; they collaborate in designing these smart borders for the sake of securitization and efficiency. So, despite the large amount of money that the EU is investing in AI and identification technologies, I question today whether borders are nowadays smarter than before implementing all this technology. And are these borders that efficient? Is technology actually improving or enhancing the efficiency of border control?

Focusing on identification at the EU borders, we have to take into account the Dublin Regulation. The Dublin Regulation is a key aspect of the identification process at the EU borders. This law determines which EU member state is responsible for the examination of an application for asylum, and it basically constrains the freedom of movement of asylum seekers who arrive in Europe. Under this law, EU member states are required to fingerprint migrants so that they can then be identified if they travel to other countries. So imagine that you're a migrant who arrived in Spain: the police officers in Spain will fingerprint you, and this fingerprint will be stored in a database. Then, if you claim asylum in Spain, under the Dublin Regulation Spain is in charge of your asylum case. Now imagine that you have some cousins in France and you want to travel to France. Under the Dublin Regulation this is considered an illegal movement, because if the French authorities find you on French territory, they're going to fingerprint you, and they will look for your fingerprint in this database, to which all EU member states have access.
So if they find your fingerprint in this database, they can deport you back to Spain, because under the Dublin Regulation, as I said before, Spain is responsible for you. You cannot leave the country where you're claiming asylum.

So which are these databases? In the EU context we have different databases that contain different information. The most relevant in this case is EURODAC. EURODAC is the database that gathers information on all asylum seekers who arrive in Europe. It contains biographical information: names, surnames, date of birth, country of origin. It also stores fingerprints, as I said before. Nowadays they store fingerprints of individuals aged 14 and over, but they are planning to gather fingerprints of children as young as six. The EU is also planning to store face images, so that apart from fingerprint recognition they can also identify individuals through facial recognition. So this is EURODAC. Then we have the Visa Information System. This other database contains information on all the individuals who are applying for a visa in Europe, and similar to EURODAC it contains biographical data like names, surnames and date of birth, plus fingerprints, and the EU is planning to implement facial recognition in this database too. We also have the Schengen Information System, which contains information regarding criminal activities. It contains images, for instance, of individuals who are related to a criminal investigation; it also contains genetic data, fingerprints, and information on objects, for instance a car, or tattoos, et cetera. And then the next database that I wanted to present today is the Entry/Exit System, which is going to be enforced next year.
The EU planned to implement this system this year, but it has been delayed. We still don't know why; sometimes they say it's because of COVID, sometimes they say it's because implementing all this technology is not as easy as you might think. The Entry/Exit System is going to gather information on all non-EU citizens who cross the borders. In that case, the authorities will know when your visa expires and whether you're staying illegally in the European Union.

So how are these databases impacting people's lives? In this slide you can find text that we found in a document from an asylum application. In this case the judge said that the asylum seeker's application was denied because the judge found his fingerprint in EURODAC showing that he was fingerprinted in 2016, while the asylum seeker said that he left his country of origin in 2017. Having found this incongruence between the narrative of the asylum seeker and the EURODAC information, the judge said that this evidence undermines the credibility of his entire account. The result was that the judge denied the case. We also found this other case, in a published paper investigating how, in Germany, these systems were also impacting migrants' lives. The authors witnessed a case where a pregnant woman was applying for a visa to stay in Germany. What happened is that the authorities found a very similar instance in the VIS system: they fingerprinted the pregnant woman and found a similar fingerprint in the system, but the names were slightly different. She said her name was Marie V., but the name that appeared in the database was a slightly different variant.
So they started interrogating her, despite her health status as a pregnant woman. In these two cases we are identifying, analyzing and examining how these database systems are impacting asylum seekers and migrants.

But what is artificial intelligence, and how is it used and implemented at the border nowadays? Before starting to analyze AI at the border, I wanted to bring up this question: what is artificial intelligence? To respond to it, I would first like to ask: what is intelligence? In the literature there is no consensus about what intelligence is, because we know that there are several different types of intelligence. So if we are not able to define what intelligence is, how can we define artificial intelligence? However, the European Commission's group of experts on AI came up last year with a definition of artificial intelligence. It says that artificial intelligence systems are systems designed by humans that act in the physical or digital dimension through data acquisition, process this information, and decide the best action to take in order to achieve a given goal. I have some concerns with this definition. First of all, the goal that the artificial intelligence has to achieve doesn't need to be complex; it can be a very simple goal. Secondly, artificial intelligence cannot perceive the environment; it just analyzes data. It's true that data can be a digital representation of our environment, but the system doesn't perceive the environment the way a human being or other animals can perceive it.
And finally, reasoning: artificial intelligence cannot reason. Artificial intelligence is just based on mathematical formulas that try to optimize a problem in order to achieve a goal. It doesn't reason; it's just mathematical computation that computers can run in a very efficient way. Recently there has also been controversy around this concept of artificial intelligence, because we are becoming aware that artificial intelligence is neither artificial nor intelligent. It's not artificial because, first of all, it's made of natural resources: in order to implement artificial intelligence you need a computer, to have a computer you need natural resources, and you need to go into the ground to extract these natural resources to build the computer where you're going to implement this artificial intelligence. So it's not very artificial, because it depends on the earth. And it's not intelligent because, as I said before, it just runs mathematical computations. If we consider that computing all these mathematical functions is intelligent, then we could agree that artificial intelligence is intelligent, because it can solve these mathematical formulas. Otherwise, if we consider intelligence as, you know, social intelligence or musical intelligence, we cannot consider artificial intelligence to be intelligent by design.

I also wanted to bring up this graph, which clearly shows the different key concepts around artificial intelligence. Artificial intelligence, as you might know, is a subfield of computer science. Within this field we can find machine learning, which aims at analyzing patterns and trends in big datasets. Then within machine learning we have another subfield of artificial intelligence named deep learning.
Deep learning is basically when you take a machine learning algorithm but make it a bit more complicated: you implement more layers in the algorithm in order to, for instance, analyze an image. Classic machine learning algorithms are usually not able to analyze images, text or other types of data, while deep learning was designed specifically to analyze these other types of data. In this specific subfield we can find biometrics; most of the biometric systems that are implemented nowadays are based on deep learning algorithms. Then, next to artificial intelligence but not within this subfield, we can find data science and big data. Data science is a field, also within computer science, that focuses on designing approaches to deal with data. Within it we can also find data engineering, which is focused on how you design your dataset so that an algorithm can analyze the data. And then we have this other field, big data. I mention this because sometimes we confuse these three concepts (AI, data science and big data), and I wanted to show here that they are slightly different concepts. Big data focuses on designing approaches that can process a big amount of information. When you have a very large dataset, classic machine learning algorithms, or other types of algorithms, usually cannot process all this information in just one run. With big data approaches, what you can do is divide this big dataset into smaller sets, implement the algorithm in a parallel structure, and then combine the analyses at the end. So these are basically approaches that divide a big dataset into small datasets so the algorithm can be applied there. So which projects, which AI-based projects, are nowadays used at the EU border?
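Before turning to the projects themselves, the divide-and-parallelize idea just described can be sketched in a few lines of Python. This is a toy illustration, not any specific big-data framework: the dataset, the chunk count and the per-chunk function are all invented for the example.

```python
from multiprocessing import Pool

def process_chunk(chunk):
    # Stand-in for any per-chunk analysis: here we just sum the values.
    return sum(chunk)

def split_into_chunks(data, n_chunks):
    # Divide the big dataset into roughly equal smaller sets.
    size = (len(data) + n_chunks - 1) // n_chunks
    return [data[i:i + size] for i in range(0, len(data), size)]

if __name__ == "__main__":
    big_dataset = list(range(1_000_000))        # toy "big" dataset
    chunks = split_into_chunks(big_dataset, 8)  # 1. divide into smaller sets
    with Pool(processes=4) as pool:             # 2. analyze chunks in parallel
        partial_results = pool.map(process_chunk, chunks)
    total = sum(partial_results)                # 3. combine at the end
    print(total == sum(big_dataset))            # prints: True
```

Frameworks built for this pattern (the map-reduce family) do the same thing at cluster scale, with fault tolerance added on top.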
In this document, which you can find online (I share the link), released this summer and analyzing AI at the EU borders, we can find a broad summary of the different solutions that are implemented nowadays. Some of them have been canceled because of ethical concerns; some of them have been challenged, like iBorderCtrl, a project funded by the Horizon 2020 scheme in which they were developing technologies to make borders smart. In this talk I wanted to focus on biometric systems. Biometric systems are, as I explained before, a field of AI, because they use deep learning algorithms, and they work as follows. You have a dataset with a lot of, for instance, fingerprints, and what you want to analyze is the similarity between fingerprints, so that if you get a very high similarity score, these two fingerprints might correspond to the same individual. Once you train this system, when you have a new fingerprint you can compare it with all the fingerprints you have in your database. The system has to learn how to extract the patterns, how to analyze the similarity between the two images, and finally it outputs a similarity score. Based on this score: if it's very high, the system will say these two fingerprints belong to the same person, a genuine attempt. When the system gets a score that is very low, below a threshold, it says that this is an impostor case, which happens when the two fingerprints belong to different persons. But we have to realize that algorithms are not 100% precise at all; there will always be a percentage of error. The errors that biometric systems commit can be classified into two types. The first are false matches (FM).
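Before going into the two error types in detail, the matching logic just described, a similarity score compared against a threshold, can be made concrete with a minimal sketch. It assumes fingerprints have already been reduced to numeric feature vectors, and the cosine similarity measure and the 0.8 cutoff are both invented for illustration; real biometric matchers use far more sophisticated scoring.

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors, in [-1, 1].
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def decide(probe, reference, threshold=0.8):
    """'genuine' if the similarity score clears the threshold, else 'impostor'."""
    return "genuine" if cosine_similarity(probe, reference) >= threshold else "impostor"

def error_rates(scores, same_person, threshold=0.8):
    """FMR: share of different-person pairs wrongly accepted (false matches).
    FNMR: share of same-person pairs wrongly rejected (false non-matches)."""
    false_matches = sum(1 for s, same in zip(scores, same_person)
                        if not same and s >= threshold)
    false_non_matches = sum(1 for s, same in zip(scores, same_person)
                            if same and s < threshold)
    n_impostor = sum(1 for same in same_person if not same)
    n_genuine = sum(1 for same in same_person if same)
    return false_matches / n_impostor, false_non_matches / n_genuine

# An identical pair scores 1.0 -> genuine; orthogonal features -> impostor.
print(decide([0.2, 0.9, 0.4], [0.2, 0.9, 0.4]))  # prints: genuine
print(decide([1.0, 0.0, 0.0], [0.0, 1.0, 0.0]))  # prints: impostor
```

Note that moving the threshold trades one error for the other: a stricter threshold lowers false matches but raises false non-matches, which is exactly why the choice of operating point matters in high-stakes settings.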
These occur when two pieces of biometric data from different people are judged to be from the same person: the system says they belong to the same person when actually they don't. Secondly, we also have false non-matches (FNM), which happen when two pieces of biometric data from the same person are judged to be from different persons. What happens nowadays is that in the EU there is a critical lack of accountability for the biometric systems deployed in the databases I explained before. If you remember the case studies I presented today, the asylum seeker who got that rejection, or the woman applying for a visa in Germany: those could be false matches. The fact that the judge found a matching fingerprint in EURODAC doesn't mean it corresponds to the same individual; it could have been a false match, belonging to a different person. What we are analyzing now in Security Flows is this type of error: we want to create awareness around the use of biometric systems in these very sensitive cases.

So, to better understand and analyze the biometric systems being deployed in the EU, we have come up in Security Flows with a methodology to understand who is designing these biometric systems at the EU borders, how much money is being invested in these technologies, and how connected these private actors are. To do that, we first downloaded all the contracts and public procurement documents from the EU eTendering platform, which is the EU platform where you can find all the contract award notices, publicly available. After we downloaded all these documents and contracts, we structured the contracts into a database, because algorithms cannot automatically analyze unstructured text.
You have to give this information a structure, and the structure we have given it is a dataset where every row is a contract and every column, every feature, is a section of the contract: for instance, the contracting authority, the company that won the contract, how much money, the ID of the contract, the title and the summary. Once this information is structured, we clean the data, and after cleaning we produce visualizations in order to better understand the relationship between the datafication of the EU borders and the connection with the private sector. Basically, we wanted to answer three questions: how much money has been invested in the EU borders, which are the most expensive contracts, and who are the most awarded contractors. After we structured the information, we worked on visualizations showing how much money has been invested in the private sector. On the left-hand side you will find the data corresponding to eu-LISA, and you can see that the budget has been increasing over the years. The same trend can be identified in the Frontex contracts. We also analyzed the types of contracts, and we found that eu-LISA invests basically in IT systems, contracts to design and maintain IT systems, while Frontex invests in a more heterogeneous set of contracts: you can find software solutions, but also deportation flights (they spend a lot of money on deportation flights) or surveillance technologies like drones. So which are the most expensive contracts? In eu-LISA, last year they released the contract to design and implement the Entry/Exit System, and it was awarded for more than 300 million euros. The second most expensive contract was related to maintenance.
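The structure just described, one row per contract and one column per section of the notice, can be sketched with plain Python. The three contracts, the agency and vendor names, and the figures below are all invented examples, not real tender data.

```python
# One row per contract; one key per section of the procurement notice.
contracts = [
    {"id": "C1", "authority": "agency-a", "contractor": "vendor-x",
     "year": 2019, "value_eur": 10_000_000, "title": "IT maintenance"},
    {"id": "C2", "authority": "agency-a", "contractor": "vendor-y",
     "year": 2020, "value_eur": 25_000_000, "title": "Biometric matching"},
    {"id": "C3", "authority": "agency-b", "contractor": "vendor-x",
     "year": 2020, "value_eur": 5_000_000, "title": "Aerial surveillance"},
]

def total_by(rows, key):
    """Aggregate contract value by any column, e.g. 'year' or 'contractor'."""
    totals = {}
    for row in rows:
        totals[row[key]] = totals.get(row[key], 0) + row["value_eur"]
    return totals

def most_expensive(rows):
    # The single largest award in the dataset.
    return max(rows, key=lambda row: row["value_eur"])

print(total_by(contracts, "year"))        # money invested per year
print(total_by(contracts, "contractor"))  # most awarded contractors
print(most_expensive(contracts)["title"]) # the most expensive contract
```

The three questions in the methodology (total investment, most expensive contracts, most awarded contractors) all reduce to aggregations like these once the documents are in row-and-column form.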
What happens is that when you implement these database systems, they need to be maintained: for instance, they run out of memory because they store too much information, so contracts need to be released in order to upgrade the memory of these systems. Interoperability corresponds to the third most expensive type of contract by eu-LISA. What they are planning to do is connect different databases, national and international, in order to share more information for border control. On the other hand, the most expensive Frontex contract is one they also released last year, for hiring aircraft systems for maritime aerial surveillance; so basically Frontex is investing money in drones and airplanes for surveilling the EU borders. Its second most expensive contract was related to return flights, basically deportation flights to deport migrants. As for the most awarded contractors: at eu-LISA, IDEMIA and Sopra Steria are two very well-known private actors in this context; they usually get the contracts to design and implement the databases. At Frontex, we find more heterogeneous private actors: companies related to aviation, to develop these maritime aerial surveillance systems, but also travel agencies, and these companies are related to the deportation flight contracts. We also wanted to analyze the connections between private actors, because we found that they usually cooperate with each other: when they apply to one of these tenders, they apply together as a consortium, a group of economic actors. And this is what we can see in this network, which shows when companies apply together for a contract.
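A co-bidding network like this can be derived directly from the award notices: any two companies that appear in the same consortium get an edge, weighted by how often they applied together. Here is a minimal sketch of that counting step, with invented company names; a graph tool would then take these weighted edges as input for visualization.

```python
from itertools import combinations
from collections import Counter

# Invented example: each entry lists the companies that applied together
# for one tender as a consortium.
tenders = [
    ["vendor-x", "vendor-y"],
    ["vendor-x", "vendor-y", "vendor-z"],
    ["vendor-z", "vendor-w"],
]

def co_bidding_edges(tenders):
    """Weighted edge list: how often each pair of companies bid together."""
    edges = Counter()
    for consortium in tenders:
        # Every pair within one consortium is one co-bidding event.
        for pair in combinations(sorted(set(consortium)), 2):
            edges[pair] += 1
    return edges

edges = co_bidding_edges(tenders)
print(edges[("vendor-x", "vendor-y")])  # prints: 2 (they co-applied twice)
```

Clusters in the resulting graph (densely connected groups of vendors) are what the dispersed Frontex network and the tighter eu-LISA network make visible.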
We can see that companies like 3M and Sopra Steria are nodes in the eu-LISA contract network that used to apply together, and they won basically the most expensive contracts, while for Frontex we see that the network is more dispersed and we find different clusters of private companies. Also, with this methodology we were able to analyze how much money has been invested in these four database systems. The most expensive, as I said before, is the EES: that contract was awarded to Sopra Steria and IDEMIA for more than 200 million euros. Sopra Steria was also in the contracts for EURODAC and other systems, and in this table you can see a summary of how much money has been awarded, and to which companies. Through this methodology we were also able to get more details about the biometric systems that are being implemented nowadays in these large databases. Since we know that, for instance, in EURODAC or in the Entry/Exit System, Sopra Steria and IDEMIA are the ones designing or maintaining the systems, we were able to analyze the metrics of the biometric systems implemented in these databases. This plot shows the error by ethnicity of the facial recognition algorithm developed by IDEMIA, and we can see that there are differences: there is a disparate impact across gender and ethnicity. For instance, we can see that errors are larger for females compared to males, and if we analyze this from an intersectionality perspective, we see that errors for Indian women are higher than, for instance, for white women. Also, using this methodology we were able to identify and detect other AI solutions that Frontex is investing in. We came upon an AI-based product in which Frontex has invested more than 2 million euros.
It's a system that is going to predict the risk of vessels in the Mediterranean Sea. They are developing a system that will connect different databases containing information about vessels, and the system will give the user a risk score. We don't really know the terms of this risk concept; risk in terms of what, we still don't know. It's also relevant to stress that it has been awarded to an Israeli company. Frontex and other agencies like EASO are also investing resources in trying to forecast migration flows, and here we see an approach using data from Frontex and EASO, plus Google Trends and social media, to try to forecast migration flows. As a computer scientist, I asked myself: can we forecast migration flows? And the answer is no. It's very difficult to forecast migration flows with a high percentage of accuracy. We see in this plot that the black line corresponds to the ground truth of the migration flows and the red line corresponds to the prediction of the system, and the lines differ a lot. It's very hard to forecast complex social activities or events like migration flows; AI will never deliver a system that is highly accurate at forecasting this type of social event.

Moving towards ethics: when we talk about the ethics of AI at the border, we usually discuss algorithmic discrimination and bias, the accountability of the systems, and the transparency and explainability of the algorithms used at the border. But I think we have to go beyond this debate and discuss our borders' ethics by design, because in this project I have realized that AI is just another layer that is used at the border. So basically I propose to move this debate from AI at the borders to borders' ethics by design. And so, to sum up.
Today I presented that databases and biometric systems are widely applied at the borders to criminalize migrants; that there is a need to analyze the human rights implications of these systems, because they are not 100% accurate, they make errors, and we have to be aware of that when implementing them in the migration context. To analyze these implications we need interdisciplinary teams: teams made of computer scientists, legal experts and social scientists who can analyze this scenario. And finally, as I also showed today, there is a lack of transparency and accountability around the solutions developed by the private sector, because we still don't know the metrics of these biometric systems, or how these other solutions are being developed. So thank you so much, and I'm looking forward to your questions and feedback.

Great, Anna, thank you so much. What a fascinating talk. I'm just going to try and change the view here so we can see... oh, good, we have a few questions. I have questions, but I'm going to try not to abuse my position as chair and open it up to the audience as well. Okay. Carolyn Gibson says: very interesting; would be interested to tie this research to the UN Security Council resolution on women, peace and security as a mechanism for applying the research to action. Yeah, Carolyn, that's a really interesting point. I think this is why more feminists who research WPS talk about how the WPS agenda is still very much state-centric and doesn't necessarily apply to the private sector, but also the idea that women migrants or refugees are on the move, right? So for me too, that's a really interesting point to reflect on.
Have you thought about how your research on AI technology at borders fits in with the UN's resolution and the UN's broader action plans around women, peace and security? Yeah, so this year the UN released a report analyzing discrimination at borders through the use of technology, and it's very interesting because they talk about algorithmic discrimination, and they also talk about the connections between technology at the border and private companies. So I think the UN is analyzing all of this. And talking about the EU: this year, in April 2021, they released the first legal framework to regulate AI. I think this is going to be the trend we will see in the coming months: all these national and international organizations creating awareness around biometric systems. Recently there has also been a discussion about facial recognition at the European Parliament, and they want to ban the use of biometric systems in public spaces. So I really think this is going to be a trend in the coming months. Yeah.

Oh, Clemens is asking a question around procurement. He says: in the procurement contracts, did you actually find that the EU awards contracts to technologies emerging from Horizon projects? Or are these coming more from an exclusively private side? And I guess I'll extend that, and Clemens won't be surprised by my question, to unpacking this relation: as, you know, Rita Abrahamsen for example talks about security assemblages, is it even useful to unpack state versus private? What sort of relationship is going on there when we're thinking about innovation and development of these particular technologies? So that's my tag-on question to Clemens's. Yeah.
So we have to keep in mind that all digitalization in government is related to private companies, because governments don't have the resources to develop all this digital technology themselves. Every time a process is digitalized, in government, in the welfare system, or at the borders, there is a private company that is taking the contract and developing the technology. So this is highly related: every time we want to analyze a system, we will find a private company developing the system for the government or another authority. Regarding the procurement methodology, we focused only on the contracts that these two agencies are releasing and awarding. We didn't connect these contracts with the Horizon 2020 program, because that program is more oriented towards researchers, although I know private companies can also be awarded under this scheme. So we haven't matched all this information, although my colleague Sarah Perret has recently published a paper analyzing all the Horizon 2020 projects related to border control. And because I'm familiar with some companies' names, I found, for instance, some companies that appear in both: in H2020 projects and also in the contracts that we analyzed.

Great. Alvina has a question she's going to ask live, so I think you can do that now, Alvina. Great, thank you so much, Amanda, and thank you so much, Anna. This is really, really fascinating. I'm a big fan of your work, and it's always nice to see you talk about it and have these beautiful slides. Just a quick comment about the UN question, which I found really fascinating.
I think it's important to note that it's not an official UN report; it's actually special rapporteurs who are writing these reports. They are not paid by the UN, so they're able to be really critical and do this kind of critical analysis, and the UN might be quite happy to claim that they have these really critical human rights experts out there critiquing problematic practices. But it's quite important to see how the EU and the UN interact in these more institutionalized spaces, and how people claim, or are able, to distance themselves from those institutional dynamics when they have their more activist human-rights-lawyer heads on. That aside, let me ask you three questions. One is about the definition of AI proposed by the European Commission, which I find really interesting, because, as you said, the literature, and we as social scientists and computer scientists, know there is no fixed definition. And yet, what is the effect of proposing such a static, almost once-and-for-all definition that removes all the struggles around trying to define AI? And who is part of defining what AI is? Are social scientists and other kinds of academics part of this debate or not? Because the EU traditionally tries to exclude academics from talking about these issues of security. My second question is about interoperability and trying to connect national and international databases. Again, it's proposed as if it's an easy move: let's just connect all of them and we have even more data to work with. But some of the discussions I've been following actually say it's quite problematic, because there are so many differences in day-to-day data collection; the way people actually collect this data can introduce a lot of errors, and there is no systematic approach to it.
So they're actually dealing with really banal problems, almost like: we have to train these officers before we're even able to harmonize all these different national databases. So I'd like you to reflect a little more on this banal, day-to-day administrative reality of collecting data before it is presented in these nice databases. And finally, could you expand on your last point? I think you said you'd like to provoke this question of whether borders are ethical by design. Can you expand on that notion a little bit? I think that sounds really fascinating. Thanks so much. Thank you, Alvina. Wow, thank you so much for your reflections. Also, on your reflection on the UN question: I find these authorities' narratives somewhat incoherent, because, for instance, the EU is saying that it wants to ban the use of biometric systems in public spaces, but on the other hand, they have implemented all these database systems that are taking fingerprints, and they want to implement facial recognition on migrants. So these two narratives are incoherent: on the one hand, you're saying that you're going to ban facial recognition in public spaces, but on the other hand, you're using facial recognition at the border. So yeah, it's so interesting to analyze these spaces. Regarding your first question, on the definition of AI: it's very interesting, and thank you for bringing it to the conversation, because if you analyze who is on the team that proposed the definition of AI, it's a team basically made up of people working in private companies, with only a few academics. And if we also analyze the gender or the ethnicity of this team, we find that it's basically white males. So thank you for bringing that to the discussion. Now, regarding interoperability.
I'm a bit skeptical about this project that the European Union is investing a lot of money in, because I don't think it's going to happen. When you really know how you merge two databases, you see how difficult it is. And it's even more difficult if you create your own database and I create my own database, and then one day we decide to merge them. It's very difficult, because you will use your own system and I will use my own system; you will use your own scheme to create your ID column, and I will create my own IDs for my column. So it's very difficult to merge all these different database systems, at both the national and the international level. So I'm quite skeptical, and also, if you analyze how different authorities work nationally: I'm getting very familiar with the different databases being used in Spain regarding migration. In Spain, we have two kinds of authorities, one is the Policía Nacional and the other is the Guardia Civil, and they have their own systems and they don't share information between them. They are also quite reluctant to share this information with other authorities like Frontex. So if you analyze all these cases, I really think that interoperability is not going to happen. But yeah, I can absolutely be wrong. And then, regarding border ethics and my reflection on the ethics of AI at the border, and questioning whether borders are ethical by design: I bring this discussion up because I'm a migrant. I'm a privileged migrant in the UK, because I have the right to work here. But I can see how borders are discriminatory by design. So using an algorithm that discriminates between nationalities is just another layer in this scenario.
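[Editor's note: the ID-column mismatch described above can be made concrete with a small sketch. Everything here is hypothetical and illustrative, not the actual schemas of the Policía Nacional, Guardia Civil, or any EU database: two registries record the same people under incompatible IDs, name conventions, and date formats, so linking them requires fragile ad-hoc normalization.]

```python
# Two hypothetical registries with incompatible ID schemes and formats --
# the kind of mismatch that makes database interoperability hard in practice.
policia = [
    {"id": "ES-0001", "nombre": "Ana Garcia", "fecha_nac": "1990-05-01"},
    {"id": "ES-0002", "nombre": "Luis Perez", "fecha_nac": "1985-11-23"},
]
guardia = [
    {"record_no": 37, "full_name": "GARCIA, ANA", "dob": "01/05/1990"},
    {"record_no": 52, "full_name": "MARTIN, EVA", "dob": "02/02/1979"},
]

def normalise_name(s):
    # "SURNAME, GIVEN" -> "given surname", lowercased
    if "," in s:
        last, first = [p.strip() for p in s.split(",", 1)]
        s = f"{first} {last}"
    return s.lower()

def normalise_date(s):
    # "DD/MM/YYYY" -> "YYYY-MM-DD"; ISO dates pass through unchanged
    if "/" in s:
        d, m, y = s.split("/")
        return f"{y}-{m}-{d}"
    return s

def link_records(a, b):
    """Link on a derived (name, birth date) key -- fragile by design,
    because there is no shared identifier between the two systems."""
    index = {(normalise_name(r["nombre"]), normalise_date(r["fecha_nac"])): r
             for r in a}
    matched, unmatched = [], []
    for r in b:
        key = (normalise_name(r["full_name"]), normalise_date(r["dob"]))
        (matched if key in index else unmatched).append(r)
    return matched, unmatched

matched, unmatched = link_records(policia, guardia)
print(len(matched), len(unmatched))  # -> 1 1
```

Even in this toy case, one record links only because the name and date conventions happen to be convertible; any typo, transliteration difference, or missing field breaks the link, which is the day-to-day reality behind "just connect the databases."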
I'm going to explain the case of the UK visa system that was deployed two years ago by the Home Office and then scrapped, because an organization in the UK found that it was discriminating against African nationalities. They were implementing an algorithm to assess visa applications in the UK, and they found that the algorithm was giving a high rejection score to African nationalities, so they stopped using this algorithmic system. But my question is: OK, you stop using this system, but this algorithm also shows that, before using this artificial intelligence solution, you were already discriminating. Because to train the system, they were using data from previous visa cases, right? And if the algorithm is basically analyzing patterns, and this system found the pattern that African applicants got a higher score of being rejected, this tells us that borders are discriminatory by design, because they were discriminating against African people before the implementation of the system. That's great. And maybe this is just a really naive question, so I apologize for that in advance. Borders as discriminatory: that seems to be a pretty established critique, at least in critical security studies, right? And I guess, I don't know if you have any suggestions, but what do we do? Given your important critique of AI, and the important critique of borders being discriminatory, are there any ways in which AI technology can work in a way that reduces the discrimination, reduces these social injustices that you've highlighted so well? I don't know, that's a question. That's a fascinating question. So I really think that AI could be used as a way of accountability.
So in the case that I explained before, with the UK visa system, the algorithm found the pattern that African nationalities had been discriminated against in the past; the algorithm was surfacing this fact. So I do think that AI can be used as a way of accountability. I have also analyzed police data in the US and in London, because the data sets are open source, and I found, for instance, that police data is biased: they stop more Black people than white people, and they carry out more activities and are more present in poor neighborhoods than in rich neighborhoods in London. So I do think that algorithms and AI can be used as a way of accountability, making our social injustices more explicit, because it's a way of quantifying this discrimination. Yeah, and like you've highlighted with the UK border, making that much more visible, right? Moving it from subjective decisions on paperwork to something that, like you said, has a clear pattern in an algorithm. So thank you so much for your fascinating talk. I learned so much, and it looks like from the comments that everyone has learned a great deal from you. So I look forward to seeing this materialize into a paper or a book or something else in print. And I want to thank the audience for participating, for engaging, and for making this series such a vibrant space to exchange ideas.
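[Editor's note: the idea of quantifying discrimination as a form of accountability can be sketched in a few lines. The decision records and group labels below are entirely hypothetical, not Home Office or police data; the point is only that a per-group rate makes a disparity explicit that case-by-case paperwork keeps hidden.]

```python
from collections import Counter

# Hypothetical past decisions: (applicant group, outcome) -- illustrative only.
decisions = [
    ("A", "rejected"), ("A", "rejected"), ("A", "granted"), ("A", "rejected"),
    ("B", "granted"), ("B", "granted"), ("B", "rejected"), ("B", "granted"),
]

def rejection_rates(records):
    """Rejection rate per group: a simple way to quantify a disparity
    that would otherwise stay buried in individual case files."""
    totals, rejected = Counter(), Counter()
    for group, outcome in records:
        totals[group] += 1
        if outcome == "rejected":
            rejected[group] += 1
    return {g: rejected[g] / totals[g] for g in totals}

rates = rejection_rates(decisions)
print(rates)  # -> {'A': 0.75, 'B': 0.25}
```

A 0.75 versus 0.25 rejection rate is exactly the kind of pattern the scrapped visa algorithm reproduced from its training data: the computation does not create the bias, it measures a bias that was already in the historical decisions.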