Our first speaker today is Mr. Haroon Khan. He's part of the Tilburg Law and Technology Department here at Tilburg University, and his interests are mostly with AI, AI ethics, that sort of thing. And I'm going to let him explain his own topic; I think that's the best.

Why don't you let me explain? Can everyone hear me at the back? Yeah, can people at home hear as well? Now, why do we make these things so small? All right, hi everybody. My name is Haroon Khan. I'm one of the lecturers in the Tilburg Law and Technology Department over in the end building, all the way up on the seventh floor. You can see for miles. But I'm from Switzerland, and we live in Holland; it's all flat, so it's very disappointing. But I'm not going to dilly-dally, I'll just jump into things right away.

The title of this presentation is When HAL Goes to Court. Show of hands, who's seen 2001: A Space Odyssey? Who's familiar with the concept of 2001: A Space Odyssey but hasn't seen it? Big scary computer kills everybody, spoiler for you. So that's sort of the nature of it. And in order to do this, I wanted to open with a quote by Arthur C. Clarke. Arthur C. Clarke was a very famous science fiction writer from the UK, and he had these three laws which sort of govern the overarching premise of what science fiction and what technology are. And the most famous one is this one: any sufficiently advanced technology is indistinguishable from magic. My grandmother grew up in a rural area in northern Pakistan in the 1940s, and she uses Skype and Zoom and Twitter. And I've asked her, what do you make of this? And she said, it's magic, it's magical. Even between our generation and the one that's coming after us, or the one that came just before us, the leaps and bounds we're making now are massive. Which is thanks, in large part, to Moore's law. I won't bore you with that now, but if anyone wants to nerd out with me after this, they can come find me.

To get started, I wanted to put this up. Feel free to scan it and go ahead. Make sure your phone volumes are all turned all the way up. Not down, up. Okay, for those of you who know this, this is called a Rickroll. It's a typical sort of internet prank. And the reason I chose to put this in here is because it demonstrates the degree to which we are dependent on, trusting of, or relying on technologies. You didn't think twice about what this could have been. It could have been a poll for all you knew, right? But it wasn't. So, gottem.

All right, is this working? Yes, it is. Okay, I wanted to start by prefacing that, as far as me and the other speakers go, I'm very much the lowest person on the ladder. I'm the least qualified and have the least experience. So what I'd like to deliver for you guys is sort of a crash-course introductory tutorial to the concept of AI, AI ethics, and the legal relationship between the law and artificial intelligence. With that in mind: I'm a very big cartoon nerd. Those of you who are in my classes will know this. And so I use cartoons to illustrate these sorts of principles where I can.

So, let's read the room. Show of hands, whoever wants to answer: where is AI in our daily lives? Where do we find it? Are you all that shy? Orange sweater? Where does AI manifest in our daily lives? Hey, Siri, right? Yeah, Siri, or for writing texts. Sure, so ChatGPT, you said that quick, do you use that? You don't have to answer that, don't answer that. There was a hand in the back too, yeah. Writing my thesis?
You're not a student of mine, are you? OK, no. Just kidding. What else? In the glasses right there, with the short hair? Yeah. ChatGPT has been said by everybody so far. Well, I try to avoid AI. I'm scared. I don't blame you. That's why I say thank you to Alexa, just in case, you know. On the way to work. On the way to work. Navigation, Google Maps, things like that, right? Right on. OK, super. So, I mean, it's blindingly obvious by this point that AI is something that's basically inextricable from our daily lives. Did I traumatize you? I'm so sorry. A little bit? OK, I won't pick on you anymore.

OK, this is a little bit more of a heavy-handed question. What is bias? Can anyone explain or outline what bias is? In the yellow sweater? Yeah, I'm making you run now, I'm sorry. I would say certain preconceived notions and ideas of what people or things are like. Sure, OK, anyone else? Daniel, one of the few people I know by name, so I'll pick on you. I guess similar to what you said, but based off of prior experiences that we have these biases for. OK, so I think we can all agree, collectively, that bias is sort of an unfair preference of one class, one group, one demographic over another, right?

So we've looked at what AI is, we've looked at what bias is. How does AI bias come about? Are there any data science people in here? Oh, perfect, so you didn't think I was going to do this, did you? What are some examples of AI bias? How does it come about? I just started, but I will try then. I mean, it's getting the data from somewhere. AI is taking the data that has been there. So I guess not everything has been put into data till now. So it might be biased in terms of where the data is coming from. I was really hoping someone would say that. One swing, one hit. Perfect, thank you.

What I'd like to talk about is where much of the AI bias we see today comes from, and any ways that we can treat it. And there's a difference between treating something and curing something; I'll get to that a little bit later. So I want to talk a little bit about data input bias, or human bias, and it's very simple. There are inextricable biases in human-generated data that are used to train AI algorithms, and the AI algorithms connect the dots, either actively or passively. They might connect dots that we don't see, because they're smarter than we are. And they replicate these biases, which results in propagating these disparities and these discriminatory practices. And there's a name for this in this field: GIGO. Does anyone know what GIGO stands for? GIGO stands for garbage in, garbage out. So if I were to water a plant with toxic waste, the fruit that grows from that plant is gonna be poisonous, right? And that's exactly what this is. If we train AI using data that we know is wrong, or even that we don't know is wrong, it'll point out and reinforce and strengthen things that are wrong in the data, and then we realize and we think, oh shoot, that's not great.

So there's a couple of case studies I wanted to jump into, but before I do that, there's a couple of things. Anyone know this cartoon? Just say it, shout it out. Powerpuff Girls, thank you.
I don't know if I'm showing my age here, because I know this is like a 90s thing, but Professor Utonium has this secret formula for how he created the Powerpuff Girls, and it was all this big mistake. Well: bias in inaccurate training data, plus systemic and historic injustices, plus a poor AI regulatory framework and the lack of legal precedent, results in unreliable artificial intelligence, the propagation of existing injustices, and the Collingridge dilemma. Does anyone know what the Collingridge dilemma is? Oh good, I get to explain it, super. So the Collingridge dilemma is this phenomenon, this sort of double-edged sword, where in order to understand a technology sufficiently, it has to sort of permeate and penetrate into society to a point where it's being used by a very large audience, because otherwise no one really knows how it works. And the problem, the caveat with this, is that once a technology has become so commonplace in our society, it becomes very hard to then rein it back in. And there's an expression for this: you can't put the genie back in the bottle once it's come out, right? Or, you know, the cat's let out of the bag. I like using idioms a lot, but that's why the genie is here. Fun fact: Robin Williams improvised so much of the script in this film that they couldn't submit it to the Oscars as a scripted movie.

So we're gonna take a look at a couple of case studies here. Anyone know this show? Thank you, it's my favorite show ever. I wrote my grad thesis based on an episode of this; we can nerd out about it later. This is the first one. This one is not about AI; this is about technology and manufacturing and things like that. So I wanted to start with this, and then we'll go into more complicated and more layered things. Sexism and the automotive safety industry: can cars be sexist? Can cars be designed in a sexist way? Hands up for yes, hands up for no, hands up for I don't know how to drive. Okay, so I bring this up because the crash test dummies, these guys, are designed in weight and proportions based on the average male physique, male anatomy. Because of this, the way that the three-point seat belt is designed and the way the airbags are designed are optimized for the male physique. When people realized this, they said, oh shucks, we better start testing for women too. So they just made the dummies smaller and they put them in the passenger seat, which is sexist for two reasons: one, it implies the man's always driving, and two, it implies that the female physique is just like the male physique, only smaller. You can tell an old white man made this call, right? So what this results in is that car crashes are more fatal for women than they are for men, because the safety systems are not designed for them. So this is an example of bias; it's not quite AI-related, but we'll get to one of those in just a second. I'd love to talk about this, I really would, but we have to keep going.

So these are some of the more classic case studies that many people who have studied or dabbled in this area will be very familiar with. The first one is the Amazon one. Hands up, who knows about the Amazon one? Right on, okay, right in the back there, you wanna tell us about it? I can, oh you can, you go ahead. Go ahead, go ahead. Right there, I'm pointing at it with my laser. There he is. Basically Amazon hired a lot of people based on training data that was racist and sexist. It was basically on which schools they went to.
It was supposed to not be racist or sexist, but given the data that trained it, the computer kind of did it again in a sexist way. Right. So the Amazon people, and this is a classic case, designed this hiring system whereby they were hoping to find the best candidates for these jobs, and they based this picture of the successful candidate on all the successful people that came before. And this was taking into account the dot-com era in the early and mid 2000s, when computer science, engineering, and technology were very heavily male-dominated fields. So the computer "learned", and I always use inverted commas when I say learned, because it's not sentient, I think. It learned that women were just not good at the job, and so female applicants would be cast out of the HR recruiting system based purely on the fact that they were women. So obviously Amazon nixed this whole project, right.

What are some of the legal issues here? Any lawyers in the audience? Yeah, you can just shout it out. You don't have to keep running around, I feel bad. Oh, I'm sorry. I can just parrot the answers out. It's discriminatory. Thank you, discrimination. So if you get rejected from a job based on the fact that you were a woman, that's a lawsuit, right. That's potential for a lawsuit there. And fun fact, I started getting calls back and emails back about applying for jobs when I hid the words Oxford, Cambridge, Harvard, Yale, Princeton in white, in tiny, tiny, tiny letters, somewhere in the CV. I don't know, AI is probably too smart for this now, but it worked for me, right. That's not how I got this job, I wanna be very clear. I would love to talk about this, I really would, but we have to keep going.

Anyone know this one, Tay? Okay, I heard someone say it. Oh yeah, go for it, just shout it out. I think, oh yeah, it was designed to interact with people on Twitter, and the people on Twitter themselves trained it to be very racist and discriminatory. Yes, exactly. So Microsoft designed this thing called Tay. Tay was an AI chatbot that was originally intended to mimic the speech patterns of a 14-year-old internet user. 4chan got ahold of this, and within about 24 hours it was extremely far right politically. I won't parrot some of the things it said, because I don't wanna get fired and I really love my job. But you see the point. They had to take this whole thing down, I think, in two or three days. And that goes to show how easily data can be manipulated, how data can be misused.

So we have a chatbot online spouting hate speech. Legally, what are we looking at here? Platform liability, for example. Elon Musk has notoriously removed some of the community guidelines that protected trans and non-binary people on Twitter. And one of the most far-right Twitter accounts now has a blue tick on it, and one of the original sort of trans rights accounts has had the blue tick removed. So we're looking at platform liability, we're looking at hate speech, discrimination again. What about data protection? Your data that you're generating on Twitter is now being used to train something else. Is that fair? I'd love to talk about this, I really would. Honestly, if I had all the time in the day, I would spend hours on this. But we have to keep going.
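To make the garbage-in, garbage-out mechanism behind cases like Amazon's and Tay's concrete, here is a minimal sketch in Python. Everything in it is synthetic and hypothetical: the group labels, the thresholds, and the toy "model" are invented for illustration and have nothing to do with the actual systems. The point it demonstrates is just that a model fitted to biased historical decisions faithfully reproduces the bias.

```python
# A toy illustration of "garbage in, garbage out" (GIGO). All data here is
# synthetic: past hiring decisions are biased against one group, and a
# model trained on them reproduces that bias even though it "works" on
# its own data.
import random

random.seed(42)

def make_historical_record(n=1000):
    """Synthesize past hiring data in which equally skilled candidates
    from group B were hired far less often than group A."""
    records = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        skill = random.random()        # true ability, uniform in [0, 1)
        if group == "A":
            hired = skill > 0.5        # merit-based threshold
        else:
            hired = skill > 0.8        # historically higher bar: the bias
        records.append((group, skill, hired))
    return records

def train_naive_model(records):
    """'Learn' the hire rate per group -- a stand-in for a real classifier
    that picks up group membership as a predictive feature."""
    rates = {}
    for g in ("A", "B"):
        hired = [r for r in records if r[0] == g and r[2]]
        total = [r for r in records if r[0] == g]
        rates[g] = len(hired) / len(total)
    return rates

model = train_naive_model(make_historical_record())
print(model)  # roughly {'A': 0.5, 'B': 0.2}: the model has "learned" the bias
```

The model never sees the unfair threshold directly; it only sees the outcomes, which is exactly how a real recruiting system can end up penalizing a group without anyone writing that rule down.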
So we'll do a couple more. Black and indigenous people of color, and medical technology. This one is really interesting because it has very severe real-world consequences. The programs and the hardware that are designed to detect blood oxygen, and the AI algorithms that are designed to detect melanoma, I believe it's called melanoma, I'm not a doctor, were based on training data largely generated by light-skinned people, Caucasian people, right? And so in order for a person of color to have a positive test for some of these conditions, their case has to be very severe. And this has a million consequences, right? This reinforces some of the disparity we have in access to healthcare. It's extremely discriminatory. And so we've covered this. And I would really love to talk about this, I really would, but we have another one.

And this is, I think, the most interesting one of all. Combating crime versus combating color, and racializing PredPol models. PredPol is predictive policing, so it's like Minority Report, for any of you who've seen the Tom Cruise movie. So this is a heat map of where certain crimes have occurred, and a lot of police departments will receive information like this and then redistribute their resources according to where they think they're the most valuable. The main problem I have with this is that there are only certain kinds of crimes you typically call the police for. And those crimes typically occur, and I'm generalizing quite severely here, they tend to happen in lower-income areas. And I'll use the United States as an example, because there's lots of crime going on over there and the police force is a mess, so it's a great case study. In the United States, a lot of these lower-income areas tend to be areas of color. So what you've done now is not only have you used this data to fund and to maximize efficiency in a system that is already fundamentally broken, but you're racializing crime, right? You don't call the police if someone's committed insurance fraud. You don't call the police if someone has been engaged in insider trading or bribery. You call the police if your car gets stolen, or if you get mugged, or if you get assaulted, or your home gets broken into, right? But the police still get involved in the white-collar crime stuff, the financial crimes, the embezzlement. So the legal issues here are very clear. Someone once flipped this on its head and generated a model around the white-collar crime instead, and that turned out to be very interesting. And I would love to talk about this so much, but we really do have to keep going.

So I wanted to talk about the risk taxonomy, right? The EU has this whole draft AI Act that's come out, and they've categorized AI thusly: minimal risk, limited risk, high risk, and unacceptable risk. And I don't have much time left, so I'm gonna have to go quickly here. Unacceptable risk: all AI systems that can be considered a clear threat to safety, livelihoods, or the rights of individuals, like social credit scores, for example, right? High risk: AI technology that is present or active in critical infrastructure, so like traffic lights and highway movement, things like that; educational and vocational training; product safety; law enforcement. And there are a lot of counterbalances and checks and balances to make sure that these things are done as fairly as possible. So risk assessments, high standards of data set quality, high security and human intervention, activity logging, et cetera. Obviously these are all subject to conditions, and it depends very greatly on what you're using, how you're using it, and all of that. And I'd love to talk about that. And we have limited risk: transparency obligations, so if you're interacting with a chatbot, the chatbot has to say, hey, I'm a chatbot, you know, I'm not real. So at least the person on the receiving end knows, okay, this isn't a real person. And then you have minimal risk. This is like video game AI and spam filters and things like that.
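As a rough way to see the structure of the taxonomy just described, here is a small sketch encoding the four tiers and the example obligations mentioned in the talk as a Python lookup. The tier examples and obligation strings are illustrative simplifications of the draft Act, not a legal reference.

```python
# A minimal sketch of the draft EU AI Act's four-tier risk taxonomy as
# described in the talk. Examples and obligations are simplified.
from enum import Enum

class Risk(Enum):
    MINIMAL = "minimal"            # e.g. video-game AI, spam filters
    LIMITED = "limited"            # e.g. chatbots
    HIGH = "high"                  # e.g. critical infrastructure, law enforcement
    UNACCEPTABLE = "unacceptable"  # e.g. social credit scoring

OBLIGATIONS = {
    Risk.MINIMAL: ["no special obligations beyond existing law"],
    Risk.LIMITED: ["transparency: tell users they are talking to a machine"],
    Risk.HIGH: ["risk assessment", "high-quality data sets",
                "activity logging", "human oversight"],
    Risk.UNACCEPTABLE: ["prohibited outright"],
}

def obligations_for(risk):
    """Look up the (simplified) obligations attached to a risk tier."""
    return OBLIGATIONS[risk]

print(obligations_for(Risk.HIGH))
```

The real Act conditions all of this on context, as the talk notes; the sketch only shows the tiered shape of the regulation.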
This is a really time-sensitive issue. I'll try and use a metaphor involving rappers here. If you don't know who these are, I'm walking out right now. Tupac on the West Coast, Biggie on the East. And in the U.S., there's this idea of East Coast code versus West Coast code. By the time the nerds in Silicon Valley have developed a new technology, these guys are only just understanding the one that came before it. And as soon as the guys on the East Coast, in Washington, D.C., have created legislative instruments to tackle the technology on the West Coast, they've got ten new things going on over there. So it's this constant tug-of-war, in the United States, which I'm using as an example again, between the East and the West, trying to figure out how we can best clamp down on these technologies that are coming out all the time. And you guys have seen all the cringy U.S. politician videos, trying to explain what's an internet, what's a TikTok, right? You guys have seen those. I've seen too many of those. This is a very time-sensitive issue.

So, treatments versus cures. I'm aware I don't have much time left, so I'll go as quick as I can. Treatments versus cures. A treatment can manage something that has already manifested, and a cure can eradicate it, right? There's no cure for cancer. There are treatments for cancer, chemotherapy, et cetera, right? There's no cure for cancer. Is there a cure for AI bias? I don't know. I'd love to find out; that's what half my research is about. What do you think? Show of hands: is there a cure for AI bias? Yes. No. There's no right or wrong answer, I just wanted to find out. So, will you press the button? AI will be instantly and permanently debiased, but you'll have one sock that is always slightly wet. I don't know, you know. That's a tough call for me. Oh boy, all my notes. That's okay. So I wrote a couple of things down, I wanna make sure I read them properly. Do I need this? I do. So, one at a time.

This is one treatment for AI bias: examining context. Some industries and use cases are more prone to AI bias, right? And having a prior record of what's going on, and of the biases that came before us, can help us inform ourselves better on what's to come. So that's one. Inclusion by design: engage with social scientists and humanists and people who are involved in diversity and inclusion, so that you know you're designing something that is, to a degree, future-proof, right? There's another treatment. Representative data sets is another one. You have to establish proper procedures, proper guidelines, on how to poll the largest number of people, the most diverse range of people, because you want Black people, white people, Asian people, queer people, trans people, non-binary people, abled and disabled people. You need a very holistic, representative data set of the people you'll be interacting with in order to best cater to them, right? If I, as a person of color, go to the dermatologist and they use an AI that was trained entirely on Caucasian people, I would question the efficacy and the utility of it, based on my skin, the physiological differences that I may have, the genetic predispositions that I may have, right? So we need to make sure that our AI is comprehensive and is inclusive.
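One way to picture the representative-data-sets treatment is a quick audit that compares a training set's demographic composition against the population the system will serve. The sketch below is hypothetical: the group labels, counts, and reference shares are invented placeholders, and a real audit would need properly defined categories and sources.

```python
# A sketch of one "treatment" above: checking whether a training set is
# demographically representative before using it. All numbers are made up.
def representation_gaps(dataset_counts, reference_shares):
    """Compare each group's share of the data set with its share of the
    population the system will serve; a positive gap means the group is
    under-represented in the data."""
    total = sum(dataset_counts.values())
    return {
        group: reference_shares[group] - dataset_counts.get(group, 0) / total
        for group in reference_shares
    }

# Hypothetical dermatology training set vs. the population it will serve:
counts = {"light skin": 9000, "dark skin": 1000}
reference = {"light skin": 0.6, "dark skin": 0.4}
print(representation_gaps(counts, reference))
# {'light skin': -0.3, 'dark skin': 0.3}: dark skin is badly under-represented
```

A check like this treats the symptom rather than curing it, in the talk's terms, but it makes the gap visible before the model is trained rather than after it fails in the clinic.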
Human oversight: don't just let these things run rampant. Make sure there's someone standing over the AI to press the button if anything were to go wrong. That's hugely reductive, a huge paraphrase, but I think the idea is there. And explainability. Black-box algorithms make this very difficult; black-box algorithms kind of come to these decisions and don't really tell anybody why. But explainability, being able to articulate the reasons why AI comes to this or that conclusion, is critically important.

The symposium is titled, what, Authenticity in the Digital Age, right? I haven't talked much about authenticity. But in order to illustrate authenticity, I wanted to play a game with you guys. With these images and this music and this artwork and these poems that are being generated by AI, it's often hard to tell what's human and what's not. And our children might not know who's human, who's not, what's human, what's not. Am I real? Is this real? Is that real? And that's a question people have been asking for a very, very, very, very long time. Who am I, what's real, et cetera. Plato's Cave.

So I wanna play a game with you. I wanna read a poem by Ze Frank, and I'm gonna ask you all to cooperate with me. I'm gonna sit on this chair, and you're all gonna close your eyes, and you raise your hand if you think something applies to you. And you can open your eyes again when I say the word congratulations. You guys with me? Yeah? Okay. So this poem is called The Human Test. Please raise your hand if something applies to you.

Have you ever eaten a booger long past your childhood? It's okay, it's safe here. Have you ever made a small, weird sound when you remembered something embarrassing? Have you ever purposely lowercased the first letter in a text in order to come off as sad or disappointed? Okay. Have you ever ended a text with a period as a sign of aggression? Okay, period. Have you ever laughed or smiled when someone said something terrible to you, and then spent the rest of the day wondering why you reacted that way? Yeah. Have you ever seemed to lose your airplane ticket a thousand times between the check-in and the gate? Have you ever put on a pair of pants and much later realized there was a loose sock smushed up against your thigh? Have you ever tried to guess someone else's password so many times that it locked their account? Have you ever had a nagging feeling that one day you'll be discovered as a fraud? Have you ever hoped there was some ability you hadn't discovered yet that you were just naturally gifted at? Have you ever broken something in real life and then found yourself looking for an undo button in real life? Have you ever marveled at how someone you thought was so ordinary could suddenly become so beautiful? Have you ever stared at your phone smiling like an idiot while texting with somebody? Have you ever subsequently texted that person the phrase, I'm staring at the phone smiling like an idiot? Have you ever been tempted to, and then given in to the temptation of, looking through someone else's phone? Have you ever had a conversation with yourself and suddenly realized how awful you are to yourself? Has your phone ever run out of battery in the middle of an argument, and it sort of felt like the phone was breaking up with both of you? Have you ever thought that working and trying was futile, because it should just be easier than this, or that this is supposed to happen naturally?
Have you ever realized that very little in the long run just happens naturally? Have you ever woken up blissfully and suddenly been flooded by the awful remembrance that someone had left you? Have you ever lost the ability to imagine a future without a person that was no longer in your life? Have you ever looked back on that event with the sad smile of autumn and the realization that futures will happen regardless? Congratulations. You've all completed the test, and you are all human.

Thank you so much for that talk, Haroon. Honestly, it was very interesting in the way of understanding AI and all the biases that come with it, which I feel does say a lot about authenticity as well. But at the same time, I really liked the poem activity. Some were really funny, some were quite sad, but it was really interesting. Anyway, we will now be open to questions for Haroon. So if anybody has a question, feel free to raise your hand and we'll pass you a mic. Questions, anybody? Alec. No, it's fine. I was wondering about your research, and in particular how your own research contributes or helps to diminish the bias, or to contribute to inclusivity. Inclusivity, I'm very sorry. Since you were talking about this topic, what do you do in your research yourself?

So, can you hear me back there? I'm trying to get involved in a couple of different projects. The one I'm most excited about, that hasn't quite gone anywhere yet, is the liability of platforms in regard to the manosphere. Are you familiar with the manosphere? You're familiar with incels? The manosphere is sort of where all the incels hang out. And you all know who Andrew Tate is, I presume, right? So, people like him and the effect that they have on young people, particularly young men, and whether or not online platforms like TikTok and Twitter have a duty of care to prevent hate speech and clearly dangerous dialogue from permeating into the wider depths of their reach. So that's one project. And another one that I'm hoping to work on in the near future is investigating the disparities in minority demographics' access to mental health support through the internet. Because in many cultures, especially my own, for example, I'm South Asian, I'm originally from Kashmir, and in our culture, if you're a bit antiquated, you don't really believe in mental health. And I'm sure a lot of people of color can relate to that. You know, mom, I'm depressed. Oh, you're not depressed, you're just sad. Oh, you'll be fine, right? These are clinical things that people need to be more cognizant about. And I don't blame the parents, because this is a very new conversation. But it just so happens that, because of the communities they come from and the access to resources some people have, getting the mental health support that they need is impeded in some way. So that's something I'm looking at as well. I'd love to look more at the more intrinsic biases and the more algorithmic, computer science aspect of things. I'm not much of a CS buff; I'm more of a legal academic. But I have, hopefully, a career ahead of me, so I'm looking forward to a wide variety of different projects.

Right behind you, right there. Yeah. Yeah. I have a question regarding the copyrights of the data sets that generative AI is using, because from my understanding, a lot of the data they use to train AI comes from user content on the internet, without consent. So you talked about explainability and the black box problem.
So I don't know if it is technologically possible to just unravel the black box and label all the references and resources where the result is coming from. So, for those of you who know, black-box algorithms come to these conclusions and generate these results, and we know very little about the decisions that are being made in the algorithm itself. So that's probably one of the more autonomous types of AI that we have. If anyone knows better, please feel free to correct me. And I don't know very much about the copyrights and the rights that users have over their data, and protecting it from being used to train AI. I don't have an answer for that. I'm sure many brilliant people in my department could speak with you for hours about it, but I can't. So I would advise you to come upstairs to the seventh floor and knock around, or send an email, because there are amazing people in my department. Hopefully I'm one of them someday; not yet, but we'll see.

I think that's the problem. You gave some examples about AI and the problems it causes in society, but they were fairly old examples, I think. What is a very recent, very new problem that has been coming up regarding AI and its implementation in the real world? The first thing that comes to mind are the beautification standards of TikTok. Does anyone know what I'm talking about when I say that? Do I need to explain it? I will anyway. On TikTok, if you are deemed to be conventionally attractive, your content is pushed a little bit more. And if you aren't, and this is why my TikToks don't get any views, I'm joking, then your content isn't quite pushed as much. And there are obviously a lot of tech things going on behind the scenes to determine who is and who is not attractive. By what standards are they measuring this? Are users consenting to having their data processed in this way? And so on. So that would be a more contemporary example, I think. If anyone has any better ones, then I'll be around for a while after this, and I'd love to hear about them.

Okay, well, thank you so much. We are gonna be doing a little, I won't call it an activity, I will say it's a little fun thing we're gonna be doing next. We are going to be asking ChatGPT what it thinks about how we can be authentic in the digital age. And let's see what ChatGPT has to say about it. It wouldn't be a symposium about authenticity if we didn't bring ChatGPT in, right? I don't know how many of you can actually read that, so I will read it out loud anyway. Being authentic in the digital age can be a challenge, as we often present, even I can't read this, curated versions of ourselves on social media and other online platforms. However, there are some steps you can take to cultivate authenticity in the digital world. Be true to yourself: it's important to know and be comfortable with who you are. Don't try to be someone you're not, and don't present a false image of yourself online. Share your own experiences. Share your personal stories. My vision's dying on me, apparently. Share your personal stories and experiences in a genuine way. Don't be afraid to be vulnerable, as this can help you connect with others on a deeper level. Engage with others authentically. I'm just gonna read the short version now. Use social media responsibly, and prioritize face-to-face interactions. All in all, very basic information, but some solid advice from ChatGPT. And with that, we will be moving on to our next speaker, Mr. Siddharth Mehrotra.
He is currently a PhD candidate at the Delft University of Technology, and he focuses on building human-AI trust. So with that, I would like to invite him on stage.

Okay, people at the back, can you hear me? Perfect. Okay, so, a brief introduction about me. I'm currently a PhD candidate at Delft University of Technology. I work in the Interactive Intelligence group, and my background is in computer science. So my bachelor's was in computer science, my master's was in computer science, but my specialization is in human-computer interaction, which is basically related to how humans interact with computers. And I opted for this one particular talk because the topic was so fascinating to me. Okay, so how does it actually connect to the part about trust? So before we deep-dive in, this should work. Yeah, can you please just join the Slido? I have a couple of interactive questions to ask over there, so that might be interesting. No, it's not going to play any YouTube video; it's just gonna be polls. Three, two, one, okay, yeah, there. So, just to gauge the audience itself here: what are you currently studying? If you are teaching, then it could be the field that you're teaching in. Okay, interesting. Okay, I'll stick to this part then. So we have a good combination over here, and this is what is really interesting to me, especially the part about communications and media, because the research that I work on is related to interactions with humans, right, from the computer perspective. So it's really interesting. Okay, so this work is in collaboration with my supervisors, Professor Catholijn Jonker and Assistant Professor Dr. Myrthe Tielman.

So let's start understanding the part about authenticity. One of the things that you see highlighted over here is trustworthiness, right, which is a very similar concept to what authenticity could be. And if I just go by this book from Anne Morriss, then authenticity is one of the key pillars of trust itself, right, and the others are empathy and logic. So going by this perspective, authenticity is like honesty to a certain extent, and also about how much I can trust you, because if you are being authentic with me, then there's a good possibility that I can trust you, right? So those are the things which somehow get related on one scale. But we need to understand that trust in AI is altogether a different thing, right? So when I trust somebody, let's suppose a stranger, then the first thing that I have is a certain amount of uncertainty, which is like, oh, can you really trust this person? And at the same time, when you're trusting somebody, there's a certain goal in your head that you want to accomplish, and it always goes with some vulnerability. There's a risk associated with it, right? So there are two key components of trust in AI, which are related to uncertainty and vulnerability. So the element of risk is really important while we are making anybody trust one particular system, and at the same time, there needs to be some uncertainty.

So the talk will follow from here in a more technical sense, which will give some sort of notion of what the previous speaker discussed about biases in AI and how we can overcome them, but it will be more from the technical side. Before that: I talked about appropriate trust, that was the title of my talk, so designing for appropriate trust. What is appropriate trust in the first instance, right?
And what I'm really talking about: okay, trust is one part, but what is appropriate trust, right? So there are some definitions from the computer science perspective, which are related to following a correct AI recommendation, or not following an incorrect recommendation. So if you follow some correct recommendation from AI, then it is appropriate trust, right? And the other cases lead to over- and under-trust. That is somehow the more consensus kind of definition in AI terms, mostly from the computer science community. Keep that in mind.

Now go to the perspective where people from psychology, or even human-computer interaction, target this. So it's related to this part of trustworthiness. Here I'm not talking about relying on some AI recommendation or not relying on an AI recommendation, because that's my reliance, and that's not exactly trust. If I'm talking about the trustworthiness of another person, then I'm actually not just focusing upon reliance. It could be other things also, right? How honest is that person? How emotionally can I connect with that person? Does that person have some inherent values that are very similar to mine, right? So those are some distinct perspectives compared to what computer science people have already looked at. And this one is from philosophy, which is like: you have some justified beliefs that this other person has some suitable dispositions. So justified beliefs are just that, right: okay, I'm interacting with you, and let's suppose we have been friends together for five or six years. Now I have a justified belief that, okay, if you're gonna invite me for a lunch, then it's sure that you will be there, to a certain extent. And suitable dispositions means that it's gonna create some sort of an emotional link between both of us, right? So that is coming from philosophy, and you see there is this vast amount of variation in what appropriate trust is. And in terms of AI also, we are really not sure. In a very simplistic sense, we can understand: if you're not over-trusting someone, and if you're not under-trusting someone either, then it is appropriate trust.

Then going back to why this is really important, why we need to have appropriate trust. The same example I'll use, it's from Amazon, right? So the people over there in the Amazon team, they actually over-trusted their system, and that eventually resulted in bias, right? And the same thing happened with predictive policing systems, or the surveillance systems: too much over-trust in these technologies. But at the same time, what we can also learn is that we need not always under-trust it either, right? So there was this train accident which happened, and the driver ignored some of the signals, right? So technology is not always harmful, to a certain extent, if you try to understand it on a very basic level, right? So that somehow links together the facts that you should not over-trust it, and you also need not under-trust it, right?

So, what do you think is the biggest challenge when it comes to trusting AI systems? Maybe you can use the same Slido. Bias, we have already talked about that on a larger scale, I suppose. Exploitation, limited understanding, biased data, no face-to-face connection, indeed. Ignores others, okay. Can't control outcomes. Black box, indeed. No emotions, of course. And previous experiences, lack of transparency. Okay, I'm just gonna move ahead from this.
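The computer-science definition given a moment ago, following a correct AI recommendation or rejecting an incorrect one, fits in a four-cell table, which can be written down directly. This is only a sketch of that definition, not of any deployed system:

```python
# The follow/reject definition of appropriate trust from the talk:
# relying on a correct AI recommendation, or rejecting an incorrect one,
# is appropriate; the other two cells are over- and under-trust.
def classify_trust(ai_was_correct, user_followed_ai):
    if user_followed_ai:
        return "appropriate trust" if ai_was_correct else "over-trust"
    return "under-trust" if ai_was_correct else "appropriate trust"

# The four cells of the definition:
assert classify_trust(True, True) == "appropriate trust"
assert classify_trust(False, True) == "over-trust"
assert classify_trust(True, False) == "under-trust"
assert classify_trust(False, False) == "appropriate trust"
```

Note what this deliberately leaves out, which is the speaker's point: honesty, emotional connection, and shared values from the psychology and philosophy perspectives do not appear anywhere in this table.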
So you already have a whole variety of reasons why, when it comes to trusting AI systems, there are X amount of challenges, right? But what we can actually do about that is something from the perspective of going back into the data, or trying to understand how people are actually interacting with these systems, right? So the research question that I worked upon in my PhD was related to: how can we design AI systems so that people can appropriately trust them? And at the same time, how can we verify it? So if you are the one who is trusting the AI system, is there a possibility for you to verify whether the trust was appropriate or not? That's an important step, which links together with the second part: how we can design those human-AI interactions. So if I'm interacting with you on a certain scale, then at the same time, from the perspective of the developer or designer of that system, I already know: okay, I need to design for appropriate trust; I need not just put in the data and have some result come out, right?

So what we did was go a bit into the history, to understand whether this is a new challenge to us from the computer science perspective. But it doesn't seem so. So if you see over here, this is the last around 10 years of data, which is like the last decade, and there are so many papers which have worked upon this topic of appropriate trust. It might not be exactly the same keyword; it could be related to calibrated trust, or trust calibration, or optimal trust. But yeah, this topic has already been in research for so many years. And at the same time, there are various models, which were first used in decision aids, automation, robots, virtual avatars, chatbots, and now large language models, right? So it's not a new challenge; it has been there throughout the history itself, right? And we are still at that phase of, okay, what can we do about it? And that somehow relates to bias in the AI models, right? At a certain angle, we need to be very sure that, okay, we need to be skeptical at the point where AI is making a certain decision for us. But at the same time, we need to be very sure that, okay, this is how the model has been trained, and I know that this time I can trust it.

So one of the key elements that I've looked into in my PhD is related to the integrity of AI systems. We know that the first part, which is related to ability, is just going to increase and increase. We talked about large language models: it started with GPT-3, now it's ending up with GPT-4, and many other things going on. At the same time, Adobe introduced text-to-image, which is this Firefly thing: you just type something in, and the images come out of that. But what about the integrity of the AI systems? We are just increasing the capabilities of AI models. But if we look at the three pillars of trust, one of the things related to trust is the integrity of the system: how fair it is, how honest it is, how transparent it is, right? And those are the notions we have already talked about. Benevolence is something which comes mostly into human-human relationships, not exactly with AI systems, until and unless it has a visual avatar. So it is related to having this emotional connect. So if I've known a person for so many years, then to a good extent I'm going to trust that person.
But in terms of AI, let's suppose, we still need to figure out the integrity element of it first, before going beyond to benevolence, right? So what we did was try to understand: if we create explanations by the AI systems which are based on integrity, and let people interact with those systems, can we understand whether people are going to have over-trust in these systems, or whether they're going to have under-trust in these systems, right? And we picked three key elements of integrity, which are related to honesty, or authenticity you can say in this context, transparency, and fairness. And can they result in appropriateness of trust or not?

So what we did was, we gave one sample of a food image to our participants, and we asked them: okay, you just need to estimate how many calories this food plate has. And we gave them four different options. And you also need to rate how confident you are in your own decision. So suppose, okay, you say this dish has 360 calories. Now it goes to the next step, where you can ask an AI assistant to help you with this task, right? And at the same time, the AI assistant gives some explanation. It says, okay, it's 610 calories, maybe. So the one you selected was 360, and the AI agent says, okay, it's gonna be 610. And the system also gave: okay, these are the key ingredients in this dish, and it has a certain confidence, like, okay, I'm 90% confident that this particular food plate has some tomatoes, for example. In the end, what we asked is to either select your own answer or select the AI's instead, right? So there's an option for you: you can just go with your own selection, whatever you said, or you can use the AI's answer, right? And people did so many crazy things out here. So they were like, okay, the explanation is not so robust, so I'm going ahead with myself. Or, maybe the explanation is too detailed, and I'm so sure that the system explained everything perfectly, but still they went ahead with themselves. And we asked in the end: okay, how do you rate your confidence, and were the explanations really useful or not? And then there were the subjective trust levels, right? And the results of these experiments were so vast and diverse, and we're going to look into them.

But first, if I go back one slide, I talked about three different principles of integrity, right? Honesty, transparency, and fairness. These are the three different principles that we explored, and the explanations that the system provided over here are related to those three principles. So let's suppose the explanation is the kind which talks about nothing. It lacks any reference; it just talks about, okay, this is the result, and I can tell you that this plate has 600 calories. This is just a baseline condition, which sometimes happens in AI systems, right? It just gives a recommendation. What next can happen? It can talk about honesty. So it talks about its capabilities, and it can also talk about its confidence. So let's suppose it's 70% confident that this is tomato, and it also tells you that this one particular piece of information is truthful. Why is it truthful? Because it has gone through a process with different annotators, and the annotators have said, okay, this one is particularly truthful, because this dish does have tomatoes. Next, we had explanations focusing upon transparency. So we talked about the process of decision-making.
So, how it actually came up to this notion that, okay, in the end there are 600 calories in this dish; the focus was on the inner workings of the system itself. And the third was related to fairness: whether there was any risk existing in the results, right? And whether the data is discriminatory or not. So on those levels, there were three different types of explanations that we provided. And we somehow linked this to mathematical foundations, where we tried to understand whether the trust was appropriate, or it was over-trust, or it was totally inconsistent behavior.

So, what do you think could be the one explanation type which could have resulted in people having more trust, or I would say appropriate trust, in the systems? So we provided three different types of explanations, and one was just the baseline. Interesting. Okay, I take it that the majority is going ahead with transparency, right? Now let's dive a bit deep into the results. So the transparency condition over here, if you just try to relate it to appropriate trust, was very similar to the ability condition. Surprising. But what happened in this case? If you talk about fairness, that's something people are mostly aware of, right? So here we are talking about AI making biased decisions; it is coming up with its own biases. And maybe the food that you see over there was annotated by people from a Western population, but the dish is from the South, right? So those are the different things which come up while giving an explanation which is linked to the fairness part, and the honesty and even the ability notions over here are very different. But there's a caveat over here: when you increase the fairness of a certain thing, it also increases the over-trust. So you talk about fairness, okay, this one particular explanation is talking about fairness in terms of explanations; the over-trust part of it somehow also increases.

Now let's look at what people said, indeed. So we asked them, over 15 different rounds: okay, how does your trust change? And whenever the system provided a wrong answer, their trust dropped, of course. It is like, I'm giving you a wrong answer, and after that I ask you, do you trust me? It drops. But substantially, when the system talked about honesty, it was a bit higher, even when the trust dropped. So these kinds of notions show that, compared to the baseline condition, the three different perspectives of explanations sometimes do help people in making informed decisions about trusting the system itself or not. And this is one of those concrete proofs that says, okay, you need to design a system appropriately, in such a manner that people are gonna trust it, and be skeptical when it need not be trusted. So these are some of the key results, which talk about whether the previous answer was correct or it was on the wrong side. And what we aimed to contribute with this one particular study was that there is a formal computation that we can do in order to understand whether people are trusting the systems appropriately or not.
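Putting the study design and that formal computation together, a round-by-round scoring could look something like the sketch below. The rounds listed are invented; only the structure — an explanation condition, whether the AI was correct, and whether the participant chose the AI's answer — follows the experiment described above.

```python
# A hypothetical round-by-round scoring of a study like the one described:
# each round records the explanation condition, whether the AI's calorie
# estimate was correct, and whether the participant went with the AI.
from collections import Counter

def outcome(ai_correct, chose_ai):
    """Score one round with the follow/reject definition used earlier."""
    if chose_ai:
        return "appropriate" if ai_correct else "over-trust"
    return "under-trust" if ai_correct else "appropriate"

# (explanation condition, AI was correct?, participant chose AI?) -- made up
rounds = [
    ("baseline",     True,  False),
    ("honesty",      True,  True),
    ("transparency", True,  True),
    ("fairness",     False, True),   # the caveat: fairness framing can invite over-trust
    ("fairness",     True,  True),
]

per_condition = {}
for cond, ai_ok, chose in rounds:
    per_condition.setdefault(cond, Counter())[outcome(ai_ok, chose)] += 1
print(per_condition)
```

Aggregating per condition like this is one way the over-trust caveat for fairness explanations could show up in the numbers, as a higher count in the over-trust cell for that condition.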
So, rather than focusing on whether I am being more skeptical or more under-trusting with this one particular system: at the same time, we need to know that the expressions of fairness principles are really important, because the hype around AI is there, and a system that tries to explain its own biases, and somehow says, these are the biases I'm currently trying to deal with, can actually help people to make their trust in the systems more calibrated. And finally, subjective trust is always something which builds up after a certain amount of interaction. If you just interact with ChatGPT for the first time, you'll be like, oh, amazing, super, what's happening over here? But after three or four interactions, when you interact and ask different rounds of questions, then you'll actually be like, oh, okay, maybe it's not working as great as I was thinking in the first instance, right? So trust doesn't just build up in one or two interactions. It requires a certain amount of time to build up. And finally, here are some of the selected works, if you are interested in knowing more about how to design for appropriate trust. And I'm happy to take questions.

Yes, thank you so much for the talk. And yes, like you said, we will now be open for questions. So just raise your hand, and someone will come with a mic. I was wondering, the findings that you have are very interesting, and certainly could, I think, be helpful in heightening the trust that people have in AI when it is justified. But what exact incentives do actual implementers of such technology have, given that they are usually institutions based on exploiting both consumers and producers?

Indeed, thanks for the question. It's an interesting one. I think the overall approach that we followed was, rather than relying on any of the data sets which are already there, let's come up with one which is made on the principles of responsible AI. So we came up with our own data set. Well, let's have annotators, let's suppose only 50 annotators; we need not have a larger data set of more than 100 MBs. So we come up with our own data set, and we know what this one particular set of data is that we are feeding into the system, so no garbage in, garbage out. That's what we are dealing with here. And this comes with the notion that if you're designing a system, and you know that you need to make it responsible, you need to make it ethical, then you care about whatever you're actually putting into it. So what we did was, we annotated those dishes that we were really sure of. So I'm coming from South Asia, and I know that, okay, this one particular food plate has this many calories, because it's deep-fried. It might not be the same information which is available to a person sitting in the US. They'd be like, okay, I know about this dish, I've eaten it multiple times, but in terms of calories, maybe it is baked. But eventually it could have been fried, right? So those are the different things which you need to understand in order to design for appropriate trust, right? And the data which you actually put in is where the importance of trust comes in here.

We're gonna do a little AI-generated art, because it does relate to our next speaker. So if you could all just throw in a suggestion: what do you wanna see an AI art generator create? And I think you should also be able to vote on other people's answers. So you could do that if you don't have anything you wanna add yourself. Crab pianist is definitely a new one; frog lifting weights. I've put a timer on this, because otherwise we're gonna have like a million words.
Okay, so that's all the answers we have, and I'm gonna pick one. I can't get crab pianist out of my head, to be honest, I think that is definitely a funny one. So we've got Craiyon, and I'm gonna type in crab pianist, and it's gonna draw it for us. So while that's happening, we are gonna start with our next speaker, Dr. Eric Postma. And he is going to be speaking about AI-generated art. Thank you.

Okay, thank you very much. So I'm a professor in artificial, what am I? Oh, yeah, no, I see the affiliation from the Tilburg School of Humanities and Digital Sciences, but I'm a professor in AI. I'm also part-time affiliated with JADS, which is this institute in Den Bosch, which is a collaboration of Tilburg University and Eindhoven University of Technology. And I will talk about authenticity in the AI era, regarding a project that I've been involved in actually since, I think, 2000 or so. And it's about art.

So, you might recognize this segment of a famous painting by Vincent van Gogh. Throughout my life, I've been interested in pattern recognition, which is a bit of an old-fashioned term, but it's actually the question of how we recognize patterns. That could be how you recognize faces or objects, et cetera. And that's a big mystery, how our brain is doing that. Our brain is generating perceptions, and our awareness of perceptions. And there are all kinds of discussions about how veridical this is; in other words, do we see the real world as it is, or do we create an internal hallucination? But at any rate, what artists like Van Gogh are doing is manipulating our perception to achieve certain emotions or appreciations of their art. And I think that's an interesting thing. And that's why we've studied this since 2000 from the perspective of AI, where we try to capture and to model. So these are two sides of the same coin. Capture means we try to mimic how art experts see paintings, and how they recognize, for instance, if they see a segment like this, that they recognize, oh, this is a Van Gogh. So you could imagine you built an authentication machine. That was not my interest, but in principle, you could do that. And on the other hand, you could see, okay, how does the brain recognize these patterns?

Now, when I coined this idea, I thought it was a silly idea. And my colleague said, it's a great idea. I said, no, it's a silly idea, who is interested in that? He said, you should apply for funding, I'm sure it will be funded. And ultimately I applied for funding, and we got funding for this project, and it turned out to be very media-savvy. Because before we did anything, I already got an offer of 100,000 euros to give my algorithm to an art dealer. And even when I said, I don't have an algorithm, the art dealer said, but I'll still pay you 100,000 euros if I can say that my paintings have been analyzed by your algorithm. I was a bit surprised, but now I understand how perverted this art world is. Of course, I didn't accept this money. But what we did instead, we went to the Van Gogh Museum and we talked to art experts there, because I was interested in the question: if you see this, how do you know that's a real Van Gogh? And you know what the answer was? It's the rhythm. Do you see the rhythm? It's the rhythm of Van Gogh. And rhythm is something I associate with drumming or music or something. But he said, it's the rhythm of the brushstroke on the painting. That was the way that one of the experts at the Van Gogh Museum recognized the features of Van Gogh.
Now, initially we used what I call traditional machine learning. So for those of you who have had statistics, it's like logistic regression, and then more sophisticated things. Now, in recent years, we of course have had the so-called deep learning revolution, which I will come back to in a moment. And it boosted the prediction performance, and also maybe the modeling power, of these systems for seeing how our visual system is doing this.

Now, the whole controversy started with a question about Van Gogh's sunflowers. There are three copies of the Van Gogh sunflowers, and one of them was considered to be a forgery, according to an amateur expert. And it was quite a thing, because the Yasuda insurance company in Japan, one of the major funders of the Van Gogh Museum, had bought that alleged forgery for several million dollars. So it was a very painful thing. And then I discussed this with the Van Gogh Museum and said, wouldn't it be nice if you had a machine that could determine the authenticity of a painting? And they said, yeah, but that's impossible. I said, yeah, it might be impossible, but we could give it a try, couldn't we? So we wanted to give it a try, and we hired a PhD researcher. And one of his first actions was that he was on the front of the arts supplement of the New York Times, because, as I said, this is a very media-savvy thing. But we didn't have anything; we had no results, just the idea. And so the idea was to use computer vision technology, a special technology to analyze images, to see if we could find cues, like the rhythm, to demonstrate that a painting was real or fake.

And you have many Van Gogh paintings, not as many as I would like to have, but there are many of them. These are two copies of the same scene created by Van Gogh. And what we know is that if you happen to live in Brabant, which is this province, there are a lot of people that seem to have Van Gogh paintings that they find in the attic or something, and they bring them to the Van Gogh Museum. And if you do that, you get expert judgments. And expert judgments are letters. So suppose you have this million-dollar painting in your home, you send it to the Van Gogh Museum, and the expert assessment could be: it has a rapidity of execution and a lack of hesitation, which is a kind of attempt to describe the quality of the painting. Or: it has the rhythm of Van Gogh, as I said before. If you're not so lucky, then you get a letter that says: it has an unnatural blue hue, which is not characteristic of Van Gogh. And all these people that were frustrated by these messages from the Van Gogh Museum, they came to us and they showed us their paintings, and we made digital scans of their paintings. And so we have a large collection of non-Van Goghs, although they believed they were Van Goghs.

Now, this characteristic feature of Van Gogh is this rapid execution and this repetition of brushstrokes. And this is a famous painting, because it shows this rapidity of execution and this rhythm of brushstrokes. But this is actually a forgery, a very well-known forgery. It's one of the Wacker forgeries, named after Otto Wacker, who was actually, I think, the brother of the real forger, and who tried to sell these paintings in 1932. It was a whole series of paintings. And of course, people were very eager to get a Van Gogh, because that is value. So actually, this has nothing to do with art. This has to do with money, this capitalist tendency. But that's an important factor in the art world.
Now, what we discovered is that if we compare all these, you see here six versions of paintings by Van Gogh and one Wacker forgery, then with relatively simple statistics, and I won't go into the details, but something very simple to do with computer vision algorithms, you could easily detect the forgery. We collaborated with two US-based teams and the Van Gogh Museum, and we found that this was indeed a fake. And it turned out that with very simple means we could show that all these Wacker forgeries were fakes, so that gives a kind of objective measure of the painting. Of course, we all realize these methods do not understand the painting at all. They just extract statistics from the visual structure. They don't look at the underpainting. They don't look at the canvas. They don't look at the aging of the wooden frame. They only look at the visual pattern. So what I always say is that these algorithms, whatever they can do, can never outperform an expert who looks at the actual object, because sometimes the thickness of the paint is an important clue, and of course we don't use that. Now here you see a picture of the trial that took place in 1932, where it was decided that this was a forgery, and now we have technology that can do that. Well, this was in 2009, I guess? No, around 2005. And with that technology you could already do this, but unfortunately it didn't work that well for all paintings. One experiment we did with these two other teams in the US: there was a TV program in the US called NOVA ScienceNow, and they wanted to hire a British forger to create a forgery, and the task for us and the US teams was to automatically detect the forgery. But it turned out the Van Gogh Museum did not allow that. So they hired somebody, Charlotte Caspers, who created this copy. So this is the copy. Oh, wait a minute. And this is the original. Do you see the differences? Of course there are subtle differences. This had to be done in a hurry, so when we were scanning these paintings, the new one was still wet. The paint was still wet, so it was not a completely fair test, but later on, using the same methods as for the Wacker forgeries, we were also able to show that this was not a real Van Gogh. And if we look at this, these are details that are hard for us to see, but apparently a stupid digital algorithm can pick them up, although of course it lacks knowledge of the painting. Maybe it even works against us that we recognize the similarity of the scene, and because of that we miss the subtle cues that reveal the authenticity or lack of it. Now, as I said, we had a revolution. We have had many revolutions in AI lately, but about ten years ago we had the so-called deep learning revolution. That's the revolution in which Google and all these other companies became able to automatically label pictures. Before that you had to find pictures by matching their descriptions, but for about ten years now we have had technology that can do this automatically. The same goes for speech recognition, which had an enormous boost, and it was all thanks to deep learning. With the earlier technology, we achieved a typical authentication performance, looking at all the Van Goghs and other artists we applied it to, of about 75 to 80%. And with the advent of deep learning, we moved to these convolutional neural networks, sometimes called CNNs, not to be confused with the broadcaster.
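The kind of "relatively simple statistics" described here can be sketched as an outlier test: compute the same statistics for every version of the scene and flag the one that deviates most from the rest. A minimal illustration, assuming NumPy, with placeholder arrays standing in for the real digitized paintings.

```python
# Hypothetical sketch: among several versions of the same scene, flag the
# version whose simple texture statistics deviate most from the group.
import numpy as np

def texture_stats(image):
    gy, gx = np.gradient(image.astype(float))
    grad = np.hypot(gx, gy)
    return np.array([grad.mean(), grad.std(), image.std()])

rng = np.random.default_rng(1)
versions = [rng.integers(0, 256, (128, 128)) for _ in range(7)]  # 6 + 1 forgery
F = np.stack([texture_stats(v) for v in versions])

# z-score each version against the group; the largest total deviation is
# the outlier candidate (the alleged forgery in the anecdote above).
z = np.abs((F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9))
print("most deviant version:", int(z.sum(axis=1).argmax()))
```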
And what these networks do is take an input, an RGB image as it appears on your screen, and process it through different stages, and each of these stages is looking for small details in the image. This is a more sophisticated one, but that's a bit intimidating, so here is an animation that maybe makes it clearer. Assume that on the left is the image. Images have pixels, and each pixel has a value. Actually, color pictures have three values, but let's assume it's a gray-valued image. Then each value gives an indication of the intensity: zero is black, and 255, depending on the scheme, is white. So the larger the number, the brighter the pixel. And this three-by-three square that you see moving over the picture is a kind of feature detector. It looks for certain patterns of numbers, in visual terms, visual patterns, and it measures these patterns anywhere in the image and then sends the result to the next stage. Now the question is: what kind of patterns should it look for? The idea with deep learning, with these systems, is that they automatically learn which patterns should be detected. And they do that by being given a lot of examples, for instance of dogs and cats, and then they pick up the specific features that are characteristic of dogs or cats; or you can show them a lot of fake and authentic Van Gogh paintings, and then these systems learn that automatically. Now, one of the problems many art experts have is that for many artworks not one but multiple authors contributed. Here you see an example of a painting from 1615 by Peter Paul Rubens and Jan Brueghel, who were good friends. Peter Paul Rubens painted the Adam and Eve, and the tree and all the small animals and details were painted by Jan Brueghel. And we did an analysis on this one; it's actually very easy to show which part was painted by which painter, because they used different brushstrokes. Of course, if you paint small details, you use a smaller brushstroke. So that was, well, almost trivial to do. But we also worked with the so-called Rijksmuseum database, which is a huge database of digital reproductions of artworks. And my PhD researcher, Nanne van Noord, applied his convolutional network, so deep learning, to this data set to see if he could automatically determine, for instance when given a painting by Rembrandt, that it gets associated with the proper author and not with Van Gogh or Monet or someone like that. And it turned out he was very well able to do that. This is, I think, almost ten years ago, so it has already improved since, as I will show in a moment. What you can see here is that the diagonal indicates all the cells where the predicted artist corresponds to the actual artist. So the more coloring on the diagonal, the better the performance. And this is more than 80% correct on average. Now, one of the examples in this data set was an image created by father and son, Jan and Casper Luyken. And what Nanne did here was perform an analysis of who was responsible for which part of the artwork. And this was the result. So Jan, I think Jan was the father, is yellow: the yellow parts were created by Jan Luyken and the blue parts by Casper Luyken.
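That sliding three-by-three "feature detector" can be written out in a few lines. Here is a minimal sketch of one convolution stage, assuming NumPy; the hand-made edge kernel is purely illustrative, since in a real CNN the kernel values are the parameters learned from examples.

```python
# Minimal sketch of one convolution stage: a 3x3 kernel sliding over a
# grayscale image (0 = black, 255 = white), producing a feature map.
import numpy as np

def conv2d(image, kernel):
    h, w = image.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            # response is large where the local 3x3 patch matches the kernel
            out[i, j] = np.sum(image[i:i + 3, j:j + 3] * kernel)
    return out

# A hand-made kernel that responds to vertical edges; in a trained CNN
# these nine numbers would be learned automatically from examples.
edge_kernel = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]])

image = np.random.default_rng(2).integers(0, 256, (64, 64)).astype(float)
feature_map = conv2d(image, edge_kernel)  # sent on to the next stage
print(feature_map.shape)                  # (62, 62)
```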
So this was an early attempt to show that you can visualize the model created by this deep learning algorithm, responding to specific features of each of the artists within one painting. And this is the way you should imagine art experts and art historians using this technology. Because nobody is interested in a machine where you put a painting in and it says, oh, this is authentic, or, oh, this is a forgery. You want to know why. And this is one way to visualize what the network is actually doing. The next step, of course, is generating explanations. These could be visual explanations indicating which parts or which features are important: it could be the brushstroke, it could be the pigment that is used, et cetera. Now, the tendency in deep learning is that these networks become bigger and bigger. You're all scientists, so you know about Occam's razor. Who doesn't know about Occam's razor? Nobody dares to raise... ah, there's one. Very honest of you. Occam's razor is the principle that if you want to explain something, a phenomenon in nature or wherever, you can come up with a very complex explanation. For instance, I see some light moving in the sky. Then I can come up with the explanation that these are aliens coming to visit Earth, which is a possible explanation, but it's not very likely, and it requires a lot of assumptions. Whereas if I say, oh, that's a meteorite, that's much more likely. So Occam's razor says, in terms of statistical modeling: if you use a statistical model to describe data, use the smallest number of parameters needed. The smaller the better, as long as it still describes the data, of course. And the same is true for explanations: stick to the simplest explanation, the one that requires the fewest assumptions. Now, these CNNs, these convolutional networks, have enormous numbers of tunable parameters. These parameters are set automatically, and they define what features are extracted from the images. And here you can see the number of parameters in millions. It's millions of parameters. I once gave a lecture to an audience of statisticians, and you could see on their faces that they hated this amount of parameters. This is not what you do in statistics; it's everything that is forbidden there. But AI researchers do not obey that law; they just tried it, and it turned out to be very successful. So there's an enormous tendency to increase the number of parameters. And the same is true, for instance, for things like ChatGPT and all these large language models. There's a tendency to grow these numbers from billions to trillions of parameters. And the assumption is that if you keep growing these models, they will become better and better. Now, you might all have heard about GPT-4. It's not clear how many parameters GPT-4 has, because it's not open. Up to GPT-3 it was kind of open: we knew the size of the networks, and it was growing. But for GPT-4 we don't know. It's hidden, according to OpenAI, which is a strange name for a company that is not very open. According to OpenAI they do that for security and commercial reasons; whatever the real reason, these larger networks perform better. And this is an example of one of them: each module contains lots of parameters, and these are very, very complicated models. And what we are trying to do is apply these models to authenticating paintings.
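Those parameter counts are easy to inspect for yourself. A minimal sketch, assuming a recent PyTorch/torchvision install (for the `weights=None` constructor); the chosen architectures are well-known examples, not necessarily the networks used in the project.

```python
# Minimal sketch: counting the millions of tunable parameters in standard CNNs.
import torchvision.models as models

def count_params(model):
    """Total number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

for name, ctor in [("resnet18", models.resnet18),
                   ("resnet152", models.resnet152),
                   ("vgg16", models.vgg16)]:
    n = count_params(ctor(weights=None))   # untrained network, random weights
    print(f"{name}: {n / 1e6:.1f} million parameters")
```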
For instance, we partition the image, or we use the whole image; we experiment with that. This is part of a collaboration with a Swiss-based company that started, I think, six years ago. They called me to ask whether it would be okay if they commercialized my project. And I said, yeah, good luck. I'm not interested; I'm a scientist, not an entrepreneur. So they did that, and they earn a lot of money with it, and now we collaborate. I don't get money, and I don't want money for it, because I don't want to get involved in all these discussions about authenticity. But we collaborate because they have the data, new data, and we have developed the algorithms. And using the largest network that I just showed, we got 85% recognition accuracy, which indicates how powerful these algorithms are. There are a lot of pitfalls and caveats I could mention about the validity of these kinds of classifications if you are really interested in authenticity. But as I said at the beginning, I'm not interested in authenticity as such; I'm interested in the pattern recognition capabilities of humans and how we can approximate them. Now, the latest kid on the block, responsible for all the fuss that you now read about, is generative AI, and GPT-4 and all the GPT systems are an expression of that. They're based on transformers, and transformers are specific modules, not very complicated. Last week I even managed to explain them to politicians, so I think you could follow too; I won't explain it now, but it's quite simple. These models are called large language models, and they are trained by means of self-supervised training. In the textual domain that means you present part of a sentence and they have to predict the next word. You might have heard about that, because it's at the heart of all the discussions. They originated in the language domain, but now we have variants in the vision domain, and they have led to great strides there as well. The vision transformers, as they are called, take partial images instead of partial sentences and then try to predict the rest of the image. And by using them, and this is an example, the Swin transformers, you can do segmentation, detection, and classification, and we achieved about a 10% improvement in accuracy in distinguishing forgeries from authentic artworks. And this is a kind of overview: we had traditional machine learning up to around 2009; then, over roughly the last ten years, these convolutional networks; and now we have the transformers, both in language and in vision. And they lead to an increasing problem in terms of authentication, in determining whether something is authentic. If you step away from the art domain and look at fake videos or fake images and fake text, it's getting harder and harder to distinguish them. And the latest development, of course, is the diffusion models, again generative AI. If you look at this example, these systems are trained on images to which you add noise, and each step of becoming noisier, as shown in the top panel, is associated with training a model. So the model learns how a sharp image becomes more and more fuzzy, but also the reverse direction, and that's what you see in the bottom panel. The idea is you go to complete noise, Gaussian noise for the experts among us, and then you use the learned model to create a new image.
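The forward, noising half of that process fits in a few lines. A minimal sketch, assuming NumPy, with an illustrative noise schedule; the hard part, the learned reverse direction that generates new images, is deliberately left out.

```python
# Minimal sketch of the forward (noising) half of a diffusion model:
# each step mixes the image with a little more Gaussian noise.
import numpy as np

rng = np.random.default_rng(3)
image = rng.random((64, 64))           # placeholder image, values in [0, 1]

T = 10                                 # number of noising steps
betas = np.linspace(1e-3, 0.3, T)      # noise schedule (illustrative values)

x = image
for beta in betas:
    noise = rng.standard_normal(x.shape)
    # DDPM-style step: shrink the signal slightly, add scaled Gaussian noise
    x = np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

# After enough steps x is close to pure Gaussian noise; the trained model
# learns to run this process in reverse to create a new image from noise.
print(round(x.mean(), 3), round(x.std(), 3))
```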
And these diffusion models are being used in DALL-E and Midjourney and Stable Diffusion, all these things. It's now also integrated into GPT-4, so you can ask for an image and it generates something according to your prompt. Now, this means you can also generate artwork. Here's an example; I think this one is from OpenAI's DALL-E system. And this is what you can create with Stable Diffusion. It's not the best, but it gives an idea of what it can do. The only prompt given here is "Sunflowers by Van Gogh". Now, I don't think there would be an expert who thinks this is a real Van Gogh, but if you do this multiple times and tune it a bit, sometimes you get things that are really hard to distinguish from a real Van Gogh. Of course, it's only a 2D image; it's not a real painting. Here you see another one, a self-portrait, which actually looks very much like a real Van Gogh. So it's easy to create authentic-looking images. And that means the boundary between what is real and not real is shifting. Personally, I'm not concerned about that with respect to artworks, because I think this whole discussion about what is authentic or not is not really meaningful; the expression in the painting is what is authentic. I had this discussion with an art historian from the Van Gogh Museum, and they said, it's not just the painting, it's the whole story around it that matters. And of course we do chemical analysis and try to determine the aging of the painting, et cetera, but it's the whole artwork that makes it unique. And I think that point generalizes a bit to all domains that AI is invading. Ultimately you can generate fake faces, as we just saw, or fake stories, but authenticity is something much more important for humans. So I think the fate of authenticity in the AI era will be that the amount of fake text and images and videos will grow rapidly, and authenticity will increasingly be associated with the real physical object. Of course you can visit a museum on the internet, but it's different from really looking at the Mona Lisa, although it might not be the real one hanging in the museum, but that's another story. And trusted sources become more important. In this age of disinformation you can get all kinds of stories, but you have to rely on trusted sources. This is one of the challenges our society faces. On the positive side, I think this generative AI is a great tool in the hands of human artists and scientists to enhance their capabilities, and there are already great examples of that. And I think the proper integration of generative AI, whether in the art world or in science or in society, is a multidisciplinary effort. This is not something for AI researchers only; every discipline should contribute. So whatever you study and whatever you're interested in, it's unavoidable that this will change our society. Not that we will be taken over by computers, I'm not Elon Musk; it's a slower process, but an important and very disruptive one, and we need experts to guide it. Thank you very much.

Thank you so much for this talk; I found it very interesting. We will now open up to questions from the audience, so if anybody has a question, feel free to raise your hand. How do you think that the prevalence of AI in the fine arts will affect the art market in the future?
And I know that artists like Rothko and Pollock are, some would say, objectively talentless, because anyone could make that stuff, but not anyone made it; they made it. Exactly. The fact that they made it is what we as a society ascribe value to. But as the market of fine art becomes more and more saturated and more and more AI-assisted, how do you see that impacting the art market? I think that, for the reasons you explain, I don't see big effects, because I think authenticity will be valued even more than it is now. Of course there will be a lot of fake art, et cetera, as there is now, by the way, but these systems are not yet capable of creating artifacts in 3D. Of course you could use a 3D printer, but the aging, et cetera, is very hard to reproduce. So I think authenticity will become more important. On the other hand, there will be many artists, and you already see this among graphic designers, who will experiment with this technology, and it will become yet another tool in the hands of artists. But that's another branch of art. So for the fine arts, I'm not afraid it will have a major impact. Okay, maybe one more question from the audience. Have you ever put a Van Gogh like the one you presented before, the fake one, to the test? To whom? Well, you told us about this way of predicting whether something is a real Van Gogh or not, and I could imagine the technology becoming so good that a fake Van Gogh could be made exactly the same way. Yeah, there are practical problems with that, because Van Gogh experts know all the paintings in circulation. But the other side of the story is that some paintings are alleged to be authentic and might not be. And we have some where we see that they really deviate from the other ones. Well, maybe he drank too much that day, you don't know, but at least there is a suggestion that something is odd with that painting. But as I said, this is a very sensitive area, because all these museums are very scared of having to devalue their paintings. And it's deliberately my choice to stay away from that, because I'm a scientist; I'm not interested in those things, and that's why I'm happy to have this company taking care of it. Of course they also make a lot of money, but that's their business; I don't care. Okay, thank you.

Okay, great. Before we head into the panel session segment, I would like to see what our AI, our generator, was up to. So let's see if it did anything. We have some very fancy ones. This is, I think... okay, this one is very disturbing. But yeah. I will say, though, that despite how obscure a lot of these are, the crabs are in ultra high definition for some reason. But yes, we will now be heading into our panel session, so I would like all the speakers to come on stage. The first question is this: a common topic that comes up when discussing authenticity is deepfakes. Deepfakes are commonly used in ways that spread misinformation, such as fake news, or are used in the defamation of public figures. I should have phrased that better. Do you think that there are beneficial uses of deepfakes, and how could they be regulated and used in ethical ways? It's a mouthful, but, I don't know, maybe. Yes, on the first part I can say yes, there are good uses for that. For instance, we now train doctors by letting them interact with fake patients. That's one example. Media, movies, et cetera, all kinds of things. So I can think of a lot of good uses of deepfakes. And the second part was how they could be regulated?
Well, my general answer to questions about how things should be regulated, and I'm not an expert on that, is that it's a multidisciplinary effort to think about how we should deal with it: what should we prohibit, what should we facilitate? But I'm not an expert in this domain. Anybody else? Maybe you can add something? So, just to add on this part: there's a project currently going on in Delft related to the restoration of older movies, especially for visually impaired people. So say it's a really old movie from the 1960s; if you use deepfake techniques on just the audio, you can bring the 1960s soundtrack up to modern surround-sound quality. Which makes it really interesting, especially for people who have never seen those films, and especially for visually impaired people: you can at least properly experience the sound of it. So that's an interesting recent project. Okay, does anybody in the audience maybe want to ask something related to this? Do you think that deepfakes might die out after some time? Because I remember back in the 2010s there was a big fuss about holograms being a thing, and how artists that have died could still give concerts as holograms and things like that, but that never got anywhere. So do you think this might also not go anywhere, or is it here to stay? To try and answer that question, I'm going to ask you a question: how many concerts have you been to with a holographic artist? Zero. How many have you heard of? I remember Tupac, God rest his soul; Tupac did one. Michael. ABBA did one as well. ABBA, Michael Jackson. Queen? Michael Jackson? Yep. I think that with the hologram, I mean, everyone wants the whole Star Trek deal, right? But technically, if we're talking about the technology, it's extremely expensive, and not a lot of work has been done to commercialize it and make it more accessible to a wider audience. Deepfakes, on the other hand: the technology required is largely software, right? You don't need certain types of projectors or certain types of glass that allow the holograms to manifest themselves. I said that like they're ghosts, but you know what I mean. And so I think, for that reason among many others, deepfakes are going to be around, and I think more attention needs to turn towards the danger of them. And I know that everyone likes to talk about the dangers rather than the benefits. But let's talk about Facebook, for example. All the boomers on Facebook who use Facebook to inform themselves of their political opinions and then vote accordingly. I would argue that a lot of people who interact with fake news have a suspicion that it might not be quite truthful, but are still inclined to share it, agree with it, and defend it, because it's a way for them to further entrench themselves in their political microcosm, their political bubble. And looking at how technology like this has the power to drastically change democracy, or whatever other political system you subscribe to, reveals a very, very looming threat. Those of you who are on TikTok and Instagram may have seen those videos of Joe Biden, Donald Trump, and Obama playing Minecraft, right? You know what I'm talking about? Where they've all got the headsets on, and you can hear their voices saying things they wouldn't normally say. Biden is unusually coherent.
So we laugh about it and look at it from a recreational perspective, but there's a very insidious layer to this technology, and we only see, I'd say, the tip of the iceberg. I'm sure there are companies and projects going on unbeknownst to us that are extremely dangerous. So I think that deserves a lot more attention. But to answer your question: I don't think this is a fad. No, I think this is going to stick around. Okay, great. With that, we'll move to the next one. The Future of Life Institute has just called for a six-month pause in the development of AI. This is to ensure that powerful AI is only developed once we are confident that its effects will be positive and its risks will be manageable. What risks do these fast-developing tools such as ChatGPT pose from an authenticity standpoint? First of all, this moratorium is a bit of a silly thing. And actually it's very strange, because I saw the list of people who supported it, and one of them is Max Tegmark, who is a physicist. I heard an interview with him, and it appeared that, and this is not clear from the open letter, he's actually doing this on behalf of researchers in these big companies who are pressured by the companies to build bigger and bigger models without first studying what these models are doing. So actually I'm more on the line of Yann LeCun, one of the godfathers of deep learning, who says that what we should perhaps limit is the products that are thrown into society, but not the research. Actually, we need research to improve all this. That's not an answer to the question, but it's something I wanted to say. Just to piggyback off your point: pausing research, in my opinion, doesn't make sense. Don't stop learning. If you want to limit what's going to be available to the public, that's fine. But if you slow learning down, or shut learning down, you're not doing a service to anybody. Yeah, I totally resonate with both of your points, and to come back to this authenticity standpoint: whatever information we get out of ChatGPT, we believe to a certain extent because it matches what we already believe. But the point is that it doesn't explain or give any indication of where the information is coming from. If it could at least point to the source, to where exactly the information comes from, then this question of authenticity could be explored more. But you never know with this. For example, what I did with ChatGPT was just try to find beginner books related to Ayurveda. There's barely any data on that, right? Whatever information it gave was totally wrong. So if the data isn't there, then from an authenticity standpoint you already know: okay, all the information coming out in this one particular context is badly outdated. And at the same time, think about so many relevant books, from the medicine perspective or from the perspective of scriptures from the global South: none of that information is available in the training data of GPT-3. I'm not talking about GPT-4; of course that's not available there either. So that gives us a clue: is it really authentic? Because if there's no data behind it, how can it be authentic in that sense? Okay, thank you so much. Anybody in the audience?
About this question of authenticity: I think the loss of an agreed-upon authentic reality is one of the things we can all agree deepfakes have the power to cause. Specifically, given that the power over AI is concentrated in such a small minority of people, how could we deal with that loss of a solid definition of authenticity, and how could we mitigate such effects? I want someone else to answer this, but before that, I think it's important to remember that we're living in an age where data is the new oil; data is the most valuable commodity in the world. And similar to capital, to finance, the 1% of people who are in charge of, supervising, or owning these AI ventures and tech ventures will have the loudest mouth and the biggest seat at the table. I don't inherently trust anyone who has a billion dollars. I don't inherently trust anyone who has built a company to a value of a billion dollars. Because in order to reach that amount of wealth, whether financial or data-based, you have to step on people. And so when it comes to this Future of Life Institute calling for a pause and so on: someone has to fact-check this for me, but I remember reading an article stating that Elon Musk, as soon as this happened, registered a company in Nevada for AI development. So in my mind: shell company, keep going with the research, but distance it from your original project. Again, someone has to fact-check this for me, but it sounds in character. And I think it's extremely important to keep in mind that, when it comes to these issues, there is an information monopoly that is as insidious as a financial monopoly. The people who are in charge, the people who are in power, the people who hold these cards, are going to use this technology the same way they would use money: to make the gap between them and the people below them much wider. So there is malintent, I think, in a lot of the big fish in this pond. Thank you. That might be true. I have no strong feelings about it; I know they're in it for the money. But I think it's more important to focus on what we can do against this. And fortunately there are already initiatives. Open Assistant, for example, is a system like ChatGPT, not as powerful yet, but carefully curated data is fed into the model. They rely partly on LLaMA from Meta, but there are initiatives where more open approaches to these systems are being developed. And one of the good things is that, although GPT-4 is not entirely open, the innovations they used are minor. I think it's a matter of a few years before we have similar systems that are openly accessible. And I think that's the route for Europe, or the Netherlands, to focus on. We somehow have to get rid of this dependency on these big tech companies, for the reasons just explained, and focus on a more solid approach. One of the reasons we have GPT-4 is the Wild West approach of the US and the people there, but in the long run we need more reliable systems, and that requires a different approach. And I think Europe and the Netherlands can contribute to that. So I'm a bit optimistic from that perspective. Okay, great. And now we have the last statement. Discrimination has been a problem in our society for hundreds of years. Even today, racism and discrimination against members of the LGBTQ+ community are extremely prevalent.
What is the role of digital media in worsening or bettering this social issue? Now, the reason we've added this is because a lot of different things come together here, like propaganda made just to push a certain agenda, for example by the Republican Party in the United States. And there are many examples I could give that relate to authenticity and to non-authentic things posted on social media or any digital media. So that's the basis of this question. So, I think it's actually making our society better, because for the first time we can make explicit where these biases are. And one of the good things about GPT-4, well, there was an analysis by Microsoft people; of course there's a commercial link, but at least they looked for biases in these systems. And it was surprising, well, actually not so surprising, to see that systems trained on all this data from the internet have huge biases with respect to gender, et cetera. So for the first time we have systems that are as racist and biased as humans, but now at least we can pinpoint where the bias is and even intervene. You can also reverse it. I think many of these discussions have a hidden or implicit assumption that AI systems are autonomous, but they're actually very sophisticated statistical models. So they reflect our society, all the good and bad things about it. Actually, the foundation models at the basis of GPT-4 and other systems are terrible: they are fascist, they are racist, and whatever else. The contribution of OpenAI was to keep them in check by training them with, I believe, forty people. I don't know who these people are or exactly what they did, but they are trying to make the system better behaved. So I think that's another thing Europe could focus on: systems that have been analyzed and where these kinds of biases have been removed, which might be more complicated than we think, because we have different cultures and different viewpoints. So again, that's why I think technical people are not the only ones who should be involved; also people who have more understanding of cultures, politics, et cetera. I want to preface this by saying I'm not a technologist, I'm a legal academic. And as a legal academic, whenever anyone asks me a question, my answer is always "it depends". And it depends on a number of things. I use this phrase to the point where students gifted me a plaque that just says "it depends", which is in my office. But what is the role of digital media in worsening or bettering this social issue? I think it could go both ways. I certainly think that on social media, social media being a subset of digital media, it is much, much, much easier to find yourself in a microcosm. There was a study done not too long ago where academics wanted to see how long it would take for TikTok to lead you into the far-right pipeline. So this is the whole Ben Shapiro, Andrew Tate, Tucker Carlson, Candace Owens scene, again to use America as a yardstick; I really shouldn't keep doing that. And it took 200 swipes on TikTok, 200 interactions, and you were in red Republican territory. And so I think social media does this amazing thing where it allows you to find communities of people that you would never previously have had access to. For example, I have two very strange hobbies. One of them is Rubik's cube solving, and the other one is birdwatching.
I don't know any birdwatchers, and I don't know any Rubik's cubers in real life, but I have a community of people online where we share little secrets, and, hey, I took a picture of this bird, look how cool it is, and all of that. But then equally, if you are a raging antisemite living in, say, a Jewish neighborhood in Brooklyn, and you can't share your antisemitism with anybody around you, you log online, you find a bunch of other antisemites, and you say, hey, I'm home. So I think that in that respect social media and digital media are both extremely good at reinforcing this discrimination, and also very good at, in some respects, bleeding into each other. And I know people whose social and political opinions have changed wildly for the better, which in my opinion is a little more to the left, by virtue of social media: they'll be exposed to a newscast clip or something like that on TikTok and say, hey, I never thought about that before, and then they'll go down that rabbit hole and realize, wow, I was wrong about a lot of things. So I think it depends very much on how things are used, and all my answers to questions like this end up boiling down to education. I don't know if there are any gender-nonconforming people in the audience, and I won't ask you to identify yourselves, but how many people sign up for things online and, when gender pops up, it says male or female? And weirdly, male often tends to appear first, even though female comes first alphabetically. So there are two things happening here. One is the assumption that men will always come first. And the second is the assumption that anyone who subscribes to the service is either male or female, without taking into account intersex people, trans people, non-binary people, et cetera. Education will, I think, in the long term prevent these sorts of things from happening, or at least reduce the number of occurrences, because if someone is well informed about, say, LGBTQ+ issues, or about cultural, religious, or political issues, they will design systems in which these biases at least do not manifest themselves as overtly and as aggressively. So I think the answer to all of these questions is always education. Actually, I answered the wrong question, because what you asked about was digital media and what I responded to was AI. For digital media, this was already identified around 2009: the dangers you indicate are a great danger. I think we should be more worried about what digital media is doing to our society than about the AI hype. I just wanted to say that. I totally resonate with those points, as I mentioned before. And on this question of worsening or bettering, I'll link it to a personal story. Take the chatbot that Microsoft introduced on Twitter: it got shut down after just 24 hours, before it even had the chance to become popular. Many people came together and said, okay, it's raising issues of racism, it's raising issues of sexism and other things on top of that. But what happened behind the scenes was about the people. I know a friend who actually worked on that project at Microsoft in Seattle. His whole team of 15 people got fired.
It's the technology that takes the blame, but it's not about the data those people worked on; they were just doing their jobs, right? So when we're thinking about bettering or worsening, we also need to think at the level of the human perspective, because there are plenty of people working behind the scenes on these systems. And even after that, he jumped from Microsoft to Google. And what happened at Google with the responsible AI team they had? It was cut from about 100 people to only 15. So it is happening at that level, and the companies do not want to talk about it explicitly. But when something happens on social media, they're the first to react: yes, we are on top of it, and we do not want any biases going around there. So it's a two-faced approach. What happens at the ground level often just ends up in news reports and stories, so we also need to think about that. Okay, great. We are running a little over how long I wanted us to go, so is there anybody who really wants to ask a question? Okay, that one person, and then we will wrap up. Okay, so first of all, thank you for the lectures. Because we have been talking about authenticity, right, about new AI technology and what its effects might be on our ability to know what is true or not in the future: what do you see as the future of fact-checking? Will there be new technologies developed in the future specifically for fact-checking all this information? Haroon's crawling into a cave. There is somebody coming to the rescue: it's Elon Musk. Seriously, Elon Musk wants to build a system called TruthGPT, because he was one of the funders of OpenAI and he's very frustrated that it's not so open anymore. But he's a bit naive, of course. I think fact-checking in the future requires thinking about new institutions, the way we used to have respected institutions. When I was generating some answers with ChatGPT, I found things that I thought, this is not true. And I checked them on the internet using trusted sources. But as the public, you really need to know what the trusted sources are. I think that's where the future is. Not TruthGPT; don't trust it. And just to add to those points: the way fact-checking actually happens now is, I think, through annotators, especially people from the Southern Africa region, who get paid very little for it. They are the ones annotating whether something is a real fact or not, based on information available from official websites, right? And do we really need an ever bigger pool of people just to run these fact-checking approaches? I think the better solution is institutional reform. Those are the key things that might play a great role in this. I'd like to add very quickly, because I know we're out of time, that I'm not going to do the whole Trump-administration alternative-facts thing, where this fact is my opinion. I'm not going to do that. But we also have to consider that the platforms we use may be informed in some way by the political agenda of the people who run those platforms. A very good example of this is the Israeli-Palestinian conflict and how Palestinian and pro-Palestinian sentiment is often smothered or shadow-banned or in other ways restricted on platforms like Instagram and Twitter.
And I'm very aware that this talk is being recorded and is now a matter of public record, and I'm probably going to have to defend myself for many years for saying this. But Bella Hadid, for example, who is a model or influencer of some degree, posts a lot of pro-Palestinian material, and sometimes she gets ghosted, or she gets shadow-banned, or her content is smothered. And this is due in large part to the way a US-based company will see things that are pro-Palestinian, because the US has always aligned itself as pro-Israeli and Zionist. And things like this are going to inform and influence what is allowed to be perceived as truthful, and I think that's something we also have to keep in mind. I'm not saying that I could walk into this room tomorrow and say the sky is green, that's my opinion, so it's also a fact, because it's a fact that it's my opinion; that's not what I mean. But the people who pull the strings, the people who are in charge of these massive companies, who have these massive amounts of influence and who in so many ways deal in gross quantities of money, will have a stake and a say in what they allow to be disseminated and spread. So I think that's something that has to be kept in mind as well. Okay, great. Thank you, everyone, for being here. And I would like to thank all three of you for being here as well. I think we really achieved what we wanted with this topic, which was to approach it from a lot of different angles, and I think all three of you definitely contributed to that. So yes, everyone can clap.