Welcome. My name is Nortje van Huizen. I am an assistant professor at the Department of Cognitive Science and Artificial Intelligence, and today I will be moderating this symposium, which is organized by Studium Generale, this time in collaboration with no less than three study associations: SAPI from Philosophy. Where are you guys? There. Enigma from Cognitive Science and Artificial Intelligence. Over there. And Elsa from Law. Great. So welcome. We are here to talk about AI, and let me first add something for the students that are here. It may be interesting to know that you can receive a certificate if you go to five lectures from Studium Generale and write a report about that. More information is on the website. Okay. So AI is a hot topic. It's been in the news a lot. There have been many recent developments in the technology of AI, of course with ChatGPT, and now even more recently with work that suggests we can actually maybe read minds with AI. Today we want to address the question: how well does this technology work? Is this something that we should really be afraid of, or shouldn't be afraid of at all? Are we looking forward to this? Is this possible? Should we want this? We're going to look at this from the philosophical perspective as well as from the legal perspective: what kind of rights and policies should we put in place to actually regulate these kinds of developments? So we have invited three experts in their fields to talk to us about these different directions to think about, which hopefully will give us a good setting to get into a discussion with each other and see where we stand. So let me briefly lead you through the schedule. We will start with three lectures from the individual speakers, whom I will introduce to you shortly. Each lecture will be 20 minutes, with five minutes of Q&A for a couple of short questions.
After those lectures there will be an interactive panel discussion with all three speakers, where we hope to really get deeper into the topics, involve the audience, and maybe get some different perspectives on all these developments. So that is the plan for today, and I think right now is the time to introduce the first speaker to you. The first speaker is Harm Brouwer. You can come up. Harm Brouwer is an associate professor at the Department of Cognitive Science and Artificial Intelligence who works on the neurocomputation of language, looking at models of how language develops in the brain. He received his PhD cum laude from the University of Groningen, for which he also received the Clusco Prize for Outstanding Dissertation in Cognitive Science. And right now he is here, and he is going to talk to us about the neuroscience of mind reading. So nice to see you. So can you all hear me? Good. So: the neuroscience of "mind reading". You might see that I put mind reading in quotes, because when I was asked to do this, the very first question I asked myself was: what actually is mind reading? So you do what you do, right? You go onto Google, you check the dictionary, and I actually found the Apple dictionary to be the clearest on this. A mind reader, according to the Apple dictionary, is "a person who can supposedly discern what another person is thinking". Okay? So: discern what another person is thinking. The question that I want to walk you through today in my lecture is: can we use neuroimaging and artificial intelligence, the combination of the two, to discern what another person is thinking? In order to do so, I first want to introduce you to techniques from neuroimaging that we can use.
Then I want to introduce a couple of techniques from artificial intelligence, and then walk you through how, together, these techniques give us at least some progress on mind reading, all right? So let's start with neuroimaging. The neurons in our brain, our brain cells, communicate with each other using electrochemical signaling, and as it turns out, we can pick up the electrical part of that signaling using electroencephalography, or EEG, as you might know it. What we do is put electrodes on the scalp and basically measure electrical activity. So we get these ongoing voltage fluctuations over time, which tell us something about when things happen in the brain. Very loosely, we could say they can inform us about when thoughts happen. They do, however, do very poorly at informing us about where things happen in the brain. Another technique that you might have heard about is functional magnetic resonance imaging, or fMRI. What we do here is put people in this enormous, really expensive scanner, and what we exploit is the fact that neurons, after they have been active, need to replenish their energy. Like all cells in the body, they need glucose and oxygen, and glucose and oxygen are delivered to these active brain regions through the circulatory system, through the blood. We can measure that using the fMRI scanner: we basically measure the ratio of oxygenated relative to deoxygenated blood. And what we get are these cool maps of brain activity that tell us where in the brain activity is going on. So fMRI can inform us about where things happen in the brain, or where thoughts happen. This circulatory signal, this blood-flow signal, is however notoriously slow, so it tells us really little about when things happen.
But still, we've got these two techniques, and they are definitely the most prominent in the field. The question then is: given these neuroimaging techniques, can we use methods from artificial intelligence to tell us what thoughts happen? Can we discern what a person is thinking from the when and the where, that is, from the measurements we get from the EEG and fMRI? So assume that this is my brain, and this is my brain being active, my brain activity. And on the right here are my thoughts. What can AI do to help us do this mind reading? In a way, the most straightforward instantiation of mind reading is what is called decoding: taking the brain signals, which somehow represent my thoughts, and turning these brain signals into thoughts, which we can write down. So basically reading out my thoughts from the brain. People have tried to do this in AI; there are methods we can use to attempt it. Turns out this is hard. And it's not just hard, it's really hard. So another way we could go about it: we've got brain signals, we've got thoughts. Say that we had an idea about what a person is thinking. Could we try and predict their brain signals? So turn it around: can we go from thoughts to brain signals? This is encoding, a technique that in some ways I use in my own work as well. Now at first, this might not seem to solve a whole lot of problems, because it seems we would need to already know what people are thinking in order to predict their brain activity. But it turns out that there's one other revolution in AI going on nowadays that can actually make this encoding technique quite feasible. And if you've been paying attention, you can already see this one coming: namely, large language models.
Or more specifically, the generative pre-trained transformer: GPT, ChatGPT, GPT-4. I'm not going to give you an introduction to ChatGPT. Eric Postma has given an excellent introduction to ChatGPT for Studium Generale, and I really advise you to just watch his video if you're interested. But the core idea of what these language models are really good at is predicting how sequences, or sentences, unfold. So here we've got some context saying "mind reading is", and the language model can give us possible completions of this sentence: "mind reading is not possible", say with 40% probability; "mind reading is possible", with 25% probability; and so on. Now it turns out that exactly this combination of encoding, going from thoughts to brain signals, or from text to brain signals, with this kind of predictive language modeling is what underlies the paper that came out in Nature Neuroscience this year, called "Semantic reconstruction of continuous language from non-invasive brain recordings". This paper is part of the reason that we're all here; it has caused quite the uproar. So here we go. The New York Times: AI is getting better at mind reading. Nature: mind-reading machines are here, is it time to worry? The Guardian: artificial intelligence makes non-invasive mind reading possible by turning thoughts into text. De Volkskrant, in Dutch: reading thoughts in the brain scanner is possible, new research now shows. And finally, our southern neighbors are the least impressed of all; De Standaard goes, in translation: brain scans reveal a little bit of what you're thinking. But you'll be the judge of that yourself. So what I want to do is take this paper and see how they combine GPT and brain encoding, right?
The encoding techniques, that is, to do this mind reading, to discern what another person is thinking. And I want to walk you through all the steps and the machinery involved. What they did in this paper, led by the group of Alex Huth, is start from fMRI recordings, so this blood-based functional magnetic resonance imaging signal, of three subjects who each listened to 16 hours of naturally spoken narrative stories from podcasts. So we had The Moth. Is anyone familiar with that? Yeah. And Modern Love, the podcast. These are podcasts where people narrate stories. What they have then is both a massive amount of speech data, namely those podcasts, and brain recordings. And the trick now is: can we go from brain recordings to those speech signals, or at least to the language that's encoded in the speech signals? So far this is just the data: we've got speech signals, we've got brain recordings. The second step they take is to convert these speech signals into representations of meaning. And how this goes is: they take these speech signals and transcribe them, so that we've got written words. "I said, you know, what are you doing? He's like, oh, I'm, you know, working here painting during the summer." So: written words that convey what is in the speech signal. They then use GPT again to turn every word into a numeric representation of meaning. The word "painting", for instance, gets a couple of numbers associated with it; the word "working" gets a number of numbers associated with it. And the way to think about these numeric representations is that they encode meaning in some large space. For instance, in this space, the words "painting" and "working" might be closer together than the word "summer".
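The idea of words as points in a meaning space can be sketched with a toy example. The three-dimensional vectors below are invented for illustration; real GPT embeddings have hundreds of dimensions and are learned from data. Cosine similarity is one common way to measure closeness in such a space.

```python
import math

def cosine(u, v):
    """Cosine similarity: near 1.0 for vectors pointing the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Invented 3-d "embeddings"; real models use hundreds of dimensions.
emb = {
    "painting": [0.9, 0.8, 0.1],
    "working":  [0.8, 0.9, 0.2],
    "summer":   [0.1, 0.2, 0.9],
}

sim_painting_working = cosine(emb["painting"], emb["working"])
sim_painting_summer = cosine(emb["painting"], emb["summer"])
# "painting" and "working" end up closer together than "painting" and "summer".
```

In the actual paper the per-word features come from a language model rather than hand-picked numbers, but the geometric intuition, that related words cluster together, is the same.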
So this is a proxy for the meaning of these individual words. Okay, so then we've transformed those speech signals into word-by-word sequences, and we've transformed these word-by-word sequences into representations of meaning. What they then do is take the meaning representation associated with every word and try to predict the brain activity that's related to it. So they go from the speech signal, do the feature extraction that we just saw, and end up with these feature representations, these numeric representations, for every individual word in the sequence. On the other side, we've got the BOLD responses, the brain signals, that are associated with processing these speech signals, or processing these words. And they then construct an encoding model that takes us from these word-meaning representations to brain signals. Now, this is where the AI kicks in, right? This is AI. AI is able to get us from these feature-based representations, these word-meaning representations, onto these brain signals. And it turns out that the AI doing this is not some kind of black box or mystery machine; it's actually something that we all learned in high school. It's nothing more than: brain signal = A·x + b. Basically, the features, those word-meaning representations, those numbers, go into a linear regression model that you might all be familiar with, and we predict activity in given brain regions; we predict these BOLD responses. Okay, so then we've got a model. We've got sequences of words, we now know how to represent the meaning of those words, and we've got a model that can take us from these word meanings onto brain signals. So the final step then is to go from brain signals to thoughts, right?
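To make the "brain signal = A·x + b" point concrete, here is a minimal least-squares sketch, shrunk to one feature and one voxel with simulated, noise-free data. The real model maps high-dimensional word embeddings to many voxels, typically with regularized regression; this just shows that the machinery is ordinary line fitting.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one feature, one voxel)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Simulated "word meaning" feature values and noise-free "BOLD" responses,
# generated from a known line (a = 2.0, b = 0.5) so the fit can recover it.
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * x + 0.5 for x in xs]

a, b = fit_line(xs, ys)

def predict(x):
    """The encoding model: feature in, predicted voxel response out."""
    return a * x + b
```

With real BOLD data you would fit one such (multivariate, regularized) model per voxel, but the shape of the computation is exactly this.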
Say that we've got some measurement, some brain activity pattern that we measure in the scanner at, say, some time step t plus one, some intermediate time step where we're decoding, and we obtain this BOLD activity pattern: basically those little red blobs that showed up in the animation I showed you before. What we're interested in is discerning the meaning: what is it that this person is currently thinking about? Now say that thus far, prior to encountering this brain image, the decoder was maintaining two candidates. Either it was entertaining the idea that what we were decoding was the sentence "I saw a dog", or it was "I saw a big". What they then do is use GPT to propose continuations for these sentences: "I saw a dog with", "I saw a dog and", "I saw a big dog", and "I saw a big truck". Now we can use our encoding model and turn those into predictions about what the brain activity would look like. So we get different candidate brain activities that we can compare to the actual observed brain activity. And using some likelihood model, just saying, hey, what's the most likely brain activity pattern given the actual observed brain activity pattern, we can draw inferences about which of the sequences is best. Based on that inference, we conclude that "I saw a dog with" or "I saw a big dog" are the most likely candidates. And then this cycle basically repeats itself again. So it's driven by this encoding model. Okay, so that's how it works. So how well does it do? That's what we're all here for. And I see Nathan already commented on this during Nortje's introduction, so I know Nathan's opinion here.
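The decoding cycle just described, keep candidate sentences, extend them with a language model, predict the brain activity each extension would produce, and keep the best matches, can be sketched as follows. Everything here is hard-coded toy data: the continuation lists stand in for GPT, and the fake two-voxel responses stand in for the encoding model.

```python
def extend(candidate):
    """Stand-in for GPT: propose continuations for a candidate sentence."""
    continuations = {
        "I saw a dog": ["with", "and"],
        "I saw a big": ["dog", "truck"],
    }
    return [candidate + " " + w for w in continuations.get(candidate, [])]

def predict_activity(sentence):
    """Stand-in for the encoding model: a fake 2-voxel response per sentence."""
    fake = {
        "I saw a dog with":  [0.9, 0.1],
        "I saw a dog and":   [0.2, 0.8],
        "I saw a big dog":   [0.8, 0.2],
        "I saw a big truck": [0.1, 0.9],
    }
    return fake[sentence]

def score(pred, observed):
    """Negative squared error as a stand-in for the likelihood model."""
    return -sum((p - o) ** 2 for p, o in zip(pred, observed))

observed = [0.85, 0.15]            # the "measured" brain activity pattern
beam = ["I saw a dog", "I saw a big"]
expanded = [s for c in beam for s in extend(c)]
# Keep the two candidates whose predicted activity best matches the observation.
best2 = sorted(expanded,
               key=lambda s: score(predict_activity(s), observed),
               reverse=True)[:2]
```

Run over many time steps, this amounts to a beam search in which each beam is scored against the observed fMRI images rather than against text, which is why the decoder is really driven by the encoding model.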
So what they did is take these trained models and have the same three subjects listen to stories that were not used during the building of the encoding model, so novel stories. We can now try to decode those stories from the recorded brain activity and see how well the model actually does at decoding the actual stimuli. Here's an example that I'm going to walk you through. The actual stimulus: "I got up from the air mattress and pressed my face against the glass of the bedroom window, expecting to see eyes staring back at me, but instead finding only darkness." From the brain activity that was observed, this is what the model decoded: "I just continued to walk up to the window and open the glass. I stood on my toes and peered out. I didn't see anything and I looked up. Again, I saw nothing." Okay? And then they go and say: look, we've got certain words exactly correct, like "window" and "glass". The purple bits are the gist, so the gist of the semantics is correct. And the red words are straight-up errors. We've got a couple more of those examples, so here's one more: "I didn't know whether to scream, cry, or run away. Instead, I said, leave me alone, I don't need your help. Adam disappeared and I cleaned up alone, crying." And the model says: "Started to scream and cry, and then she just said, I told you to leave me alone, you can't hurt me. I'm sorry. And then he stormed off. I thought he had left. I started to cry." That example is actually not that bad, and actually quite impressive, given that they extracted it right from those brain recordings. Another way to look at this is that they compared the decoded stimuli to the actual stimuli through a couple of metrics that tell us how well we're doing. What we see here in those white boxes is chance level. And you'll see that for all subjects, right?
So these are basically word-based similarity metrics, and BERTScore is a semantics-based, meaning-based similarity metric, and we perform above chance for all subjects. So we're doing pretty okay. Okay, what about imagining stories, so not just listening to stories, but imagining them? They did another experiment where they asked the same three subjects to imagine telling five one-minute segments of Modern Love stories. And what you find is that you can do a cross-comparison of the correct story against the incorrect stories, and the decoded outputs are more similar to the correct stories, again for all three subjects. Here's a couple of examples. "Look for a message from my wife saying that she had changed her mind, that she was coming back. To see her for some reason. I thought maybe she would come to me and say she misses me." I don't think that one is really good, but still, it's better than the other excerpts. This one I actually like a lot more. Here they had people watching silent movies, from the Pixar catalog and Blender, I think. So here you have an actual stimulus; there's no sound, this is just what is being shown to them, and I think they were even asked not to verbalize it. And the decoder said: "She was very weak. I held her neck to get her breathing under control." Pretty good, I would say. Okay. So, can you resist this? This is basically a question for the other two speakers later on, but they looked at this as well. They did a resistance test, where they again had fMRI recordings of three subjects listening to one of four 80-second segments, and they cued them with a specific task: either there was no task, or they were asked to count by sevens, or to name and imagine animals in their heads, or to tell a different story in their heads. All right.
So what we get is that performance is best when there is no task, and especially naming and imagining animals significantly reduces performance across the board. So it seems to suggest that you need to cooperate with being decoded, or cooperate with having your mind read, if you will. Then, since this is heavily driven by the encoding model, they looked at whether an encoding model trained on me would work on Nathan. The way they did it was with fMRI recordings of seven subjects, each listening to five hours of stories, and they used different ways of aligning or treating the fMRI data, but the general idea is that they train on six subjects and then test on another one, to see if training on those six subjects can decode the brain of the seventh person. And this is crystal clear: doing it within subjects, so training and testing on the same subject, outperforms cross-subject decoding across the board. So it seems to be person-specific. So how well does it work? Okay, some observations. Thus far I've just been talking about their work; this is where my own thoughts kick in. First of all, the encoding models are subject-specific and they require cooperation. And in a way, it's decoding language, but it's only doing that through encoding. It's not going directly from the brain signals to the thoughts; it's using clever engineering to get there, I would say. And this is because of the fMRI: in fMRI, at least the way they set it up, they get brain scans every two seconds, and we measure the blood-oxygen-level-dependent (BOLD) signal, which is basically the degree of oxygenated relative to deoxygenated blood being ferried to brain regions that have just been active.
And this signal rises and falls over about 10 seconds in response to neural activity. So it's slow. And given that spoken English actually unfolds at two to three words per second, this means that each brain image is affected by about 20-plus words. So we've basically got an ill-posed inverse problem, because for decoding continuous language we have many more words than we have brain images to decode them from. The way they solve this is by continuously predicting the most likely word sequences and then comparing those to the actually observed brain images. But this means that the decoding model actually reduces to an encoding model, and this encoding model is subject-specific. Plus, they require large amounts of fMRI data, and they used a three-tesla scanner; the general rule of thumb is one tesla, one million. I don't have that at home, and we don't have one here; the hospital has one, I believe. And it's not portable, not easily accessible. So that's not going to scale up. However, let's do some philosophy, or some future thinking. What if I add one of these? Let me put this on. This is an Emotiv EPOC, a portable EEG system. This one is relatively small; it's still a bit clunky, I think, and it also doesn't sit really well. But say that we get this machinery: portable EEG hardware, portable EEG scanners that we can use to measure our electrical brain activity. And humans being humans, say that we're wearing this while doing our social media, and we're allowing our scanners, like we allow everything nowadays, like our watches, to connect to these apps, so we're feeding the data to big tech. What's going to happen then, right?
Rather than having really expensive fMRI participants, we've got brain recordings coming in all the time, because at our convenience we can order a pizza with our brains, and we're feeding this to the most reliable companies in the world, right? So I'll just leave that for you to think about. So, I know I'm going to have to answer this question; I'm going to put it up because it's horrible: can AI read your mind? I agree with Nathan that right now I would say no. It's very clever engineering, it's very well done, it deserves to be in Nature Neuroscience, it's extremely well crafted, but it's clever engineering. And the results, I think the video results, were impressive, but you often saw that it only sort of got the gist. But then again, this is a first step; it's going to be improved. And then consider the scenario that I just sketched: we've got this portable EEG hardware, we've got our phones connected to that hardware, and we've got our willingness to share this with these big tech companies that basically gather data for free. And taking into account that four years ago, if you had asked any top computational linguist whether ChatGPT's capabilities were realistic, they would have told you you were crazy; given that, and the pace at which this field operates, I'm inclined to say no, but also: not yet. Thank you. Thank you. Thank you very much. We may have time for one small question. As I said, we will have an interactive panel discussion later, but maybe there's a small question, maybe from the small guys there in the back. Which one? There was a hand there. "Can I play Zelda with my brain?" Now, that's the most important question, and I have now had so long to think about the answer. So my answer is: we're going to bring this home. I'm going to ask Faiza if I'm allowed, and we're going to start working on it. So let me translate: we're taking this home and then we're going to fix it. Okay.
There was one hand over here. "It's a really technical question, but I was just wondering: would the data gathered through EEG, through Emotiv's new technology, be equally useful for this mind-reading technology as the fMRI that they used?" So this is a really good question, and I think it's really an empirical question, in the sense that I don't know of any dataset that actually collected EEG data for these podcasts. So somebody should just try that. I've worked with EEG data myself, and it's extremely noisy. Just to give you an example: if you have an anomalous sentence, like "he spread his warm bread with socks" versus "he spread his warm bread with butter", and you just try to predict whether the person heard a sensible sentence or not, we get about 58% accuracy out of that. But then again, that's not combined with this encoding model and such. So whether this can be applied to EEG is just an empirical question, and I think it should be tried. The power of data. Well, it should be tried if you want to know, let's put it like that. Thank you. I think we'll move on to the next speaker. So thank you again, Harm. Next up is Nathan Wildman. Nathan teaches logic, philosophy of language, and digital aesthetics at the Department of Philosophy. He is also a member of the Tilburg Center for Logic, Ethics and Philosophy of Science. He received his PhD from the University of Cambridge and is a frequent and very welcome speaker here at Studium Generale. He was also nominated by the Tilburg School of Humanities and Digital Sciences for the Teacher of the Year award. So I'm very happy to introduce him here, and he will talk to us about... are your slides up yet? So first thing: can everybody hear me okay? Microphone working? Okay. So yeah, you guys can all figure out the title just from reading my mind, right? I'm thinking it really, really hard. No.
So basically, my title is: what's going on in there? Some thoughts on the philosophy of mind reading. Maybe the first thing to say is that I'm going to do the best I can; I'm not feeling terribly well, but we'll push through. The second thing is that it's really nice to be here; it's lovely to see you guys. So what I want to do is talk a little bit about mind reading. We already heard a little bit about it. I'll introduce it both as an everyday concept, in some sense of everyday concept, and then as a kind of technical notion, not just in this sense but more generally. Then I'll think about the kinds of technology that Harm was just talking through and whether it settles some questions or debates in philosophy of mind. The answer is going to be no, right? Then I'll think about some general ethical concerns and talk through those, and I want to finish by talking about something in between an ethical issue and a philosophy of science issue. So that's the plan, the crash course. Hopefully we'll get through it in a reasonable amount of time. I've obviously never given this talk before, so who knows? We'll see. So, first thing: mind reading. You are all excellent mind readers. We do this all the damn time, right? This is the kind of thing I do when I look at my wife and she gives me the look. You know the look. The look is: I have done something terrible and she is going to murder me when I get home. So mind reading in this very basic sense is literally just inferring another's thoughts based off of externalized signs and behavior, typically intentional, though not necessarily so. In the weakest sense, you can even see just understanding linguistic expression as a very, very weird kind of mind reading. But what we're more interested in is when you do things like reading off, from some very subtle gestures, what people are really thinking. Notably, with this kind of stuff, individuals and their thoughts are still inaccessible.
So what I'm thinking right now hopefully isn't coming through, right? We'll leave it at that. And also, this is, honest to God, from one of my favorite cartoons ever, Harvey Birdman, Attorney at Law. If you haven't seen it, go watch it. So that's everyday mind reading. But we're more interested in a special kind of mind reading. The idea here is that brain-computer interfaces and this sort of neural encoding and decoding might give us a new kind of ability, might give us more than just everyday mind reading, in that it might indicate when certain cerebral activity is going on and tell us how it correlates with specific mental states. And the upshot is that it can maybe make accessible stuff that was inaccessible before. So that kind of inner thought stuff: those of us who are outside your head can now suddenly look into it in some sort of way. And I should say this is basically, I think, well, certainly much better than chance at giving us correlations between certain mental states and certain neural states. The interesting question is really: is that still giving us something like access to the stuff that we couldn't access before? So the thought is that using this tech, we can gain access to another's thoughts, intentional states, emotional states, perceptual states. An interesting one might be memories; I think this might come up in some stuff we're going to talk about next. And, really interestingly, this could in theory be done with, or maybe without, permission. It's obviously the without-permission case where a lot of the ethical stuff is going to come in. And note, the thing we're really leaning on here is that first clause: in theory. In practice, it's really only going to work with permission. People have to be actively willing participants. Just think about how long they have to sit in the fMRI machine in order to get everything to work.
But interestingly, this tech, setting that aside for a moment, has the potential to offer new forms of communication, new forms of self-expression, and potentially to support lovely, lovely mutual understanding. And of course, it's got a massive potential for invasions of privacy, new forms of oppression, and, depending on the type of brain-computer interface, maybe even a kind of mind control. That is, if the BCI allows for both input and output, then we have some fun stuff that might happen. So I think this is really where the philosophical money is: thinking about how this might do us dirty. And I guess it's notable to think about what we've actually been able to do so far. Harm talked about this really nicely, but there have been quite a few studies showing that things are progressing pretty well. We can use this tech to track things like motor plans, visual imagery, facial perceptions, and speech, especially if it's actually vocalized or you're doing a sort of subvocalization kind of thing. There are some interesting results about decisions and intentions; this goes in with plans, I think. Some stuff about mental imagery. Actually, note how old that is already: 2008, forever ago. And we can have perceived and imagined speech; I think this is the big upshot of the recent study. The thought here is that there's some sense in which mental content, and all of this is a kind of gesture at mental content, can be read off, in some way, shape, or form, from the information the tech is giving us, the brain measurements we're getting. And the first thing I want to say about that is that it's really narrow in scope, even the success cases. How far we want to take the success is one thing, but even if we grant it's been successful in some cases, it's successful with three people. Three people who each had to spend 16 hours in an fMRI machine.
And one of the reasons why we're invoking large language models to help us out here is because before that what we had to do was really narrow the range of options people could talk about. The reason they could do this sort of mental imagery thing is because the sorts of things that we were allowing people to picture in their heads, there were like five options. It was very, very, very, very restricted. So even if it works at the moment, it's very, very small in the range. And I guess one reason to have a bit of fun here is there's no reason, at least prima facie, to think it's going to be scalable. In fact, there might be good reasons to think it's not going to be, given how diverse our various neurological makeups are. But I want to kind of put that aside. Let's just spot that it works. And I think it broadly does. Let's spot that it works and think about, cool, what does this tell us? At least for the philosophy aspects. So one thing you might think is, hey, maybe this stuff can help us settle age old philosophy of mind debates. So there's the familiar kind of mind body debate. Everyone familiar with this kind of thing? It's badly named. It really is probably more like the mind brain debate, but spot that. First thing to note, you might think, oh, cool, this data solves the problem. It does not. The information, the results we have, are perfectly compatible with pretty much any position you want to take here. You could be a physicalist. You could be a dualist. You could be an anomalous monist, to throw out a view that no one buys anymore. All you have to buy into is that there's some correlation between brain states and mental states. And pretty much anybody can buy that. And the same thing goes for thinking about the idea that, hey, maybe mental states or mental content could be reducible to neural states. Again, all you have to buy, all the evidence is giving us, is that there's a nice correlation between types of mental states and types of neurological states.
That doesn't settle any issues here. It's compatible with a range of options. So again, that's just something kind of cleaning house a bit. And there's maybe even one reason to think that there is some kind of distinction here between something like mental states and neural states. And that's roughly the kind of, we'll call it, perspective one has on them. Mental states are subjective and they have a kind of first person character. Contrast that with neurological states, which are kind of objective, right? And they're distinctly third person. You show me my brain scan, my neurological state at a time, cool. That's not going to necessarily correlate with what I was experiencing when I was going through the same mental state. So again, this is just quick and dirty. I think the big money is when we get to the ethical issues. So when we think about the ethical issues with this technology, the first thing that happens is a lot of people immediately jump to hypothetical future applications of it, rather than what it actually does right now. And I think, just between friends, frankly right now, it's not terribly worrisome, at least as it currently stands. You've got to be so invested and so involved in it that I don't really think it's raising anything terribly novel ethically beyond certain other technological problems. But of course, we're more interested in what happens in the future, what happens if it starts to proliferate or it's actually effective. And here I think there's a lot of fun stuff that happens. So one idea that maybe you might be worried about with these kinds of things is that there's something like a right to mental privacy. And the thought here is that, haha, your thoughts are your own. And only you should have access to them unless you choose to put them out in the world. And that's I think kind of plausible, especially if you also buy something like a right to cognitive liberty, which I think I might be talking about next, right?
So I won't talk about it very long. But the thought roughly with cognitive liberty is you want to be free from something like brain manipulation to think your own thoughts. These two go very nicely together. And the possibility is future hypothetical technology that lets us just eavesdrop on people's thoughts. Well, you can obviously see how it starts to undermine the first one, but it might also undermine the second one too. And the reason to think it undermines the second one is basically that because you're worried that someone might be eavesdropping on your thoughts, you feel, as Rainey puts it, unable to think your own thoughts. So there's a really nice analogy here, right? If you're a keen diarist and you become aware that your diary might be read by somebody else, you change what you write down because you don't put everything that you were going to put before because, you know, your little brother might come in and read it. So put more bluntly, reflective practices might suffer due to lack of mental privacy. You might not think the things that you feel comfortable thinking if you think maybe somebody might be eavesdropping on what's going on inside. So that's one worry. A second worry, and this I think maybe applies even with the current tech, right, but definitely with the future one, is that whatever content you might get out is going to be largely decontextualized. What I mean by that is you might be able to look in the head and think like, oh, hey, Nathan's thinking P, where P is some proposition. And you know that I'm doing that in some sense, but you don't necessarily know why. You don't know whether I'm reluctantly entertaining it, I'm enthusiastically entertaining it, I'm supposing it for some kind of reductio. The tech isn't going to give you the kind of context of the thought here. Now maybe you could use it over time and help to kind of unpack that, but at least the basic tech here isn't going to necessarily help.
And the thought is, kind of linking back with the earlier first versus third person thought, that the objective information we're going to get might be in some sense misrepresentative of what's going on. Okay. So again, I think the first one and the second one, that's very plausible to me. The third one was actually kind of mentioned already, right? By both speakers here. Right, so cool. Let's imagine that this sort of tech, I'm not going to pick it up because I don't want to break it. This would break the philosophy budget right here. This sort of tech becomes prolific and we start to get a bunch of big data about neuro profiles of people. Well, this sounds an awful lot like the kind of thing we had with Cambridge Analytica, but on steroids. And it's worth noting that this sort of micro targeting is probably not actually that effective, but if we had neuro profile micro targeting, I'm willing to bet that that's going to work, right? Because they'll be picking at not just your sort of inclinations, but literally the way everything's firing. That should be a great big worry. So I think those are interesting ones. The last kind of general one I want to raise, we'll talk about the very last one, but this last general one I want to raise is a kind of funny one that might happen if we use this mind reading technology. And it's a question about what we might think of as epistemic authority, basically who to believe. So suppose that you're reading my mind using this tech and I say P, but if you look in your little mind reader thing, it says I'm thinking not P, or thinking something that entails not P. Who do you believe? Do you believe me or do you believe what the machine's telling you? Now, I think that's genuinely worrisome because there's no reason to really trust me. I will sell you up the river as soon as I can, right?
But I also don't think there's any reason to trust the machine here because again, and this kind of ties back to the earlier decontextualization point, it might be that I'm thinking about not P because I'm saying, oh, I definitely don't want to do that. And I genuinely don't know how you would settle this. I have no idea even how we would begin to tackle that question. And that strikes me as a big worry here. This sort of epistemic authority is a genuine worry. Now one other little problem actually has to do with a sort of neuroprosthesis. So this isn't quite about what we're talking about, but it's very closely related. And the thought would be that we could integrate some of these sorts of brain computer interfaces into us to allow us to have other abilities or maybe replace abilities that we've lost for whatever reason. So an easy one here: for various weird reasons, I've gradually lost my sense of direction, it's getting worse and worse and worse over time. So one thing you might do is I might go and get a little navigator thing shoved into my brain, right? And the navigator could kind of replace my lost sense of direction. And one thing you might worry about in this sort of circumstance is where the thoughts are coming from. If I walk out of the building and I'm like, hey, the train's this way, or the train's this way, which one's my thought and which one's the navigator kind of bumping me? Now it's a slightly silly question, but I think this is the kind of thing we should be worried about again if we buy into these BCIs that allow for output and input. Integration worries, where thoughts might be coming in and we're not necessarily sure whose they are. That's the kind of thing I think should worry some of us. Again, super sci-fi. I don't think it's something to worry about too much, but once we're in this realm, cool, this is something to at least have in the back percolating away.
Okay, so the last thing I want to mention, the kind of last big worry, is a great big worry. And that's basically a problem about hype. So I don't have a super nice big slide of all of the article headlines. Here are some more from very recent ones, right? Just generally on this idea of AI mind reading, AI mind reading, AI mind, oh God, it's going to read our minds, oh God, it's going to read our minds. Now I actually think a lot of these are pretty sensible, but just think of the topic du jour and how crazy a lot of the scientific headlines go. I think this is a huge problem, and not just for this, but generally. Researchers are really under pressure to upsell their innovations. They can be doing really, really good things, but it's not enough to just do really good things. You have to do something that's earth-shattering, that's amazing, radically going to change and innovate everything ever. And as well as researchers being pressured, science journalists are also feeling a similar kind of pressure because they've got to get clicks, they've got to get views. So they write the big, shocking headlines because that's what's going to get people to click through and read it. And I think this is a genuine problem and the problem is impacting the quality of research. It's certainly impacting the quality of scientific communication, and it's an incentive problem, it's a funding structure problem. Incentives are designed to prompt groundbreaking novel research rather than workaday incremental science, but workaday incremental science is how science is actually supposed to be done. And that's bad. Overhype is bad. So I think we should have that as another little worry on top of these particular worries about how this AI mind-reading might work. And again, I want to stress this is certainly not unique to the stuff we're talking about here, but I think this sort of topic lends itself very quickly to this kind of overhype.
Okay, so to wrap up, I think the tech can do super duper, duper cool things. I'm just really not sure about scale and I'm not sure about all of the results being the things that we're being told they are. Second, very small one, this is more just sort of internal bits: I don't think the tech and the results settle many matters in philosophy of mind. That's fine. That's just not what it's after. I think future applications, the sort of future nebulous hypothetical how it's going to work thing, raise a bunch of major ethical issues. So kind of limitations of thought because of privacy worries, issues about decontextualization, neuro-profiling and selling off your mind plans, this worry about epistemic authority and possibly worries about integration. And then again, there's a lurking worry that funding structures lead to overhype that we need to be careful about. And the big question I guess I want to leave you with is something that I think, if we want to do this, we need to have an answer to, which is: why do we want this neuro technology? What's the point of it? What is it intended to do, exactly? What do we want it for? And what do we not want it for? And unless we have a good answer to this question, we should not be doing this. It's cool to do the research, but I think we should definitely worry about how we're going forward unless we have a clear answer to this why thing. Not so sure. So that's me done. Thank you guys very much. Thank you. Thank you so much, Nathan. That was very entertaining. Is there a short question at this point? Yeah, there, the guy in red. Hi, Dr. Nathan. I really miss the logic classes from first year. They were interesting. But it's great that you mentioned mind control, because sometimes this thought lingers in my mind: are we already living in a world where we are being controlled?
I wouldn't say exactly because of AI, I mean, but in a world where we are constantly using social media and food delivery platforms, and every piece of data, every single detail... So, let's say on Thursday at two o'clock, I always get hungry. So, you know, I get a notification: hey, you ordered pizza last week, so you might be hungry. So that actually is not controlling my mind, I would say, but it's a way of directing my mind to go in that direction. So this modern advancement, the strategies of marketing and so on, wouldn't you say that they are already a foundation of mind reading? Mind control, I would say. Yeah. Okay. So cool. I think that's broadly right. I mean, there's worries. So this is back to the top point. I think there's a lot of worries about stuff that people are familiar with, like nudging. Nudging is a huge issue here, right? Very simple. They put candy by the cash register because you're more likely to grab it, right? Instead of putting fruit next to the cash register, because no one's going to buy fruit, right? You're going to buy a candy bar, right? So there's a lot of manipulation there. I think what I was really trying to get at here was that the forms of manipulation and the strength of it seem so much more worrisome than this just sort of gentle kind of nudging thing. So I think there's something to the worry you have, but if we're running with this crazy hypothetical here, it's going to get even worse. Yeah. Thank you. Thank the speaker again. So we have one more speaker, which is Sjors Ligthart. So please come up. Welcome. So Sjors is an assistant professor of criminal law here at Tilburg University and also associated with the University of Utrecht. He is a graduate from our very own Tilburg University here, where he received his master's degree in criminal law, as well as his PhD, cum laude. So he specializes in the law and ethics of neuro technology in criminal justice.
And today he will talk about neuro technology and mental liberty: towards novel human rights for the mind. Question mark. Thank you. Can you all hear me? Yeah. Okay. Excellent. Yeah. Thanks. Thanks for the invitation. And indeed, I will provide the final perspective, which will be a legal perspective on the emerging neuro technologies that promise to read or alter our minds. And I will do so based on a currently ongoing debate on whether we need new human rights to protect our minds and brains against emerging neuro technologies. And what I will do first is briefly introduce how emerging neuro technologies may raise questions regarding human rights. And then I will zoom in on some current proposals to recognize new human rights that should specifically protect our minds and brains against neuro technology. And then we will briefly consider whether recognizing new human rights for the mind would indeed be the right answer to tackle the challenges of emerging neuro technologies. So, first question: why consider human rights and neuro technology? Harm already touched upon this question, and the answer lies in developments in neuro technology. Because technologies like these promise somehow to say something about what people think, feel or desire based on brain activity, and may also be able to manipulate certain brain processes in order to change certain mental states and ultimately behavior. Some examples Harm already touched upon: fMRI, EEG, deep brain stimulation and noninvasive transcranial magnetic stimulation. And these technologies are currently being used in day to day research and medical practice, for example to diagnose epilepsy or brain disorders, or perhaps to treat Parkinson's disease and depression. And that's all good. So far, so good. No significant or new questions about human rights.
However, these technologies can also be used for other purposes beyond the traditional, controlled and well informed domain of medicine. For example, in a recent book, Nita Farahany gives a nice and quite stunning overview of private companies and public institutions who are investing in neuro technologies. Not so much to be used in the medical domain, but rather to be used, for instance, in the military or to be sold on the consumer market. And an example of such a private company is Elon Musk's Neuralink, and their aim with Neuralink is to establish a direct link between our brains, our thoughts, and our digital devices by using this brain chip. And their idea is: currently we are controlling our mobile devices and computers by using our fingers. And in the future, Neuralink aims to let us control our daily devices, like our mobile phones and so on, with our brains, with our thoughts, which are connected to our mobile devices and computers. And their aim is also to bring this technology from the lab into people's homes so everyone can use it every day. And this might sound like science fiction, but they have already been able to let this monkey play this pong game only with his brain, only with his thoughts, which were connected to the computer through the brain chip, which they call the Neuralink. And recently, in June, there was a news item that they got approval from the FDA to start research with these brain chips in humans. I'm not sure whether they actually got approval, but according to their website, people in the U.S. can now register themselves to participate in human trials for this research. And whereas the Neuralink is still really in a research setting and requires a surgical intervention to place the chip into the brain, other companies I already touched upon, like Emotiv, promise to connect your mind to the digital world and turn science fiction into reality by non-invasive forms of brain reading. Here it is, like portable EEG.
Okay, when these technologies further develop and when they will actually be able to get all kinds of information from our brain, from our mental states, that might allow us to draw inferences about mental health and possibly in the future also about our thoughts, desires and emotions, then obviously some questions arise. Then the question arises, for example, what will Neuralink, what will Emotiv do with all the data that they obtain? Who will get access to those data, and to whom will the data about our mental health, about our thoughts, dreams, emotions, to whom will that data be sold? In other words, what will happen to our mental privacy? So that's one specific concern in the legal debate about human rights and neuro technology: how can we protect our right to mental privacy? And apart from developments in consumer technology, there's also a debate on the potential use of neuro-technology in criminal justice. For example, what if invasive and non-invasive types of brain stimulation will be able to reduce sexual drive in sex offenders or will be able to reduce aggressiveness in forensic patients? Should the criminal justice system make use of these kinds of brain interventions in order to reduce recidivism and to promote rehabilitation? And what will be the implications, the legal implications, of using these kinds of brain interventions in criminal justice for the well-established right to bodily integrity, which protects more or less the idea of my body, my choice? And what will be the implications, for example, of using brain interventions to change a person's perception of aggressiveness or to change a person's sexual drive or preferences? What will the implications of those technologies in criminal justice be for the far less developed and less well-known right to mental integrity and the right to freedom of thought, which protect more or less the idea of my mind, my thoughts, my choice?
So what I'm trying to illustrate with these two examples from consumer technology and the potential use of brain interventions in criminal justice is that neuro-technology is slowly exceeding the domain of medicine. It can potentially be used in other domains as well, like in criminal justice, like in the military, et cetera, et cetera. And thereby it is raising new questions about our human rights. These developments have urged lawyers and ethicists, as well as neuroscientists and the media, to publish increasingly about the human rights implications of these kinds of emerging neuro-technology. For example, according to this news item, Facebook is building technology that can read your mind, but the ethical implications are staggering. Likewise, brain reading technology is coming and the law is not ready to protect us. And why is the law not ready to protect us? So the argument goes, that's because when the law was made in this context, when human rights were established in 1950, no one could have envisaged the possibilities technology offers today, let alone tomorrow, to read or manipulate our minds. And in order to make the law future-proof in this regard, different scholars are now arguing to develop new human rights that should offer specific protection to our brains and to our minds against the pervasive effects of neuro-technologies. For example, in their seminal article, Marcello Ienca and Roberto Andorno argue for the recognition of four specific human rights, and these are a right to mental integrity, a right to mental privacy, a right to cognitive liberty and a right to psychological continuity, which is more or less a right to personal identity. And likewise, there is something that's called the Neuro Rights Foundation, which is based in the United States. And they're also actively promoting the recognition, the development, of new human rights for the mind, which they call neuro rights.
And this Neuro Rights Foundation was established and is led by a neuroscientist, Rafael Yuste, and their aim is to engage the United Nations, regional organizations, national governments, et cetera, et cetera, to raise awareness about the human rights and ethical implications of neuro technology. And in that regard, they are actively lobbying to introduce these five neuro rights: a right to mental privacy, a right to personal identity, a right to free will, whatever that may be, a right to fair access to mental augmentation, and a right to protection from bias. And what they are doing at the Neuro Rights Foundation is basing their claims for new human rights on these kinds of non-peer-reviewed reports, where they argue that currently there are gaps in human rights law and that human rights law, human rights treaties, are currently unprepared and unable to protect us against the challenges, against the threats, raised by neuro technology. And they argue, for example: nevertheless, rapid advances in neuro technology are no longer science fiction, they are science, and it is urgent that the UN play a leading role globally to embrace these exciting innovations while protecting human rights and ensuring the ethical development of neuro technology. And although these kinds of reports and the arguments by the Neuro Rights Foundation, and to a lesser extent the arguments by Ienca and Andorno, have been challenged, especially by legal scholars, the idea of recognizing new human rights in relation to the mind is now getting attention from higher institutions such as the Council of Europe and UNESCO. Both are at the moment quite actively working on the question of how neuro technology could affect our understanding of human rights and how human rights should protect the integrity and privacy and liberty of our mental states against neuro technology.
And so does the Human Rights Council of the United Nations, which recently adopted an official resolution on human rights and neuro technology. And in that regard, they emphasize that neuro technology allows the connecting of the human brain directly to digital networks through devices and procedures that may be used, among other things, to access, monitor, and manipulate the neural system of the person. And in that regard, they have requested an advisory committee to prepare a study on the impact, opportunities and challenges of neuro technology with regard to the promotion and protection of all human rights. And this study is currently being prepared; the advisory committee is writing their report. But in an earlier document, they already stated that it is true that specific standards may be needed to ensure protection against interference with and misuse of certain mental aspects by neuro technology, such as cognitive liberty, mental privacy, mental integrity, and psychological continuity. And it may not be a coincidence that those are exactly the four rights that have been argued for by Ienca and Andorno. However, whether we indeed need new human rights to offer adequate protection against emerging neuro technologies is being challenged increasingly in the literature, especially by legal scholars. For example, legal scholar Christoph Bublitz has argued that the proposals for new human rights tend to promote rights inflation and neuro-exceptionalism. And he argues that rather than conjuring up novel human rights, existing human rights should be further developed in view of the changing technologies and in view of our changing conceptions in, for example, philosophy and bioethics. Likewise, human rights lawyer Susie Alegre rejects the idea of recognizing new human rights for the mind.
And instead, she argues for strengthening one of the most fundamental human rights that is guaranteed in almost every human rights treaty around the world, which is the right to freedom of thought. And likewise, I made an argument in this book on coercive brain-reading in criminal justice that the generic right to privacy and the right to freedom of expression are perfectly able to offer adequate protection to the personal interest of mental privacy. So we don't need to conjure up a novel human right to mental privacy next to the existing right to privacy and the right to freedom of expression. And recently, the three of us have argued this again in a recent Nature correspondence, where we again emphasize that we don't really need to come up with new human rights because most of the human rights are already out there. However, we have to specify those human rights in order to actually offer protection against new technology in individual cases. And for that, for example, we may need to change our ideas of civil responsibility or criminal responsibility, for example, for misuse of neuro technology that could read and manipulate people's minds. So in that sense, we argued, we may need new laws, especially new domestic laws, but not so much new human rights. And why don't we need new human rights? Because if we have a look at current human rights, there is already a lot out there that can protect us against emerging neuro technologies. We have a right to privacy, which covers a right to mental privacy. We have a right to freedom of thought, which protects, among other things, against having your thoughts manipulated. We have a right to freedom of opinion and expression, including a right not to express what you think or what you feel. We have a right to bodily, but also a right to mental, integrity. And we have a right to personal identity.
And what is important is that a central principle of human rights law is that we should interpret these kinds of human rights in view of present-day conditions. Human rights are to be considered as a living instrument. And these rights are not static; instead, they are dynamic. For example, as the Council of Europe writes in this document, what gives the Convention its strength and makes it extremely modern is the way the Court interprets it: dynamically, in light of present-day conditions. So by its case law, the Court has extended the rights set out in the Convention, such that its provisions apply today to situations that were totally unforeseeable and unimaginable at the time it was first adopted, like mind reading and mind control, including issues related to new technologies, be that neuro technologies, bioethics, or the environment. And through this approach, the European Court of Human Rights has been able to apply and specify the existing traditional human rights to technologies that are developing. For example, to GPS tracking systems, video surveillance, DNA databanks. And now this approach will equally enable the Court to apply traditional human rights, like the right to privacy, like the right to identity, bodily integrity, to the specific challenges posed by neuro technology, I think. How exactly we should interpret these rights and how exactly we should specify them to the particularities of neuro technology? That's still an open question. And I think that to answer that question, we really need to collaborate with all the disciplines involved and really try to speak and understand each other's language in this regard. So including moral philosophy, neuroscience, psychiatry, and the law, and work together on the idea of how we can interpret and specify human rights in response to the challenges of neuro technology. And that's something that we try to do in this paper. But I will leave it with this. Thank you. Thank you.
Thank you so much for this very interesting talk. Are there already some questions for Sjors? Here in front, yeah. Just one question. Given the extreme sensitivity of, for example, mind reading or the science behind this, well, the exploration of mental states and that sort of thing, could it be imaginable that it were treated not so much as a right but as a prohibition, as for example the prohibition of torture, which is a peremptory norm, and it taking the shape of something like that rather than a right, as we have seen that rights can at times be overlooked in cases of national or global security, for example? Yeah, that's a very important question. It touches upon the nature of different rights. So you refer to the prohibition of torture, which has traditionally been considered an absolute right. So interferences can never be justified, unlike, for example, the right to privacy, with which you can interfere and then justify the interference, for example, by referring to national security. So it depends on how important we think it is to protect our mental states that can be read out, for example, by neuro technology. And if you look at this range of rights: for example, the right to privacy is a qualified right, so you can make interferences with the right that can be justified. But if you look at the right to freedom of thought, that's an absolute right, just like the prohibition of ill treatment. So if we think that it's really important to protect some parts of our minds in an absolute way, then that would be a reason to further develop the right to freedom of thought and specify it towards these kinds of technologies, because then the right to freedom of thought will offer at least some kind of absolute protection against, for example, reading out or manipulating thoughts. Does that answer your question? Thank you so much.
I think we will now have a quick stage change and invite all the speakers back up to the stage so that we can have a public discussion with all of you. So take a couple of minutes. Please remain seated when possible, and maybe think of some nice provocative questions to ask our speakers and to get some debate going. And then I think we'll put some chairs up here. Okay, I think we are almost ready. Everyone found a place. So what I would like to do is start by inviting some of our study associations to open the discussion and ask some questions to our speakers. In the far back, we'll start. So do we have to bring up a microphone? Yeah, you guys can share one. Okay, so Sjors talked about speaking each other's language, but I was wondering if that's even possible, because the different disciplines have such different interests. I think the artificial intelligence sciences want to innovate more and earn more money, whereas psychology and moral philosophy and law are concerned with very different things. So I was wondering if it's even possible to speak each other's language. Yeah, thank you very much. That's a really good question. I think it will indeed be hard, especially in the beginning. But I think it's not impossible, also because in some way we are all connected with each other through this theme. So people are inventing things and are working really hard on that, and then other people are trying to create human rights that should restrict those technologies. And that will affect everyone within the field. And to come up with a solution, I think there is something in it for everyone to try to regulate this in a responsible but also not over-inclusive way. And if you look at the literature, our paper is just one example, but also if you look at what is going on at the United Nations, the Council of Europe, UNESCO: they are quite actively organizing round tables.
And they are inviting people from neuroscience, people from law, people from ethics, and so on. And you see that by doing that, a debate is really beginning to start. And we should explain more what we mean. For example, if ethicists talk about rights, they talk about moral rights; if lawyers talk about rights, we talk about legal rights. Those are two different things. But we should explain it and make it explicit, and I think then it's not impossible to better understand each other. Do any of you have an additional comment on that? Just a kind of extension of that: one thing that makes it slightly difficult is, to come back to incentive structures, the incentive structures for departments and things like this, right? I'm interested in publishing philosophy papers in high-quality philosophy journals. It'd be cool to publish in neuroscience journals, but that's not always necessarily going to help me do the things that I want to do. And that's bad: it disincentivizes the kind of collaboration that I think we're after here. Exactly how to tackle that is way beyond my pay grade, but that's something that I think is influencing and impacting things there. It can be worked around, but it's harder. Yeah, here in the front, there's another question. So, about what we were saying before, I especially liked the example with the bread spread with butter or spread with socks. I would wonder how that would work with the model they built, because I guess what GPT is basically doing is taking a bunch of very noisy data and reducing it to the most probable outcome. So I think that it might not work when we are actually trying to decode our very quirky and weird thoughts.
And this was just my observation to begin with, but if we actually managed, so not yet, but in the near future, to decode our thoughts in a more accurate way, what kind of understanding would we gain from that about our mind? Learning about the what that is happening: does it really convey something about the how as well? So yeah, I would like to ask this. So yeah, thanks. That's a great question. So first of all, the "he spread his warm bread with socks" versus "butter" idea was basically to illustrate how difficult it is to do the direct decoding. And in this case it's also difficult with the GPT approach, because "he spread his warm bread with socks" is likely to have a very low probability. This is the kind of language research that I do, and that's where the example came from, with decoding: just to illustrate how difficult it is. As a matter of fact, just to add a little bit: what I have been doing successfully is encoding. So we show people a bunch of sentences in which we manipulate certain things, like how plausible the sentences are in light of day-to-day experience, and using those ratings that we then obtain, like the plausibility of those sentences, we can actually pretty accurately reconstruct the EEG data. The other way around, it's horrendously bad. Okay. So that's that. Then there are questions of what we can actually learn from it. What the Tang et al. paper also did was subdivide the brain into different regions: they had the classical language network, a frontal network, and an association network, and they tried to look at decoding performance in those different networks. Now, I actually found that the most interesting part of the entire paper, but the results were not that interesting, given that all the brain regions seemed to do quite okay.
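The point about "socks" receiving a far lower probability than "butter" under a language model can be illustrated with a minimal sketch. This is a toy bigram model, not the GPT model discussed in the talk; the corpus, the smoothing scheme, and all numbers are invented for demonstration:

```python
# Toy bigram language model: implausible continuations ("socks" after
# "with") get far less probability mass than plausible ones ("butter").
# Corpus and smoothing constant are fabricated for illustration only.
from collections import Counter

corpus = (
    "he spread his warm bread with butter . "
    "she spread her bread with butter . "
    "he ate warm bread with butter ."
).split()

bigrams = Counter(zip(corpus, corpus[1:]))
unigrams = Counter(corpus)

def sentence_prob(sentence, alpha=0.1):
    """Add-alpha smoothed bigram probability of a token sequence."""
    tokens = sentence.split()
    vocab = len(unigrams) + 1  # +1 slot for unseen words like "socks"
    p = 1.0
    for prev, cur in zip(tokens, tokens[1:]):
        p *= (bigrams[(prev, cur)] + alpha) / (unigrams[prev] + alpha * vocab)
    return p

p_butter = sentence_prob("he spread his warm bread with butter")
p_socks = sentence_prob("he spread his warm bread with socks")
# The unseen bigram ("with", "socks") drags the whole sentence down.
assert p_butter > p_socks
```

A large language model does the same thing at vastly greater scale, which is exactly the difficulty raised in the talk: a decoder biased toward probable text will struggle to recover genuinely quirky, low-probability thoughts.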
I can imagine, however, putting on my neurobiology-of-language hat, that there is an interesting endeavor in looking at different representations. So right now we use word embeddings, but we have more complicated ideas about what the brain is encoding and why, and we could actually contrast those. So does using one type of representation lead to better encoding performance than another? This might actually tell us something about the human brain. One recent example that you might have heard of is the "Another Brick in the Wall" decoding. This was in the news a while back: using intracranial recordings, so recordings directly from the cortex of people who were up for surgery, they managed to reconstruct part of the audio of "Another Brick in the Wall, Part 1". What that gives you is very specific ideas about how music is encoded when we perceive it. And I think there is an interesting endeavor there. And definitely for my field, the neurobiology of language: by fleshing out those different representations, testing those different ideas, and ideally even going towards different anatomical regions and so on, I think we can learn a lot. That's a good question again. So I think, first of all, right now we see that the encoding models are pretty subject-specific. So you should start thinking about trying to generalize across that: use those representations and try your best to predict the activity pattern across different subjects. And how well that is going to work, or how difficult that is, that's an empirical question. Here there's another question in the front. Finally, you're up. Firstly, thank you to all three gentlemen for very exciting and informative presentations. I have two things.
First thing, I think, to pick up where you just left off in terms of representation in these studies: would potential future studies perhaps take into consideration, what can I say, different minds, like perhaps people who have OCD, Tourette syndrome, intrusive thoughts, and things of that nature? And then beyond that, have they also considered potential biases, like we're seeing with artificial intelligence: different cultural nuances, different ways people think, different expressions, and how this could potentially sway things? But I think it ultimately goes back to one of the questions that was on your slide at the end: what is the intention? Why are we doing this? Is there a problem, or what are we trying to fix? Are we trying to fix a healthcare issue, where we're dealing with people with disabilities? Is it a criminal justice issue? Or are we just nosy and want to know what people are thinking? So yeah, I just wanted to pick your brain on those points. So I'm going to give it a first shot and then pass to you. All right. So the thing is, those are important questions. But this works with neurotypical people right now: people that don't have any deficits or disorders that they're aware of. That's what we hope when they screen those people. But indeed, that's only what they're aware of. Even in typical language research, we're still struggling to understand how a neurotypical person, given that we think those are neurotypical populations, understands language. So yeah, those are things to consider. And if you move more fine-grained: my first step, from a scientific perspective, and we just heard that science should be incremental, so one step at a time, would be to try and flesh out how those representations map onto, say, a subset of the normal population. But yeah, going forward.
Those are definitely important questions to address, like those nuances and things like that. Do you want to add anything? Yeah, maybe just an analogy. So something that I work on: I do a bunch of philosophy of fiction. And philosophy of fiction is intended to talk about cinema and theater and literature and everything, understanding the notion of fictionality. And the way they started was: let's talk about just novels. That was a place to start because, oh, we can do this. And then, once we have a grip on this easy case, it turns out they can't even do the easy case. And I think that's kind of the thought here. We start with neurotypical people, hopefully. Well, not hopefully, but we start with neurotypical people because that is going to be the softball case. But even that turns out to be just insanely difficult. So yeah. And then onto your second question, about the why. I think one thing that comes in here is that there are a lot of different motivations for engaging in this sort of interest. One big reason, one selling point you see a lot, is: oh, this is the kind of tool that will help someone with, like, locked-in syndrome. And if that's what you're after, then the kinds of activities and tests and stuff that you're going to do are, yeah, narrow. If you're interested in, as you put it, just being nosy, then go nuts. So I guess I don't think there's a single answer there, which is fine. But I think that one of the questions we should be asking is exactly this: what's the motivation here? Why are we actually doing this? What do we want to get out of it? There's a lot. I saw that one first. Thank you very much. It was really insightful. So I have a question. Is language relevant to decoding thought? Because I guess the training is based mostly on data in English, so this could lead to an impact on accuracy and even bias. So, okay. It's a good question.
So now I'm going to put on a really scientific hat. We're going to have to represent thoughts one way or another, such that we can compare that to what we intended to decode. I guess if you were decoding visual imagery, you could compare that to visual imagery. So in that sense, language is not special. I do think in this particular case it's easier, in the sense that from an engineering perspective we now have these tools like ChatGPT, and ChatGPT has these word embeddings that we can use and basically feed in. So I often feel that with a lot of these things, it's because this is the best thing we can do right now. And definitely you can explore different ways going forward. Does that answer your question? So I guess we do have to take into account the fact that English is just one language and one way to express thoughts, and there are maybe concepts that are expressible in one language and not in another. Definitely. So that's a good question as well. If you then focus on the multilingual thing, on the different languages: then yes, I can imagine that you would have to build different encoder models. Well, you have to build a different encoder model per subject anyway, right now. But you would have to build different encoder models for different languages, and those different languages might have different nuances that you pick up on. So yeah. Yeah. Over here. So I'm going to propose a completely hypothetical scenario. I don't know if it's realistic at all. But there's Minority Report, I'm not sure if you've heard of it. It's basically this universe where they preemptively stop crime by reading people's thoughts and things like that, and just arresting them before they can do anything, which seems like a horrible thing to think about, you know.
So I just wanted to get all of your thoughts on that. Yeah. So that's only a movie, to start with. And indeed, it's quite terrifying at the same time. So I think two things. First, if you look at criminal justice, how it traditionally used to be and how it is now, it's already shifting perspective. It used to be very retributivist: you do something wrong, therefore you deserve punishment, basic desert, and then you can go back into society again. But it's already shifting towards what has been called preventive justice: you did something, or we think you did something, and therefore you are dangerous; there is a risk that you will do something again in the future. And that can already be a legitimate ground to take away some freedoms and put you in something like jail. So from that perspective, in some way we are already doing more or less the thing that you refer to. The only difference is that this kind of preventive justice system is still mainly based on the idea that you have done something. So you committed a crime, and that crime might not be very severe, but we are afraid that in the future you will commit a very severe crime and harm someone, and then we can preventively put you in prison. But it's still about an act. And your concern, if I understand correctly, is: what if you have not acted wrongfully, but only thought wrongfully? Can we punish your thoughts? And that relates to the question earlier on, and to my answer to that question. So currently we have a right to freedom of thought. It's a very firm right; it's guaranteed in almost every human rights treaty. And it prohibits, in absolute terms, three things. The first is discovering people's thoughts without consent. The second is changing, manipulating, people's thoughts without consent. And the third, relevant to your question, is punishing people because of their thoughts.
So I think that currently, although there is some kind of paradigm shift from retribution to preventive justice, one of these rights, the right to freedom of thought, will be able to protect us, at least in the upcoming years, I think. Thank you. Is there a very short question? Yeah, I would like to point out two, I think, sort of unspoken assumptions that come to mind when you think of thoughts. The first is that all thoughts are conscious. And the second is that this kind of mind-reading technology would be used between people. I would like to introduce to the discussion the hypothesis of using this technology to gain deeper access to one's own mechanisms, in a future psychology that is rooted in science, a sort of psychology on steroids, where through an AI model one can change one's own behaviors without giving access to that information to anyone else. Sounds philosophical to me. I mean, I think, importantly, the way I set it up was broadly dealing with conscious thoughts, but that wasn't intended to be exclusive; that was just because it's easier. If you go with the kind of idea you had, I think that worry I had about epistemic authority becomes way worse, because now you're not even going to be sure about what your own thoughts are in any good way. The machine is feeding back on you and telling you: oh, hey, these are your subconscious thoughts. And anyone who's ever had a Freudian analysis can go: okay, interesting, I'm not really sure I buy all of it. So I think that would make that epistemic authority worry even more jacked up. But importantly, the general presupposition that the thoughts were conscious was, I suspect, in every one of the cases just a heuristic to make things easier, and definitely not something you have to buy. I unfortunately have to add one more thing. One sentence. Well, that's not going to be one sentence. Just to get into the unconscious thing.
So it's already been known for quite a long time that if you take two groups of people, say progressive people and conservative people, and you feed them sentences, like in normal language experiments, like "Sally didn't want to be pregnant, so she got an abortion", we can basically tell which group you belong to from your brain recordings alone. So that information is present. And I'm sorry, I have to close the discussion here. But let's indeed not stop the discussion here. Let's continue to come together from these different fields and show that it is indeed possible to have a common language and to talk about these issues from these different perspectives. I think what we mostly learned today is that AI technology has already come very far, but it's not there yet. We don't have to be afraid; we don't have to be scared that AI is suddenly going to start reading our minds. But maybe already at this point we do have to think about a lot of considerations: ethical considerations, and perhaps instantiating not just legal rights but also legal policies for dealing with this kind of technology. And with this diverse, international, interdisciplinary group of people, I think it was very nice to be together. I would like to thank you all for listening and asking questions, and of course our speakers. Thank you.