Welcome everyone. It's really good to see you again here, back on campus. It's finally possible again, so that's really nice. My name is Hannah van den Bosch and I'm the new program maker for Studium Generale. At Studium Generale, we organize all kinds of activities on campus: workshops, lectures, cultural activities, and this lecture today as well, which is part of the All You Need to Know About lecture series. In this series, we delve into a specific scientific topic and talk about it in a concise and manageable way, so that it's applicable, preferably, to many. This lecture will be 45 minutes long, and there will also be 15 minutes of Q&A time. It's also good to know that this lecture is part of the Studium Generale certificate, so it counts towards it; if you want to know more about this certificate, you can look it up on our website. Today's lecture is also in cooperation with the study associations Flow and SAPI.

And now, of course, the topic of today: big data. I think many of you have already heard about big data; you hear a lot about it in the news, of course. But what is it exactly? And, maybe more importantly, how does it influence us and how does it influence our society? I'm really pleased to welcome Jan Sleutels today, because he knows a lot more about big data, and he may also be able to answer questions like these. Jan Sleutels is from Leiden University, where he is a professor of media philosophy, philosophy of mind and metaphysics. In his research he looks into communication technologies, for example big data, but also the internet or newspapers, and how they influence and organize our mental processes. So please give a big applause to Jan Sleutels.

Thank you very much, Hannah. Checking the sound; the amplification is working, yes. It's wonderful to be here at Tilburg University. It must be 20 years since I was last here, so that's a long time ago. And it's been almost two years since I saw a room full of people prepared to listen to me for an hour, because usually it was only the cashier at the local supermarket or tiny student portraits online. I hope the corona measures will be made less strict next week, and so do you, probably. Does it work? Not yet.

This is what we're going to cover, starting with an introduction and a disclaimer. There's no slide for the disclaimer. The disclaimer is this: the overall title of the series is All You Need to Know About, but you're not going to hear all you need to know about big data. I'm sorry about that, because it's simply too much that you would need to know. If you only get to know a tiny bit, that's okay for me. Second disclaimer, more important: I'm a philosopher. Do not expect answers. I'll be giving you questions much more than answers.

So after this warning, let's start. My frame of reference is what I call the Enlightenment Model of Rational Agency: the model that defines, maybe that's too strong a way to put it, our self-image as rational beings. Maybe it's not defining; maybe the Enlightenment philosophers have, let's say, articulated what it is to be a thinking, willing, feeling, knowing, et cetera, human being in the way that's most familiar to us. I'll give you some examples, some details about that. It's an ideal view of what a human being is, or what a human being should be.
It's very close to what in philosophy of mind these days is often called folk psychology. So not scientific psychology, but the way we think about ourselves, the way we see ourselves, the self-image that we have as thinking beings. But this Enlightenment Model differs from folk psychology in at least two respects. First, it's much more normative: it's about what human beings should be like, not what they're actually like, what their psychology is. And secondly, it also includes a set of conditions that need to be in place in society for human beings to be able to function in the most optimal way. So on the one hand, there is what I call a cognitive and moral profile for individuals, the sorts of qualities and competencies that you as a human being need to possess. And on the other hand, in blue, bottom right, there are the conditions that are typically the characteristics of a liberal democracy, the sort of society that we live in here in the Netherlands, for instance. It's fine for you to have competencies for processing information, but they wouldn't be of much use to you if you don't have access to information in society. So these two, the individual conditions and the social conditions, are equally important.

I put Jürgen Habermas in here, not because he invented anything like this theory, but because in the 1960s Habermas gave, well, the most explicit description of what it is for a rational being, a communicatively rational being, to function optimally in the public sphere. Good reading; it still is.

Now, that self-image is actually a complex network of all sorts of qualities that individuals need to possess and that need to be in place in society. You need to possess information skills. You need to be able to provide reasons, to discursively articulate your behavior, to explain to others what you are doing. And you need to be willing to do that: willing to share information, to have a sense of responsibility, those sorts of things. They are actually typically the sort of qualities that you find in job advertisements, the sort of qualities that employers are looking for. I should have included this book as well, brought it with me instead: Nathan King, not a well-known philosopher, but this is a beautiful little book published last year, The Excellent Mind: Intellectual Virtues for Everyday Life. Just from the table of contents, you'll recognize some of these: you need to possess curiosity, that's one chapter; carefulness, autonomy, humility, self-confidence, honesty, perseverance, et cetera. So that's the idea that we have of what we and our fellow members of society should be like.

Notice that this is an ideal. It's not a description of actual psychology. If you want to know more about actual psychology, let's say rational beings functioning in a suboptimal way, there are some literature pointers, bottom right. Gerd Gigerenzer, Rationality for Mortals; the title gives it away. Kahneman, especially Kahneman, Slovic and Tversky. Kahneman you may know from the bestseller Thinking, Fast and Slow. Good reads. But those are about actual psychology; I'll be talking about the normative ideal here.

Now, how does that relate to digital technologies? I'll be focusing on the internet. Actually, the phrase "the internet" is short, at least in this lecture, for all sorts of uses of digital technologies, especially communication technologies. But the internet, let's be honest, is by far the most impactful technology in the digital world that we have today.
Many of you, I'm not sure how many... no, many of you will have heard about the early internet, but that was before you were born, for most of you. Some of you may have experienced the early internet. I have; I'm 60 plus by now. So in the early 1990s, for a fan of computers and the internet like me, that was really something.

In those first two decades of the internet, say until roughly 2015, people like me used the internet just like any other tool. In real life we had all sorts of purposes and aims that we wanted to achieve, and we used tools for achieving them: reading books, using telephones, using calculators, using computers — they were still often called word processors in those days. And the internet was just one more tool. We individual internet users remained fully in charge of deciding which information to access, for which purposes to access it, which services to use, et cetera. And that's totally in line with what I just described as the Enlightenment model: we autonomous internet users had our own purposes, we made our own plans for life, and we used any tool that was available, including the internet.

That's not to say that the early internet did not have its critics. I'll just mention two of them without dwelling on them. One is Harvard law professor Lawrence Lessig, in a book published in 1999, Code and Other Laws of Cyberspace. Lessig pointed out that we may think of ourselves as autonomous rational agents — that's actually the white dot, or the black dot, in the middle that you see on the right-hand side — but all sorts of regulations and strictures are pressing on us, and the dot is actually shrinking, because it's being regulated: by laws, obviously, but also by market conventions, by social norms, and things like that. And in the 1990s, Lessig pointed out, an extra thing was added to that: the architecture, the laws of cyberspace, the architecture of the internet. He was afraid, he predicted, that the architecture of the internet would actually be compromising our autonomy.

Something similar was claimed two years later by the American philosopher Hubert Dreyfus in a beautiful little book, On the Internet; it's about as thick as a pinkie, something like that. Still a good read. Dreyfus argued that life online — telepresence, teleconferencing, also remote education, online education, the sort that we have been having over the past two years — would actually be compromising the authenticity of human activity. Because, hey, it's not real. It's not real agency that you see online. You click a button, for instance, to make a friend with someone on Facebook. Okay, that's a hackneyed example, but you get the idea. Is that really a friend? It's just as easy to de-friend, or to unfriend, by clicking that very same button. That's not real. That's not authentic. There is no real commitment in there.

What has happened since the early internet? Well, big data happened. Actually, it's not just big data; it's a lot of things. Many of you will be familiar with the phrase Moore's law: the idea that every two years the number of logic circuits that fits on a square millimeter will double. It's exponential growth. These are all processors, arranged chronologically on the horizontal axis; vertically — and be careful, that's a logarithmic scale there — the number of instructions that can be processed per second. My first computer, that was a MOS 6502 or 6512, I'm not sure. That one
could do, well, less than 5,000 instructions per second. The early internet, that's roughly here; maybe you recognize the names, the Intel Pentium and the Pentium Pro processors. How good were they? Well, let's say something like 5,000 instructions — sorry, 5 million instructions per second. But look where we are today: 50 billion. That's how gigantic, how fast, how powerful computers have become.

What are all these processors doing? Well, they're processing data, and we've seen an explosion of data as well. The early internet, that's actually at the far left on this graph, and today, 2020, actually 2022, it's again way up; it's logarithmic on the vertical axis there as well. The amount of data being processed is impressive. Awesome. Where do these data come from? Is there anyone among you who doesn't have a smartphone or a laptop or a tablet that you use on a daily basis, maybe even on an hourly basis, every minute, every ten seconds — hopefully not now? And that's only the portables that you can carry with you, like the laptops and the telephones. Do you have loyalty cards for supermarkets and things like that? Do you have chip cards for public transport? All of these create data. And that is really tremendous. And all these data are being processed by these ever more powerful computers. This is what happened last year in an internet minute, and it will be even more awesome when the new graph is put up in December. You all know this sort of graph, I think. It's about the size of the data that are being processed and the power of the computers that are processing them.

This is basically, let's say, a social thing: data machines are being integrated into our lives. It's a processing thing: computers are becoming more powerful. But it's also an AI thing. AI has become much more powerful, and it has changed from what it was back in the 1950s, when artificial intelligence was invented at Dartmouth College, at the famous conference. Those were the days when people tried to make expert systems: you program in your knowledge and you specify the rules that need to be followed to achieve a certain result. Transparent algorithms. That is what AI was then.

Today it's totally different. Today we have machine learning. I was tempted to include a short video on machine learning. Actually, this is a web lecture that I made — I'm a philosopher — together with Maarten Lamers, who is an expert computer scientist. It's a lecture that you may want to watch at home, so feel free to take a picture of the QR code. It's not required, obviously. Summarizing what we are telling there: machine learning is of such a nature that we do not know what the machines learn. Machines processing data find patterns in those data. Basically, what they do is make associations and base predictions on those associations. And that's what happens when you get recommendations on bol.com or amazon.com for what to read or what to buy, or when all of a sudden you find in your email, let's say, an advertisement for a new pair of sneakers or something like that. Did you ever wonder why that is? Well, it is because a group of people happened to be looking at sneakers. You weren't. But they were also looking at the books that you were interested in. And the prediction is that, well, you can put the two together. That's what the data are doing.
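[As a quick sanity check on the processor figures a moment ago — the numbers are the ones quoted in the talk, rounded; the arithmetic is an editorial addition:]

```python
import math

# Moore's law sanity check: one doubling roughly every two years.
# Figures quoted in the talk: ~5 million instructions/second in the
# Pentium era (mid-1990s) versus ~50 billion today.
early_ips = 5e6
today_ips = 50e9

doublings = math.log2(today_ips / early_ips)  # 10,000x growth is ~13.3 doublings
years = 2 * doublings                         # ~27 years at one doubling per 2 years
print(f"{doublings:.1f} doublings, about {years:.0f} years")
```

Which lands, reassuringly, right about where we are: the mid-1990s plus roughly 27 years is the early 2020s.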
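[To make the sneakers story concrete, here is a minimal sketch of the kind of item-to-item association such a recommender could compute. The names and data are invented, and real services use far more elaborate machine-learned models; the principle — co-occurrence breeding prediction — is the point:]

```python
from collections import defaultdict

# Invented browsing histories: who looked at what.
histories = {
    "ann":  ["novel", "sneakers"],
    "ben":  ["novel", "sneakers"],
    "carl": ["novel"],            # Carl never looked at sneakers...
}

# Count how often two items are browsed by the same person.
co_counts = defaultdict(int)
for items in histories.values():
    for a in items:
        for b in items:
            if a != b:
                co_counts[(a, b)] += 1

def recommend(user):
    """Recommend the unseen item most strongly associated with what the user saw."""
    seen = set(histories[user])
    scores = defaultdict(int)
    for (a, b), n in co_counts.items():
        if a in seen and b not in seen:
            scores[b] += n
    return max(scores, key=scores.get) if scores else None

print(recommend("carl"))  # -> 'sneakers'
```

Carl never looked at sneakers, but because the people who shared his taste in novels did, the sneakers follow him into his inbox.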
We human beings are not aware of, do not know, have no insight into what the machines are doing. They're black boxes. I'll get back to that in a moment. And I have a strong feeling that this development may actually be compromising the Enlightenment ideal that I started out with. In the early internet, the internet was just a tool for me, an autonomous, Enlightenment-model sort of human being, to use to achieve my own purposes. That's what used to be the case. But the situation today looks like that whole picture turned topsy-turvy, upside down.

Internet services are no longer functioning as mere tools for people. They're actually also making decisions for us: what to eat, what to watch, what to buy, things like that. And we trust them; quite often we do not even care about that. That's scary. And what's even scarier, that's the second point, bottom right: we feed the algorithms with the big data that we create as we engage with all sorts of digital machines, and we do that on a daily basis. These data are being processed by internet services and fed back to us. Quite often, that is to serve us better; that's the official idea. You get recommendations for what to read, for instance. Why is that? Because, well, we think you might be interested in that, and otherwise you might miss out on something. Okay, that's serving me better. But there are also other purposes involved. Not my purposes, or those of any of the other internet users, no. They could be the purposes of commercial parties. They could also, as in some well-known scandals — think of the Cambridge Analytica scandal — be partisan or political purposes. It could be all sorts of purposes, without us being aware of it. That means that we internet users are in part no longer using tools; if you think about it in a very straightforward way, we are now the tools for others to use. Immanuel Kant, the famous Enlightenment philosopher, would be turning in his grave: using people! People are supposed to be ends in themselves, not means to an end. But I'm not going to pursue that particular route here.

I want to focus very briefly on two examples, two aspects of this Enlightenment model. I chose knowledge and agency, two key concepts that I think are in danger. Well, danger is not the right word, because that would suggest that the development is actually bad, and that's not the way that I feel about this. But what seems to be happening is that these concepts of knowledge and agency, in their original meaning, as we understand them in terms of the Enlightenment model, are no longer functional. They're becoming dysfunctional. And at some point we arrive at a crossroads, and we need to decide what to do. Do we want to change society, the use of machine learning and big data, to comply with the old-fashioned, traditional model again? Or do we need to revise our view of what a human being is, of what the concept of knowledge entails, what the concept of agency entails, to comply with reality? That's the sort of crossroads that I will be arriving at when discussing, very briefly, these two concepts.

Let's start with knowledge. Back to one Enlightenment philosopher, David Hume, still a brilliant read: An Enquiry Concerning Human Understanding, and actually the enquiry concerning morals as well. Beautiful work. The way that Hume explained what a human being is, or should be, when it comes to knowledge, is that we are rational belief managers.
So we have all sorts of beliefs and desires, and we need to juggle them, to balance them in some way. We need to find out what to believe and what not to believe. Hume tried to find the laws of human understanding that govern how that works. Hume wants us to produce reliable beliefs on the basis of the available evidence. That's still very much how we think of our belief production these days. This is just a silly example, obviously: should I quit my job? I don't want you to consider that, but I do want you to follow this flow chart, just to get the idea. You know the sorts of considerations, the trains of thought, that would cross your mind when you're considering this question. What to decide? There are all sorts of aspects and considerations that need to be taken into account, et cetera. It is as if your mind — I'll get back to that in a moment, by the way — works like a computer, following a sort of algorithm, a stepwise plan, a flow chart of how to manage these beliefs. Recognize this? I hope you do. I bet you do.

How does knowledge fit in here? Well, the tricky thing is, as a philosopher I'm well aware of the fact that knowledge is very hard to define. It has been defined, or people have tried to define it, in many different ways. I'll just focus on one particular, very simple and intuitively very attractive definition of what knowledge is. It happens to be the most well-known definition available in so-called analytic philosophy: knowledge is justified true belief. That means that in this definition there are three conditions that need to be met for something to qualify as knowledge. You need to have a belief — as opposed to, let's say, a desire, or something that is not even a possible mental content at all, something that's not graspable; it needs to be a belief. It needs to be justified; that means that you need to be able to back it up with reasons, and there needs to be this justification relation between the reasons on the one hand and the belief that you have on the other. And to top that off, the belief needs to be true as well.

Now, truth is even trickier than knowledge. Is it correspondence to the facts? Or is it something like coherence — how the belief that you are considering to accept as true relates to other things that you hold to be true? That's the so-called coherence theory. This definition of knowledge is neutral with regard to what truth actually is. We may have occasion to get back to truth in a moment, by the way.

OK. Knowledge is justified true belief. Why am I giving you this definition? Because now we can proceed to see how using digital technologies — early internet, newer internet, machine learning, and things like that — relates to this concept of knowledge. Well, it used to be the case, as I explained a couple of moments ago, that the early internet was just a tool. Like... does anyone still know how to do long division, by the way? The procedure you need to divide large numbers? Did you ever wonder why it works? I wondered. Couldn't figure it out, by the way. Or the ABC formula, the formula you know from secondary school for solving quadratics. Does anyone know why it works? In that case, I happen to know why it works, but it's a long time ago, I must say. It's just a tool that we use to reliably produce knowledge, in this case mathematical knowledge, just like using a calculator or any other sort of device.
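[The ABC formula is a nice miniature of a transparent algorithm: every step can be backed up with reasons. A minimal sketch, an editorial illustration rather than anything from the slides:]

```python
import math

def solve_quadratic(a, b, c):
    """Solve ax^2 + bx + c = 0 using the ABC formula.

    Why it works can be spelled out step by step: completing the square
    rewrites the equation as (x + b/(2a))^2 = (b^2 - 4ac) / (4a^2),
    and taking square roots on both sides yields the formula below.
    """
    d = b * b - 4 * a * c       # the discriminant tells us how many real roots exist
    if d < 0:
        return []               # no real solutions
    root = math.sqrt(d)
    return [(-b + root) / (2 * a), (-b - root) / (2 * a)]

print(solve_quadratic(1, -3, 2))  # x^2 - 3x + 2 = 0  ->  [2.0, 1.0]
```

The point is not the formula itself but its epistemic character: the result is guaranteed to be true, and the derivation provides the justification; all that remains is for a human to believe it.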
And the early internet was just one more, let's say, neutral tool for us to use for our own purposes. Even if we didn't understand, or don't understand, exactly how it works, as in the case of long division, we are pretty sure that there is someone who can explain it to us, right? OK, sure.

Actually, computers are a brilliant example in this connection. I'm not sure whether you are aware of the history of computers. Originally — I'm talking 1930s, 1940s now — computers were human beings. It was a job title, a job description: a person who performed a specific symbol manipulation, a computation, if you like. Alan Turing famously used human computers, people having the job description of a computer, when breaking the U-boat Enigma code of the German navy. So basically, a classroom of tables; at each table there was a computer, a person, and people would pass slips of paper from one table to the next, and each would perform a specific manipulation on the symbols that were on the paper. And eventually, by following a set of instructions given by, let's call it the lecturer, the Alan Turing in the room, the result came out. That was the original idea of what an algorithm is: a set of instructions for stepwise processing of information.

Now, algorithms, computer algorithms, are tailor-made for producing knowledge in the sense of justified true belief. Algorithms are designed to guarantee that the solution will be found, so truth will be there in the product. And because an algorithm is a stepwise procedure — think of rules that you program into an old-fashioned computer, old-fashioned AI — the algorithm provides what I would call the discursive articulation of the reasons behind getting this particular result. So justification is guaranteed. Truth is guaranteed. The only thing that's needed now is for someone to believe it. That's the human being, the autonomous human being of the early internet, of early computer use, making use of that computer's results. Beautiful. It follows the Enlightenment model.

But today we have what is technically called epistemic opacity. Epistemic refers to knowledge, and it has to do with algorithms crunching big data using machine learning. Big-data-based machine learning is essentially the black box that I mentioned before. And that means that the beliefs acquired by the machine — "beliefs" in scare quotes there, because they're machines, and I'm not sure whether they have beliefs — are opaque; they lack transparency. You cannot look into them, in at least two senses. First, it is impossible for us to know what exactly the machine has learned. What exactly is this giant web of associations that the machine has picked up from this giant set of big data? It's in a way too big for us to know. Our brains cannot process it. So it's not a content of possible belief, and that means that the concept of belief is actually compromised here. And secondly, it is impossible for us — even for the programmer, because that's the whole point, even for the experts — to trace the reasoning behind the machine-produced scare-quote beliefs. And that means that justification is impossible. It is as if you are asked to believe something: you're offered a bit of information, and told that you should believe it.
And the only reason why you should believe it is that it has been produced by this oracle, something like that. And that's where it ends. You cannot look into the oracle, as it were. That's the idea of the black box.

Now, I skipped the concept of truth here, because I'm not sure how big data and machine learning affect truth, but I'm not optimistic about it. Asking about the truth of a specific claim that I cannot understand, that I cannot even grasp in the sense that it's too big for me to know — can that be true? Well, maybe in a very pragmatic sense, I admit that: when it works, then it's bound to be true. OK, but who decides whether or not it works? Just to mention one example here, think of the childcare benefits scandal of the past couple of years; it's actually still around. It was algorithms that decided which people would be fined and which people would not. And we only found out that it didn't work when people started to complain, and there turned out to be a bias in the algorithm. Get the idea? It's only after the fact, and then only when things go wrong, blatantly wrong, when blatant injustice is being committed, that we sit up and say: whoa, whoa, whoa, we need to look into that. But what about the other cases, the ones we do not care about? Do they work? And if so, by whose standards? By our standards, human beings'? Or by the standards of the machines themselves? Or, let's say, of the service providers who operate these machines? I'm not sure about truth here.

So we arrive at the crossroads. There are two options; I'll only mention two examples of people defending these options. The American publicist David Weinberger, trained as a philosopher but especially known as a prolific writer about, well, whatever, coined the phrase "too big to know" about machine learning, ten years ago now. And he argued that we should actually rethink the concept of knowledge. How important is that, you might wonder? Well, think of how you are doing a university program. In the program objectives, actually in the objectives of each of the courses that you take, the word knowledge — "knowledge and understanding of", you get the idea — is mentioned. What does that entail, then? Weinberger thinks we should change that, reconsider that.

You might also want to argue — that's the second bullet here — that what we need to do is, as it were, crack open the black box: we need to improve machine learning so that we actually have access to the reasoning behind it. I mention an example here, Heinrichs and Eickhoff; that's from the world of medicine. The use of machine learning in medicine is pretty strong. And you can imagine that when you use machine learning, when the machine advises you — let's say it predicts the efficacy of a certain therapy that you recommend, or you use machine learning to actually diagnose a person with a specific disease — you, as a medical doctor, want to know why, on the basis of what, that diagnosis has been established. And if it's just a black box that you cannot crack open, then that's not sufficient; at least medically speaking, ethically speaking, that does not seem to be enough. So they propose that we should crack open the black box. I think that's totally naive, by the way. I do not think that's possible. But that's my personal opinion, and unfortunately I don't have time to explain it.

Let's move on, very briefly — looking for Hannah to see... how am I doing time-wise? I can actually see the clock now. Yeah. Let's finally move on to agency.
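[Before the talk turns to agency, a deliberately tiny editorial illustration of the epistemic opacity just described. A toy classifier learns a rule from invented data; the interesting part is what we get back at the end:]

```python
import random

random.seed(0)

# Invented toy data: two features, label is 1 exactly when x1 + x2 > 1.
def make_point():
    x1, x2 = random.random(), random.random()
    return (x1, x2), 1 if x1 + x2 > 1.0 else 0

data = [make_point() for _ in range(200)]

# Train a bare-bones perceptron: nudge the weights after every mistake.
w1, w2, b, lr = 0.0, 0.0, 0.0, 0.1
for _ in range(100):
    for (x1, x2), y in data:
        pred = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        err = y - pred
        w1 += lr * err * x1
        w2 += lr * err * x2
        b += lr * err

# Everything the machine has 'learned' is right here, fully inspectable:
print(w1, w2, b)   # three bare numbers -- parameters, not reasons
```

With three parameters one can still reverse-engineer the rule by hand; with a modern network's millions of parameters, that last step is exactly what is no longer possible, and with it goes the discursive articulation of reasons that the Enlightenment picture demands.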
Agency. The way I'm going to talk about agency... So this is the other key concept that's very central in the Enlightenment model of human agency: the model on which all our legal systems are based, the model on which all our education systems are based, the model on which very much of our interpersonal communication and social intercourse is based. What to expect from someone else? What does that person know? What can that person tell me? What can we share with each other? So the second example that I'm going to focus on is agency. And the model that I'll be using to discuss it, you'll recognize from the model that I used for talking about knowledge, but I'll add certain small points.

So again, the relation between technology on the one hand, including computer technologies and the internet, and human agency on the other. People have been writing about this for a hundred years. Very famously, in Germany, Helmuth Plessner and Max Scheler started a tradition of what the Germans called philosophische Anthropologie. Is that still something that's taught here under that name, philosophische Anthropologie? Wijsgerige antropologie? OK, Leiden dropped that years ago. So now we have, just like all the American and UK universities, philosophy of mind, things like that. I'm not against that. But Plessner especially started this tradition of philosophical reflection on how humans relate to technology. It was continued by many philosophers, including Heidegger — you see him top right — but also in the Anglo-Saxon tradition: a very nice book by Andy Clark, philosopher and cognitive scientist from Edinburgh University, Natural-Born Cyborgs. We are that sort of being; it has to do with the way our neocortex functions, with neuroplasticity. We are beings that can naturally enter into a symbiosis with all sorts of tools and techniques and technologies that are part of the culture. And what does that do to us? It transforms us, even literally, by changing our brains, Clark argues. But I'm not going to press that here.

Technology transforms human beings, typically by, let's say, boosting our cognitive powers. We've already seen that when talking about knowledge: you can do much better divisions using the long-division procedure, or using a calculator, than by just looking at your fingers and starting to count; that just doesn't work. So it transforms us and typically enhances us. But it also transforms, and typically expands, the possibilities for action. New technologies mean that you can do new things. You very often lose some as well, so that it's no longer possible, or no longer desirable, to do certain things; but basically, it's expansion.

Just to give you an idea of what I'm thinking of, take the field of political agency, for instance. What has the internet made possible in the field of political agency? Well, we've got new political causes to engage with — that's about the content of the actions — like digital copyright and privacy and things like that. But also new ways for politicians to reach out to the public, as in political campaigning. New ways for traditional bodies and organizations, including NGOs, to engage with citizens, for purposes of taxation, for instance; it's not a nice example, obviously. But also communication, disseminating information; it's not always very fortunate how that happens.
I'm personally very disappointed in the way information about COVID-19 is handled on the internet, on the official websites of the government; I find it very difficult to find good answers. But that's a complaint that's neither here nor there at the moment. The internet also made it possible to organize or reorganize all sorts of traditional democratic institutions: it can be used for voting, not necessarily at the national level — it can be much lower as well — or for having referenda. And there are also new political activities that simply were not there before in that form, but that we now have. These are not always nice examples, like trolling and hacking, but they are new sorts and new forms of agency.

I think the early internet left all that intact, for the same reasons that I discussed when talking about knowledge. So my prediction — actually, this is just a bit of armchair social science; I should do the research here — is this. If you reached adulthood by 2005, 2010 — how many of you does that apply to? Some, but not many — then you are likely to have developed a more or less traditional cognitive and moral profile, based on pre-internet times, life offline, as you might call it. You're also likely to trust the internet as useful for a limited range of personal aims and purposes; not across the board, but for a very specific set of purposes. So you're more likely than younger generations to act responsibly online, to be relatively immune to what is often called the disinhibition effect: when people go online, they are disinhibited, and they start doing all sorts of things, sending dick pics and things like that. No doubt there are differences across generations.

My original idea was to move this slide up, but I decided against that. Just to see what we have here: are you familiar with the phrases Gen X, Gen Y, Gen Z, the digital natives, and things like that? Gen X developed their cognitive and moral competencies before the internet matured. I'm of that generation; actually, by these standards I'm slightly off-chart, because I was born in 1960, but I still identify with Gen X. Gen Y reached adulthood somewhere between 2005 and 2010, not before. How many of you are members of Gen Y? Not many. And the rest of you are digital natives, right? Gen Z — probably not Gen Alpha, because that's the latest generation. You were born into a world of sophisticated online services, so online and offline are totally integrated for you. I see that in my children, who belong to your generation. I'm not saying that they're doing weird things; that's not the point, and I'm not saying that about you either. But life online and life offline have become much more integrated for them than is the case for my generation, for instance.

Some recent developments — I'll skip these, because they're too familiar to you: the digital natives grew up in a totally integrated environment, the steep rise of social media, and last but not least, in yellow, the deployment of big data, sometimes called surveillance capitalism. I'll use that phrase again. How to analyze the effects of all these developments on human agency? There are many ways to do that. I mention here some key elements of rational agency. This is just a model, but I think you'll recognize it as something that is important for being an agent, for being the right sort of agent in a normative sense. To take only one example: you need to be consistent in your choices.
You need to possess a solid identity. It shouldn't be the case that, let's say, on Monday you do this sort of thing for that sort of purpose, and then on Tuesday you radically change the course of your life and go after totally different things. No, that wouldn't make sense. That's no way to lead your life, to plan and organize it. The fragmentation — that's the third bullet that you see on the right — that is going on when using online services, the different platforms on which you do different sorts of things for different sorts of people, often even with a different account name and a different password and things like that: that's fragmenting you. I'm not saying that it blocks the possibility of consistency; it doesn't do that. But it doesn't encourage it either. That's the point.

When you do things online — the topmost bullet — you definitely have a diminished awareness of the possible consequences of the things that you do. If you click a button, what's going to happen then? Often you don't even care or bother what happens then: a diminished sense of consequences. Filter bubbles compromise your access to information, and that means that the reasons you base on this information, the reasons for making certain choices and decisions, will be colored in a certain way. They could have been better; they will be suboptimal. Nudging by third parties compromises autonomy. But I'll not be talking about all of this. Instead, I want to single out one aspect.

I talked about epistemic opacity when discussing knowledge; you remember what that was, the black box phenomenon, you cannot look into it. That's a well-known phrase in the philosophy of digital technology these days, and in the philosophy of machine learning. But I want to introduce a phrase, moral opacity, or maybe even intentional opacity, which is, as it were, the agency counterpart of the epistemic opacity that we see in knowledge. There seems to be an emerging mismatch between, on the one hand, the sort of profile that individual users have, and on the other hand, the conditions in society that should be in place for this profile to function optimally. The world in which we live is changing because of machine learning, and as a result, I'd say, life is becoming morally opaque, especially life online. But the two, especially for younger generations, are so entangled that I think it will affect both. Morally opaque in the sense that you as a user, when doing things online, actually no longer know exactly what you are doing, or for which reasons you are doing it, or what the consequences might be. That's no longer transparent.

And that shows up in things that are very simple, often quite harmless in a certain sense. Does anyone use Google Maps for going from A to B? Probably you do. I used Google Maps coming from Leiden here to Tilburg and finding the university. OK, Google Maps decides which route I should take, and I follow that blindly. But I have no access to the reasons. Maybe I could get access, by investigating all the alternative routes that I could have taken — maybe they were faster, maybe not, maybe there were roadblocks and things like that — but I don't take that trouble. That's too much trouble for me. I just trust the machine. And that means that I no longer actually know exactly why I'm doing this, what the exact reasons were. (A small sketch of what such a route planner actually computes follows a little further on.)
Nor can I reasonably be expected to know, because all this data crunching that's going on behind it is too big for me to know. That was the epistemic opacity. The internet's architecture, now that it includes machine learning and big data, is simply not fit to support rational agency in the traditional form.

Two very brief examples, and then I'll give the floor to you for questions, and maybe also for answers; do not expect me to give any answers, I mentioned that. The way that all of us use the internet is typically that of sitting on a couch. Not only sitting on couches, but think of that: being at home, or being with friends, or whatever, and then doing things on your smartphone or on your tablet or laptop. You're doing things on the internet, and that feels — think of this couch image, think of yourself sitting on a couch at home — that feels like you find yourself in a private space. But what you're doing on the internet is not private. It may feel like that because of this couchy effect, but actually it has consequences way beyond your couch; consequences that you are very often not aware of, and that maybe you should have thought of.

I mention two examples from the social psychology literature. Huber and Stewart: "I did it for the lulz." You know that expression? Probably you do. I had to look it up, of course. And O'Sullivan, actually about 10 years ago, on how life online blurs the boundaries between the private and public spheres. Quite often you do things on the internet thinking, feeling, that you are in private, and you behave as if you were in private, among friends or family. And you know that in private there are different standards and rules that you comply with than in public. In public you're much more careful. In private, when you think no one is listening, you may blurt out: oh, he's a fool; oh, she's a fool, something like that. You get the idea. You wouldn't dare do that to a person's face, or in public. I did it for the lulz; I did it because I felt I was in private, et cetera. This blurring of spheres is one of the effects that I mention as an example: quite often you do not even know exactly what you are doing and what the consequences are. That is only one example, blurring spheres.

Second example. It's a big word, coined by Shoshana Zuboff in a bestseller — not a very good book, but still, definitely some nice thoughts in there: surveillance capitalism. Internet services, I already mentioned it, now make decisions for us. They're not just tools; they make decisions for us. That means, in yellow, that we are no longer personally and fully in charge of our own actions. We thought we were, because we think of ourselves as autonomous persons who chose to do this or that, but that's simply no longer true. And because we are feeding the algorithms with our digital data trail, which is being used by other parties, everything we do leaves behind a trail — even checking in on public transport, or browsing for this pair of sneakers again, that example comes to mind. Whatever you do, there's a component in that behavior that you're totally not aware of, but that has consequences in the public sphere, because you checking in, or you browsing for a pair of sneakers, will influence what other people will see on their computers, their smartphones, et cetera. It will change the way in which other people are treated.
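[To make the Google Maps point from a moment ago concrete: at bottom, a route planner runs a shortest-path computation over edge weights — travel times — that the user never inspects. A minimal editorial sketch with Dijkstra-style search over an invented toy road network; the numbers are made up, and a real service folds live traffic, road closures, and much else into those weights:]

```python
import heapq

# Toy road network: city -> [(neighbour, minutes)], all numbers invented.
graph = {
    "Leiden":    [("Den Haag", 20), ("Schiphol", 25)],
    "Den Haag":  [("Rotterdam", 25)],
    "Schiphol":  [("Utrecht", 30)],
    "Rotterdam": [("Tilburg", 60)],
    "Utrecht":   [("Tilburg", 55)],
    "Tilburg":   [],
}

def shortest_path(start, goal):
    """Return the cheapest route and its cost, Dijkstra-style."""
    queue = [(0, start, [start])]            # (minutes so far, city, route)
    best = {}
    while queue:
        cost, node, route = heapq.heappop(queue)
        if node == goal:
            return cost, route
        if cost >= best.get(node, float("inf")):
            continue                         # already reached this city faster
        best[node] = cost
        for nxt, minutes in graph[node]:
            heapq.heappush(queue, (cost + minutes, nxt, route + [nxt]))
    return None

print(shortest_path("Leiden", "Tilburg"))
# -> (105, ['Leiden', 'Den Haag', 'Rotterdam', 'Tilburg'])
```

Change one weight — say, roadworks near Rotterdam — and the recommended route silently flips to the Utrecht branch. The user sees only the outcome, never the weights, which is exactly the loss of access to reasons described above.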
That's not something that is part of your intention, as it were. That's what I mean by the phrase intentional opacity.

Now, everyone feels that something should be done about this, and there are two options that are most widely explored. The first is: raise awareness. Tell the public about the dangers they are facing. Teach them digital literacy. Teach them — it's very old-fashioned, work from the 1990s — netiquette, how to behave on the internet. OK. I'm all in favor of that, by the way, netiquette and things like that, but I don't think it will be very effective against the developments that I sketched just now.

The second option being explored is much more ambitious: put users in charge again by restoring the old-fashioned Enlightenment model, by changing the architecture of the internet, or at least by forcing internet service providers — digital service providers, I should say — to comply with new legislation. Obviously, I'm thinking of the example of the GDPR here. It is the most ambitious way in which the European Union has tried to, let's say, restore the traditional model of agency: give agents access to all the relevant information on which to base a carefully made decision whether or not to use a service, and then have them make that informed decision. Totally well intended. And it's a first step, and blah, blah, blah. It cost a lot of money, by the way. But do you like it? Let's be honest: who of you checks before clicking the consent button? Who actually checks all that? Who doesn't? Okay, the don'ts are still a majority. Maybe it's changing, or maybe it's my age, I'm not sure. And if you do check, if you look behind the "I accept" button, you quite often find something like 500 pages of stuff. Who actually reads those 500 pages? Probably no one does. And even if you read them, do you understand them? There are all sorts of obstacles there. And the thing is, most people simply don't care. Well, in one sense we are a bit alarmed and we say: oh, something should be done about that. But on the other hand, when making daily use of the internet, it's way too convenient. Why bother? I just accept everything, as long as you give me access to your website, et cetera. That's it.

Thank you for your attention.