Good evening everybody. I'd like to welcome you on behalf of the Federal Agency for Civic Education, the Bundeszentrale für politische Bildung, and the Alexander von Humboldt Institute for Internet and Society. I'd like to welcome you to the third lecture of our lecture series called Making Sense of the Digital Society. The general goal of the lecture series is to help broaden our understanding of the fundamental transformations that Western societies are currently undergoing. What is needed at the moment, in our view, is an educated and critical reflection on the accelerating structural changes we experience, but also on the public discourse about these changes, and not least on the individual and political responses which these public perceptions seem to suggest. If we bear in mind that there are different ways of making sense of the process of digitalization, and that many, if not most, of the ways that look convincing today will seem short-sighted and parochial in retrospect; and if we bear in mind how fast the issues, narratives and terminologies change, it becomes clear that in order to expand our understanding of the digital transformation and its dynamics, we ought to take a step back. As you are perhaps aware, the social sciences have developed various tools that are meant to enable such a reflexive step back. This is the reason why our lecture series focuses on social scientists who bring together their individual theoretical approaches and their empirical expertise on various aspects of digital society. Elena Esposito, our guest tonight, concerns herself, among other things, with the issue of time: time, including society's various strategies to anticipate and control the future. Given the predictive power that we, critics as well as enthusiasts, are at present ascribing to the new generation of algorithms, Elena's research area and approach are both highly topical. This is one of the reasons why we are very glad that Elena accepted our invitation. The other reason is that her work is not only very original and insightful with regard to the overarching question that we are dealing with here; her findings are also pretty entertaining and at times even amusing, as you will see. With these remarks, I hand over to our moderator, Tobi Müller, who will properly introduce Elena.

Thank you, Jeanette Hofmann, one of the four research directors of the Alexander von Humboldt Institute for Internet and Society, for your welcoming words and for your introduction to the nature and aim of this lecture series, Making Sense of the Digital Society, a joint venture with the Bundeszentrale, as you've already heard. Briefly, on the structure of the evening, which is basically threefold; we're still kind of in the midst of the preliminaries. There's going to be, of course, the talk of our guest tonight, which will be followed by an about 30-minute conversation I'll be having with Elena. Then it's going to be your turn. There are two ways you can participate in this evening: there are microphones in the audience here, I think two of them that will be passed around, for your live questions, so to speak; and there's also a Twitter wall, which is not going to be shown on the stage, that would be a little too distracting, we think, but there are two people who will take care of it and read your questions or comments from that Twitter wall. So we'll go back and forth between Twitter and the live comments I'm sure you'll be wanting to make by that time of the evening.
After that, there will be drinks and a little something to eat; be quick, it goes pretty fast, as the last two events have shown. We're also going to be streamed at various places on the internet: on Alex TV, which is a local TV station here in Berlin, and on the respective websites of the Bundeszentrale and the Humboldt Institute for Internet and Society. So there's a lot of surveillance here tonight, so be good.

This is going to be an evening of exciting opposites, I think, of highly charged concepts that may even seem contradictory, at first glance only: systems theory and transcendence, sociology and economics versus lofty divination. To break it down to a core: science versus religion. Our speaker tonight is in the top rank of current systems theory. And if you're a bit familiar with its concepts, which is most likely if you happened to study anything near sociology, history or literature in the 90s or later, then you know about the excitement I was just referring to. If you read Niklas Luhmann or anybody of his school, you had to cope with the slight, but for many blissful, humiliation that systems theory was not too much interested in the psychology of the subject, not even in its deconstruction. It was not about psychic systems, as you know, but social systems. It was analyzing systems, not interpreting them. It was very far from reading the Bible. It was, let me switch to the present tense again: systems theory is the opposite of everything that is based on faith or even creed. You may ask, is that not the case with any modern-day science? Well, in the humanities that would be at least debatable. So this is partly what our guest is going to do: apply systems theory to prediction, even to the art of divination, Weissagung, Hellsehen in German. Are not algorithms a substitute for God, for knowing things that humans cannot know? But is knowing the right word for that? Do algorithms actually know things? Can they learn how to learn? I'll tell you that much: our speaker is going to deny most of that. I think, however, she will shed some light on the role of artificial intelligence, AI, or KI in German, Künstliche Intelligenz; on the role AI is playing in not just predicting the future, but producing it. With algorithms being the very efficient priests of AI, so to speak. She flew in from the beautiful city of Bologna today. Twenty minutes from Bologna lies another beautiful city in the north of Italy, which is Modena. At the University of Modena and Reggio Emilia she is professor at the Faculty of Scienze della Comunicazione e dell'Economia; I'm sure you know that much Italian. But she's also professor of sociology at the University of Bielefeld, at the heart of systems theory, so to speak, even though heart might be too anthropomorphic a metaphor for the concepts tonight. Her PhD she wrote under the guidance of Luhmann himself; after his untimely death, she also completed her habilitation at Bielefeld. She has talked at universities in New York and Japan. Her range of main topics of research is on the one hand very broad and on the other extremely fitting for our series here. She wrote about fashion; that's not the excitement I mean tonight, but she did write about fashion very formally. So, as we discussed beforehand, her book on the paradoxes of fashion appeared in 2004; that was a long time ago, in Germany.
An even earlier book deals with memory and what nowadays might be called the right to be forgotten, after the European Court of Justice ruled against Google in 2014. This book was also translated into Japanese: Soziales Vergessen in German, Formen und Medien des Gedächtnisses der Gesellschaft. We are getting a little closer to tonight's topic with a book she published in 2007, Die Fiktion der wahrscheinlichen Realität, again at Suhrkamp. I'm quite sure we will hear something tonight about the difference between the probabilistic tradition and the present of algorithmic prediction. So, our distinguished guest has written about memory, fashion, prediction, divination. There is a thread in there that Jeanette Hofmann already mentioned, and of course it is time. That's the thread, maybe, that is weaving through all those books I was just referring to, including Die Zeit des Geldes in Finanzwelt und Gesellschaft, 2010. Shortly after the financial crisis of late 2008, she also forayed into a field where the gods are, contrary to popular belief, not the brokers, but probably the algorithms. Her paper tonight is titled, you can see it, Future and Uncertainty in the Digital Society. I am very pleased to welcome now, from Modena and Bielefeld, Elena Esposito. The stage is yours.

Thank you so much for the kind invitation and for the wonderful presentation. I'm really, really happy and honored to be here, to be part of this great series of talks with such a really fascinating topic. So, thank you very much again. What are we talking about tonight? Well, much has been said or anticipated by the presenters. We will talk about the digital society, about the temporal aspect, the time aspect, of the digital society, and about the future. And if we talk about digitality today, the agents we are referring to are, as already anticipated, not human beings but algorithms. So the starting claim, the thesis I would like to discuss with you, is the idea that the purpose of algorithms, what we are talking about, is to predict the future. This seems to be a recent development in the program of algorithms: Google and the other search engines, the most widespread algorithms, seem for some years now to be devoted more to predicting the future than to dealing with information. It's a focus that changed subtly, but quite clearly, in recent years, and especially since the recent revival of artificial intelligence combining big data and machine learning, or especially deep learning, the more developed and mysterious version of machine learning. I say a revival because artificial intelligence has existed, as everybody here knows, for a long time; but in the last 10 years it has been revived after a sort of winter where people were not talking about it so much anymore. And it really seems to deliver amazing, wonderful results, precisely because of the combination of two new developments, deep learning and big data. And among the new promises of artificial intelligence there is the promise about prediction, because algorithms now actually promise to reveal in advance what will happen in the future. There's a research area called predictive analytics which is explicitly devoted to this: mining data to discover in advance the structures of the future. Which has a lot of aspects: do you discover structures, what are the structures, what are the people in this field saying? And the promises are actually glittering, and many of them, we have to remember, can actually be fulfilled.
The ability to anticipate future trends, the ability to predict, should help first of all to optimize the use of resources. For example, and that was the first field where people worked on it, targeting advertisements to the people who are or can be interested in certain products or services. Predictive shopping and these kinds of areas have seen a lot of development, but there are also other, more social areas: for example, in many cases, finding out in advance problems or possible fraud in the banking field, or preventing illness. Everybody has heard about precision medicine now, and the idea that with algorithms you could prevent illness; that's the promise that they make. But there are also some other promises, which seem realistic but very suspicious to us: focusing prevention and crime deterrence on the people and groups most at risk, like predictive policing and these kinds of fields. So those are the two aspects of this ambivalent phenomenon: glittering promises, but also worries. The idea that the future can be known in advance is exciting from one point of view, but it also raises great concerns. And the interesting point seems to me that the concerns, the worries that we have with algorithms and prediction, are related not only to the case where the algorithm doesn't work, but also, even more, to the case where the algorithm works. Because on the one hand, one fears that algorithmic prediction can be wrong, that it makes fundamental mistakes and we don't notice that the prediction is mistaken. But on the other hand, correct predictions also raise worries. The idea is that if the guidelines of algorithms are followed, and if they are effective, then, that's the fear, algorithmic prediction might lead to so-called pre-emptive policies: policies that deprive the future of its open possibilities, for all the people involved, for the people targeted by the algorithm, but even for the decision makers. I will go back to that later in my talk. So what I would like to discuss with you today are both of these aspects, and from a slightly different point of view, which has already been anticipated. The idea is that the enthusiasm about the predictive power of algorithms and the concerns about the consequences of this prediction are, I think, both legitimate and both motivated; there are reasons to have enthusiasm and great expectations, and reasons to worry. But I think they are partly misguided, because I think that, for good and for bad, algorithmic prediction is actually very different from the idea of prediction that's most familiar to us. That is an idea, a meaning of prediction, which is relatively recent and established itself in modern society since the 18th century: an idea of prediction which is guided and oriented by the idea of probability and the probability calculus, as Tobi already anticipated. The point is that when the forecasting agent is an algorithm, and not a human being anymore as we are used to, then procedures and criteria are different, and the results and the problems change as well. Algorithmic prediction allows one to do things that would be impossible for human beings, even equipped with the tools of statistics. So that's a big opportunity, but it also raises different, new conceptual problems we should be able to face. That's the general frame. So, to understand what we are talking about: how do algorithms work? The catchword in this field, everybody knows it, everybody talks about it, is big data. And the idea is that big data inaugurate an era of abundance, an abundance of data, therefore big data.
But not only that: many, many data, and also virtually unlimited computing capabilities. Therefore big data have to be connected with algorithms that are able to deal with all these data. And the idea, let me explain a little the narrative in this field, is that algorithms can collect and use all the data about a phenomenon, the so-called statistical universe. In this field they say that big data use all the data in the universe; the motto is "n = all." For this reason the claim is that they don't need to select samples. Usually we have a universe and we sample the universe; algorithms don't sample anything, they use all the data, big data. Therefore, because they use all the data, the claim is that algorithms should be able to provide certain and objective information, free from the subjectivity and arbitrariness of our procedures. Because algorithms, that's the claim, consider all cases, they don't sample, but also, and above all, because they don't need to refer to models or theories to interpret these cases. Algorithms don't use theories, don't use models. The idea is that they just discover: they look at all the data, look at the structure of the data, and discover what they call correlations. Correlation is a big issue in this kind of research, because correlations should reveal the meaning and the consequences of a phenomenon regardless of any theory. A highly quoted sentence, cited everywhere, by Chris Anderson, back in 2008 in Wired: with enough data, the numbers speak for themselves. You don't need theory; you just have to look and see which patterns computers can find in the ocean of data. There's a big ocean, the ocean of big data; you look at what's going on there, you look for correlations, you find the patterns, and they're simply there. You just have to look for them (a minimal sketch of this attitude follows at the end of this passage). So that's why, in this kind of discourse, the results would basically be descriptions; they would be statements, not causal explanations, as we are used to delivering. As people say in the field, there is no need to know why something comes to give a result, only what it is. We move from why to what. Or, another quote from Chris Anderson: in the digital world, correlation supersedes causation. We don't care about the causes; we just want to know what's going on. Of course, this is, as you might have heard in my presentation, extremely controversial and extremely debated; everybody is discussing it. But my point here is slightly different. My point is that most of these things are actually really happening, even if the hype is probably exaggerated. But my point is that even if the technology is extremely advanced, extremely new and amazingly developed, the attitude underlying this discourse about big data is actually not so new. If you look at how people describe what you can do with big data, it's not something completely new. Rather, it seems to me that algorithmic prediction, as it is described by big data theorists, actually resembles something which is not new but very old. It sort of revives a very ancient divinatory attitude: divination as it was practiced in the Middle East, in Mesopotamia, and most of all in Greece, but developed in a most elaborate way in the Chinese world, where divination was a really, really important reference. The idea was that in ancient times the future appeared unknowable to human beings, but not to divinity, not to God. As today, in a sense, the future seems to be unknowable to humans, but not to algorithms.
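To make the "numbers speak for themselves" attitude described above concrete, here is a minimal sketch in Python, with invented synthetic variables standing in for the ocean of data: it formulates no hypothesis and fits no model, it simply scans every pair of variables and ranks the correlations it finds.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n = 1000

# Synthetic stand-in for the "ocean of data": some variables are related,
# others are pure noise. The miner does not know which is which.
temperature = rng.normal(20, 8, n)
ice_cream = 2.0 * temperature + rng.normal(0, 5, n)
shoe_size = rng.normal(42, 3, n)
data = {"temperature": temperature, "ice_cream": ice_cream, "shoe_size": shoe_size}

# Scan every pair of variables and rank by absolute correlation:
# no theory, no model, just patterns in the numbers.
pairs = []
for a, b in combinations(data, 2):
    r = np.corrcoef(data[a], data[b])[0, 1]
    pairs.append((abs(r), a, b, r))

for _, a, b, r in sorted(pairs, reverse=True):
    print(f"{a} ~ {b}: r = {r:+.2f}")
```

A pattern surfaced this way is exactly what the talk calls a correlation: a description of what goes together, not an explanation of why.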
We cannot know the future, but algorithms can. And there are a lot of parallels between the procedures of algorithms and the traditional practices of divination. For example, like algorithms, divinatory procedures were guided by precise techniques, which rigidly provided a number of steps to be taken without any arbitrariness. And in both cases, in divination and in algorithms, there are programs, software programs but also the practical programs of divinatory practices, that, unlike scientific practices, do not want to explain or understand phenomena, just to deal with them. In ancient times you didn't claim to understand phenomena; it would have been sort of heretical, in a sense. God knows how things go; you just want to know how to deal with them, you don't need to understand. And algorithms, we know, don't try to understand; it's not their task. And that made sense, because divinatory societies relied on the assumption that the world they had to face was governed by a cosmic logic and by a basic order. There was an order governing the entire cosmos, but human beings, with our limited capabilities, our limited capacity, are not able to grasp this order. Just as today we cannot understand the procedures of self-learning, deep learning algorithms: there's an order, we cannot grasp it, but still the world is there. And as for algorithms, in divination the goal was not to understand the phenomena, but to get directions that would allow the person asking for divination to coordinate with a superior order. The idea was that the whole universe was articulated in a network of correspondences, exactly the correspondences we find now in algorithmic discourses; the ones that, for example, Michel Foucault described in the first part of Les Mots et les Choses: a world where everything was correlated with something else. And these correspondences could be captured by identifying configurations and patterns, just the patterns that are the real topic now in algorithms, in different phenomena. In the ancient world the idea was, for example, that the walnut kernel has the same shape as the human brain; the human face reproduces the map of the sky; the foliage of trees resembles flying birds. And the idea was that this cannot be by chance: there must be a reason, the pattern must have a meaning that we just have to try to get in touch with. So from correlations, with divinatory techniques, one could draw directions on the decisions to be taken or on future events. Because if you look at the patterns that are in the world, you can learn from them which is the best way to decide or to act. The point is that, if you look at it, in this ancient world view the idea of predicting the future in advance was actually entirely plausible. The assumption underlying the entire construction was that the future existed and was determined; the challenge was just to get to know it. The image of time and the relationship with the future were very different from the ones we have in modern society, even more in our contemporary society. In ancient times, in the divinatory world view, the basic temporal distinction was the distinction between the dimension of God and the dimension of human beings, between Eternitas and Tempus. Eternity, Eternitas, was the dimension of God, of higher entities who knew all events. From the perspective of eternity, you can know the present, the past and the future. But indeed, from this perspective, the very difference between the past and the future basically dissolved.
Because for an all-knowing God, all events were basically contemporary, all accessible, because all times were contemporary from this point of view of eternity. The difference between the past and the future was not a divine distinction, but pertained only to the limited perspective of human beings. We human beings live in Tempus, and in Tempus we have this distinction between the future and the past, in a present that immediately disappears. But the idea was that, in the ontologically real dimension, the unknowable future was no less determined than the past. Only, we human beings cannot access it; we cannot know what will happen in the future, but it's determined, it's already decided. And therefore divination, from this point of view, was completely rational, because divination offered a compass of procedures and techniques that made it possible to glimpse, in a sense, this already determined future. The future existed, it was there, but we could not see it. Divination offered techniques that allowed one to see something, or have some indication, about what was already decided and will happen in the future. So that's a way of seeing time that actually has its own kind of rationality; it's a possibility, it's not irrational. And it's actually very plausible, but it's not our way of seeing time. It's not the image of time of the modern world, and even less, as I said, of our contemporary society. Our concept of time involves a very different way of seeing time in general, and especially the future. And this is something that for us is really important, something we wouldn't want to give up. For us the future is open; the future is an open field which today, in the present, in advance, cannot be known, of course, by humans. We cannot know the future, but not even God could know the future in advance, because the future, basically, for us does not exist. The future doesn't exist in advance because it's produced by our actions and by our present behavior, so the future cannot be known for basic reasons. The future, in this description, is not a given, not a series of things already decided, but rather, as Luhmann and Koselleck described it, a horizon: a horizon that moves away as we try to approach it and therefore can never be reached. We cannot know the future as we cannot reach a horizon; it moves away, so it can basically not be grasped. This doesn't mean that we cannot be prepared for the future; we can do a lot to be prepared for the future, but not because it can be known in advance. What we can know about the future is not the future itself, but only, in a sense, the present image of the future. We can get more and more information about our expectations and about the information on which they are based. Those are data, data that exist and are observable, and that's something you can investigate to gather more detailed and reliable information, to get better prepared for the future. That's how modern prediction has developed. Nobody expects to predict the future in advance, because for basic reasons that's impossible; prediction in modernity takes rather the form of planning. The equivalent of prediction is a way of preparing the present to face, in a controlled way, a future that is and remains uncertain. Because it's open, and an open future cannot be other than uncertain, as we see in the title of the talk.
So, for centuries now, the tool that we use to prepare for the future is not divination anymore, but the calculus of probability, which deals exactly with the uncertainty of the future, with an uncertainty that cannot be overcome. And the calculus doesn't promise to know the future in advance. It doesn't promise to reveal today what will happen tomorrow, but rather to calculate the present lack of knowledge about the future: 40%, 27%, what we don't know about the future, but what we can know, what we can prepare for, and something we should be able to elaborate in order to decide rationally even under uncertainty. Uncertainty is something we cannot escape, but we can deal with it; we can deal with our lack of knowledge. Therefore, when we decide, the decision can be rational and well grounded, even if the future remains, of course, unknowable. This approach, the approach guided by the probability calculus and by the idea of uncertainty, was and still is the basis of the scientific and technological attitude of modernity. It's the basis of scientific research, in a sense, and thereby the basis of the same attitude and developments that now produce the most advanced techniques of artificial intelligence and machine learning. And that's the strange point: these techniques, artificial intelligence and machine learning, actually use statistical tools, statistical tools derived from the probability calculus. But now, as we've seen with predictive analytics, they promise to predict the future. They promise to do something that basically contradicts the assumption of the open, unpredictable future. And that's a strange contradiction that I would like to discuss with you tonight. How does this claim, the claim to predict the future, reconcile with the ontological setting, in a sense, of the modern world, which is still in many cases shared? Or: how are algorithmic prediction and the probabilistic tradition connected and distinguished, if algorithms use statistical tools but promise to do something which is basically not compatible with the probabilistic tradition? They are actually very different, even if they use similar tools. And if you talk to the people in machine learning, for them it's absolutely clear that they do something very different from the people in the same department, in the other room, who still do statistics. And the tools are the same. Because, in a sense, statistics wants to contribute to knowing the world by activating a procedure that is actually very similar to the classical Galilean method of research: you insert past data into a model and then use the model to predict future data; you check if the prediction is accurate, so you check the accuracy of the model, and eventually you correct it. So the way of using the past and testing the future in a statistical procedure is still devoted to a traditional goal, which is explanation: with statistics you want to explain the world using these procedures. For machine learning, on the contrary, the purpose is completely different. The purpose of machine learning is not to understand how the phenomena were produced; it's not to elaborate a model that explains the phenomenon. In many cases, if you work with algorithms, you do not even know if there can be a model, and the machine in any case operates without a model. The goal of algorithmic processing is not truth but, as people in the field explicitly say, just predictive accuracy. You do not need the truth to deliver good predictions.
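A minimal sketch of this contrast, with invented data: the statistical path fits an explicit model whose parameters can be read off as an explanation, while the machine-learning path (a hand-rolled k-nearest-neighbours predictor, standing in here for any model-free learner) is judged purely by its predictive accuracy on held-out data.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 3.0 * x + 4.0 + rng.normal(0, 2, 200)   # a hidden "true" process

# Split the "past" (training data) from the "future" (test data).
x_train, y_train, x_test, y_test = x[:150], y[:150], x[150:], y[150:]

# Statistical attitude: fit an explicit model and read it off as an
# explanation ("y grows by about 3 per unit of x").
A = np.vstack([x_train, np.ones_like(x_train)]).T
slope, intercept = np.linalg.lstsq(A, y_train, rcond=None)[0]
print(f"model: y = {slope:.2f} * x + {intercept:.2f}   (an explanation)")

# Machine-learning attitude: no explicit model of the process, just a
# procedure judged by its results. Here, k-nearest neighbours: answer
# with the average of the k most similar past cases.
def knn_predict(x_new, k=5):
    nearest = np.argsort(np.abs(x_train - x_new))[:k]
    return y_train[nearest].mean()

preds = np.array([knn_predict(v) for v in x_test])
rmse = np.sqrt(np.mean((preds - y_test) ** 2))
print(f"k-NN held-out error (RMSE): {rmse:.2f}   (predictive accuracy)")
```

Nothing in the second path "explains" the phenomenon; it is evaluated, as the talk puts it, only on whether its predictions are accurate.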
And this attitude toward the future reveals the fundamental difference between the probabilistic and the algorithmic approach. Statistics, as we saw, uses samples, based on a limited amount of specifically chosen data. In statistics you don't use the universe, you sample, and you do that in order to explain the statistical universe. In a sense, statistics produces forecasts about the average of the elements or subjects involved. That is, in statistics you produce results that correspond to nothing specific and to no one in particular, but that should help to understand the general phenomenon. So, we know nobody has 1.4 children, but that's how the averages produced by statistics, and very much discussed, sound in many cases. There are a lot of jokes about how statistical results are implausible, but they are very useful for us. Algorithmic procedures do something completely different. They don't sample, as we said; they use all the data, big data, and that's all, but they produce no general results. The average is something general; everybody should be a sort of imperfect copy of the average. In the case of algorithms it's completely different: they use all the data, but the result is not general, it's the opposite. Algorithms claim to indicate what can be expected for a specific subject at a given time, on the basis of the correlations they found in the data. Nothing is general; everything is extremely individual. If we look at this aspect, again we can see that the algorithmic procedure, even in this case, basically reproduces the divinatory model. Because divination, too, did not respond to an abstract interest in the future, but responded to very practical questions. When one asked for a divinatory response, one asked questions like: how should I, a particular individual, not an average me, decide? How should I behave today to be in the most favorable condition tomorrow? What is the best time to start a battle, or to sow? Will my marriage be successful? Very, very focused, personal questions. And that's what the divinatory response allowed one to decide; it was used primarily to make punctual and individual predictions. Divinatory predictions were always individual. And also in predictive analytics, if you look at it, the purpose of the calculation is not to describe anything general, but to give an indication which should be specific and as accurate as possible. A quote from a book on predictive analytics: whereas forecasting, statistical forecasting, estimates the total number of ice cream cones to be purchased next month in Nebraska, predictive analytics tells you which individuals in Nebraska are most likely to be seen with a cone in hand. So, for each of us, what we will do: that's the claim, which can be fascinating and scary at the same time, of course, the claim of this completely different kind of prediction. And this is the main difference between traditional statistics and these new developments in machine learning and predictive analytics. Digital techniques, as I said, abandon the statistical idea of the average man, the average person or human being, of which all elements of the population should be more or less imperfect replicas; we all more or less correspond to the average model. The new frontier of prediction, or the new frontier of customization guided by algorithms, highlights something completely different, in a movement, as they say, from the search for universals, which nobody cares about anymore, to the understanding of variability.
Or, a quote again, now from medical science: we don't want to know just how cancer works, we want to know how your cancer is different from my cancer. Or, in the general sense: individualization trumps universals. No universals, everything individual now. How can this work? Because it actually works; that example is about cancer, and you know that some of the most amazing successes are in predictive medicine. So these techniques actually work, in some cases, not always; we'll get to that. But basically, when it works, it works because algorithms are themselves part of the world in which they operate. They observe the world, they deal with the world, from within, not from the outside by referring to a model. That's the difference: a model is something outside that you put on the data, while the algorithm is inside the data; it works inside the field it describes. And this changes the meaning of prediction. When algorithms make predictions, they don't see in advance an independent, external given. They don't see a future given independently of the algorithm; they don't see a future which is not yet there; that would be impossible. Algorithms, as Dominique Cardon says, manufacture the future with their operations. Therefore algorithms, that's the claim, can anticipate, can predict the future. They cannot predict something which is already there, because it doesn't exist; but, in a sense, algorithms can see the future that will be there as a result of the intervention of the algorithms themselves. Let me give some examples. The predictions made by algorithms, as I said, are individual and contextual; they refer only to the specific item they address: which individual in Nebraska will eat an ice cream in August, or whatever it is. For example, the algorithms used in predictive shopping do not say what consumer trends will be next season, nor which products will increase or lose market share. They don't refer to trends in general; as I said, it's all individual, contextual things. Instead, they anticipate which specific products a specific individual consumer will be willing to buy. And, as we know, in many cases they predict this before the individuals themselves have chosen them, and in many cases before they even know of these products. The person is not even aware of the need, but the algorithm produces that need and then satisfies it, in a sense. And we know how it works. As I said, I might not be aware of a product; this product exists, but the algorithm can identify it, I'm exaggerating the idea a bit, as something compatible with my features, my past choices, and the past choices of the famous "people similar to you." You know how it goes in practice: they have data about me and they compare them with people that, for some reason, the algorithm finds similar to me, on the basis of criteria which are often inscrutable. We don't know how the algorithm decides that, but it gathers all this information, and very often it is right. We don't know how the algorithm reaches its decision, but in many cases it works. The suggestions of the algorithm are no longer the ones we know from Amazon, where you liked this book and they propose a book which is much too similar, so that it's completely useless in a sense. Now the algorithms make predictions which are much more surprising and potentially informative. For example: the user bought a Barbie doll, and the system offers an adventure trip to Morocco, something completely different. And the person didn't even know that this kind of trip exists.
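A minimal sketch of this "people similar to you" logic, with an invented toy purchase matrix (real systems are far more elaborate, but the principle is the same): score the products the target user has not bought by the purchases of the most similar users.

```python
import numpy as np

# Rows = users, columns = products; 1 = bought. All data invented.
purchases = np.array([
    [1, 0, 1, 1, 0],   # user A
    [1, 1, 1, 1, 0],   # user B
    [0, 1, 0, 0, 1],   # user C
    [1, 0, 1, 0, 0],   # target user
])
target = purchases[-1]
others = purchases[:-1]

# "People similar to you": cosine similarity between purchase histories.
def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-9)

sims = np.array([cosine(target, o) for o in others])

# Score each product by the purchases of similar users, weighted by
# similarity; never re-recommend something already owned.
scores = sims @ others.astype(float)
scores[target == 1] = -np.inf
print("recommend product index:", int(np.argmax(scores)))
```

The recommendation is entirely individual: it answers "what will this user want?", not "what will users on average want?", which is exactly the contrast with the statistical average drawn above.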
But apparently, if the prediction of the algorithm is correct, the person buys this trip. And if the prediction is right, it's not, as I said, because the algorithm saw the future in advance; in this case the person didn't even know about this trip, so it wouldn't be possible to see this future in advance, because this future would not exist without the intervention of the algorithm. The algorithm suggests the product to the future buyer, and thereby the algorithm produces the future, and thereby confirms itself. The algorithm is right because it produces the future that confirms its prediction. Or not, because people don't always accept the suggestions of the algorithm; but in that case the algorithm learns. If I accept the suggestion, the algorithm is confirmed: okay, it's right. If I reject it, the algorithm will learn from the experience and in any case make the best use of its resources. And this broad idea of prediction applies not only in this case of predictive shopping, but in all the other cases, and especially in the particularly scary case of crime prevention. There, the prediction, as we know from movies and so on, should allow one to act before an individual at risk begins a criminal career, because the algorithm can profile the people at risk of committing crimes, and you can know it before they act. So the idea, in this construction, is that algorithms can guess right or guess wrong, but often they guess right; and in any case they should always be effective. Not necessarily right, but effective: because even when the anticipations of the algorithm are not realized, and people don't follow the prediction, the algorithm, that's the claim, should offer the best possible prediction given the available data, so that even the failure of a prediction should contribute to improving the future performance of the algorithm. So algorithms are maybe not always right, but they are always effective; that's the claim. And actually, it may sound fascinating or not, but the point is that's not always the case. We have a lot of research showing that this is not necessarily what happens: algorithms are not even always effective. For example, Cathy O'Neil, in a book that maybe some of you have already read because it's very much discussed, Weapons of Math Destruction, shows how in many cases it doesn't work. Algorithms are not effective, not only because they are wrong but precisely because they are right. So the problem is: even when correct, algorithmic predictions can prove ineffective. What Adrian Mackenzie calls the production of prediction affects the effectiveness of the prediction, and this can lead to self-fulfilling prophecies, as in the case where I buy the product because the algorithm proposes it to me; but at the same time there is a negative side, the so-called pre-emptive policies, which limit the future possibilities of all the people involved. And the reason is actually quite basic: however refined your techniques and your tools, you cannot see a future that depends on what you do following the prediction. So, in a sense, about the future they produce, algorithms are and remain blind. And this is the dark side of the performativity of prediction, which actually also reproduces a well-known circularity of divinatory procedures. But while this circularity, in the case of divination, was an advantage, in the case of our use of algorithms it risks becoming a very serious difficulty.
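The circularity can be shown with a small simulation, under invented numbers: a recommender that always shows the item it currently believes is best, and that learns only from the outcomes of its own recommendations, never corrects its underestimate of the better item. It sees nothing but the reality its own intervention produces.

```python
import numpy as np

rng = np.random.default_rng(2)

true_appeal = np.array([0.3, 0.8])   # item 1 is actually much better,
estimate = np.array([0.5, 0.2])      # but it starts out underestimated
shown = np.zeros(2, dtype=int)

for _ in range(1000):
    item = int(np.argmax(estimate))            # always show the current "best"
    accepted = rng.random() < true_appeal[item]
    shown[item] += 1
    # Update the estimate only for the item that was shown: the algorithm
    # learns exclusively from the world it has itself brought about.
    estimate[item] += (accepted - estimate[item]) / shown[item]

print("times shown:", shown)        # item 1 is never shown at all
print("estimates:  ", estimate)     # so its low estimate is never corrected
```

In bandit terms this is a purely greedy policy with no exploration; the point of the sketch is only to show the feedback loop the talk describes, in which a prediction forecloses the observations that could have falsified it.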
Think, for example, for divination, of the case of Oedipus. The example of Oedipus shows in the clearest way that divinatory responses tend to be self-fulfilling. Everything that Oedipus did in order to avoid the pre-announced outcome contributed to its inevitable conclusion: he tries everything he can to avoid it, but in the end he kills his father and he lies with his mother. That's unavoidable. But in the ancient world this made sense, because this inescapability of the prediction confirmed, in the ancient world view, the existence of a controlled and predetermined higher order. It is not up to us to decide the order of the world; it's already decided, the future is already decided, and everything we do, whether we know it or not, will confirm something which is inescapable because it belongs to a higher order. The future already existed in the present; it was already decided, even if we humans don't know it and have to face uncertainty. But we live in a different semantics: we still live with an open future, and in our semantics this circularity results in actual problems, in feedback loops, and very often in a serious inability to learn, which is the main problem of algorithms. Again, Cathy O'Neil says that algorithms are tools for behavioral modification that confirm their findings through the reality they create. Which means that algorithms only see the reality that results from their intervention, and the problem is that they do not learn from what they cannot see, because it has been cancelled by the intervention of the algorithm itself. They cannot see the world without the intervention of the algorithm. I'll give a second example in a moment. The use of algorithms, that would be the problem, produces a kind of second-order blindness that really affects the way we deal with these tools. And that's the reason why the difficulties of algorithmic prediction are actually different from the difficulties you would have with statistical forecasting. The problems of algorithms do not depend on sampling problems, on data shortage, or on the use of wrong or misleading models, as in the case of statistics. Algorithms don't care about that; they don't have these worries. They never have a data shortage: they have big data, that would be the claim. They don't sample rightly or wrongly, because they don't sample at all, and they don't use wrong models because they don't use any models. So the difficulties of algorithms are not the classical difficulties we know; they are different, and not necessarily less worrisome. They depend on specific problems of machine learning, and in particular on the way algorithms address, and this is our topic here, the relationship between the future, the past and the present. Algorithms, as everybody who has seen how machine learning works knows, are trained by maximizing their performance on a set of so-called training data. So they learn from these data, which come from the past and correspond to the available experience. But the effectiveness of algorithms depends on something different: it depends on their ability to perform well on different, previously unseen "real data," which are the object of the prediction of the algorithm. And the real problem of prediction is that the training data, which the algorithms learn very well, and the real data are as different as the past is different from the future.
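This gap between training data and new data can be shown numerically; a minimal sketch with invented data: a flexible model fitted to a handful of past observations reproduces them almost perfectly, and does much worse on fresh data from the same process. This failure is the "overfitting" named just below.

```python
import numpy as np

rng = np.random.default_rng(3)

def process(x, rng):
    # The hidden process generating both past and future observations.
    return np.sin(2 * np.pi * x) + rng.normal(0, 0.2, x.size)

x_train = np.linspace(0, 1, 10)          # the "past": training data
y_train = process(x_train, rng)
x_test = rng.uniform(0, 1, 100)          # the "future": real data
y_test = process(x_test, rng)

for degree in (3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.sqrt(np.mean((np.polyval(coeffs, x_train) - y_train) ** 2))
    test_err = np.sqrt(np.mean((np.polyval(coeffs, x_test) - y_test) ** 2))
    # The degree-9 curve threads through every training point (tiny train
    # error) but swings wildly between them (large test error).
    print(f"degree {degree}: train {train_err:.3f}, test {test_err:.3f}")
```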
But algorithms only know the training data, and the difference between the two sets, training data and real data, gives rise to a number of difficulties which we have to face; that's the basic, most serious problem of algorithmic prediction. For example, algorithms often tend to learn the training examples so well that they become blind to every other item. A real example I got from the literature in this field: an algorithm learns so well to interact with the right-handed users it was trained on that it doesn't recognize a left-handed person as a possible user; a typical case of an algorithmic problem. And the problem in this case is called overfitting, as in the sketch above. Overfitting has been defined as the bugbear of machine learning: the big, big problem that really everybody has to face is always overfitting, some version of overfitting. Closing: what can we say about that? How can this condition, that the past is different from the future you want to predict, and that algorithms are blinded by their own effectiveness, be addressed? What can we say on a theoretical level about these problems? I don't have the whole solution, but of course we can say something. First, I think part of the problem is that the learning algorithms, self-learning or deep learning algorithms, do learn, of course they learn a lot, but they do not learn to learn. And this learning to learn is often the fundamental component of empirical learning and the basis for our human ability to generalize. And this produces very concrete problems. For example, take predictive policing. I use an argument by Bernard Harcourt about the concrete cases of using predictive policing to profile persons in order to reduce crime, especially in Chicago; in the United States these programs are already quite widely used. And Harcourt argues that if profiled persons, the persons the algorithm finds, are less responsive to policy changes than non-profiled persons, then concentrating crime prevention measures on the people at risk identified by algorithms can be counterproductive. The point is that the algorithm profiles persons, rightly, probably, as at risk of committing crimes. But in many concrete situations, think about Chicago, about this underprivileged portion of the population, the algorithm profiles persons who, even if they are at risk, even if you take measures preventing crime, cannot change their behavior, because often they have no choice: they are so badly off that they commit crimes anyway, even if you try to prevent the crimes. While at the same time, while you are following the indications of the algorithm, other areas of the population, where possibly surveillance and prevention could be effective, remain uncovered. And therefore, it has been shown, the algorithms were actually right in profiling these persons; but if you use the algorithms, overall crime increases, as was especially the case in Chicago. Crime increases because surveillance has moved elsewhere, and the mistake in this case is not learning from experience, not learning to learn. The mistake is not considering that the experience you have with your measures could lead to something more than strengthening or weakening confidence in the starting hypothesis. For those who know a little probability theory: all these algorithms are basically Bayesian, so there is a prior hypothesis, and experience is used to confirm or disconfirm the prior hypothesis.
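For those who want the Bayesian mechanics spelled out, a minimal sketch with invented numbers: a Beta prior over a risk rate is updated by observed outcomes. Note that the machinery can only strengthen or weaken the hypothesis it was given; nothing in it can propose a different hypothesis, which is exactly the limit being discussed here.

```python
# Beta-Bernoulli updating: belief about an unknown rate p is kept as a
# Beta(a, b) distribution and updated by experience, one outcome at a time.
a, b = 2.0, 8.0                     # prior: p is believed to be low (~0.2)
outcomes = [1, 0, 1, 1, 0, 1]       # invented observations: 1 = event occurred

for x in outcomes:
    a += x                          # evidence for the hypothesis
    b += 1 - x                      # evidence against it

# The posterior mean drifts toward the observed frequency, but the question
# being asked ("how likely is this event?") never changes.
print(f"posterior mean of p: {a / (a + b):.2f}")
```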
But that's all algorithms can do; they cannot actually discover different hypotheses by using experience. That's what Cathy O'Neil again criticizes. She criticizes the pre-emptive effect of predictive policing, claiming that the problem is that the people using these measures set the target in advance: they say, I try to eradicate crime. But thereby they become blind to other possible things they could learn. For example, they could use the information gathered by the policing measures to try to improve the relationships in the neighborhood; they could shape the target differently out of the experience they have with their predictive measures. And that, the algorithm is not able to do. So, in a sense, what we should be able to introduce in programming algorithms is something that we know very well: the idea that the experience of the past does not necessarily lead to expecting in the future the same or similar things, right or wrong. That was shown very clearly, for example, by Reinhart Koselleck in his historical studies, his wonderful studies of the modern sense of time and the idea of the open future. He actually shows that our modern sense of time goes in a different direction; it is the opposite of expecting the same future to happen. We could say that the more you know the past, the more you can expect the future to be different. And that's what happened in modern society: the study of the past opened the future. Historiography developed together with the idea of the openness of the future. A complex knowledge of the past can actually lead one to expect unpredictable aspects of the future. And this is something you can expect, something which actually shapes our relationship with the world and with the future: the idea that the past prepares us to anticipate surprises, not only to anticipate things that we already know. That's also the limit of the comparison, of the parallelism, between divination and algorithms. Because digital prediction works in an incomparably more complex, reactive and unstable social environment than divination did. In divinatory semantics and in the ancient world view, the idea of predicting a predetermined future could be plausible. But in modern society, and even more in our digital society, the intensity of communication is such that any prediction, even more so a correct prediction, is anticipated, commented on and reworked. And this produces new, unpredictable complexity: the prediction makes the world it tries to predict more complex. And this complexity is something that prediction cannot actually know in advance. Therefore I think that, also to overcome the specific blindness of smart algorithms, we need theoretical work. The work that people in machine learning are doing is wonderful, and we can learn a lot from it as sociologists or philosophers. But a kind of theoretical reflection is really needed to deal with this development and to take into account the complex range of possible consequences. Thank you very much.

Thank you, thanks a lot, Elena, for this wonderful and insightful talk about the difference between what algorithms do and what probabilistic prediction used to do, so to speak. In the next 30 minutes, I think, we will mainly talk about the downsides of algorithmic prediction, or what does not work as well as it maybe should; most of it you already mentioned in your talk. You used a very big term: truth, right?
You said algorithmic prediction is not about truth; it's more about efficiency. Or, if we switch a little bit into the terminology of systems theory, you might say it's not about intelligence, but about communication, right? So algorithms are not like people, but they move within systems, so to speak. So maybe the whole concept that sort of frames this evening, artificial intelligence, is a term we have to revise. Should we rather talk about AC, and I don't mean air conditioning, like cooling down this discourse, but artificial communication? Is it time for a new term there?

Well, definitely it's time for a new term. Of course, I would be very happy to talk about artificial communication, but that's because I'm a sociologist. I have proposed something in this direction, and I find that people listen to it; it's not meaningless. But the way I interpret communication, I'm a theoretician, is the meaning of Niklas Luhmann, which even in sociology, even in communication studies, is not mainstream. So you wouldn't expect people in machine learning to accept something that not even sociologists accept. But that doesn't exhaust the meaning of your question, because I would be enthusiastic to talk about communication, but that's the second step. The first step, which almost everyone in the field actually now accepts, is that artificial intelligence, as a term, doesn't work. Someone proposed to move to machine learning, but that's not as effective, and not as sexy as an idea either. So it's not easy to find a different metaphor. But actually, if you look, there's a strange, actually fascinating, but also strange thing. We talk of a revival of artificial intelligence in the last 10 years, and not only of intelligence, but also of a clear analogy to the human way of elaborating information: in machine learning, the models that are most effective now are neural networks. So not only the idea of consciousness, of thought, but also, like, the hardware, the brain, should be the model for it. A really clear analogy to human processing. But this happens at the same time as the procedures of the machines, and people in the field also say this, are becoming more and more distant from human processing. For example, I don't want to make it too long, but I don't know if you use Google Translate or translation programs; I do use them a lot. And now they are actually very good. They still make funny mistakes, and it's nice to use them, but they're actually very useful; they were not 10 years ago. And everybody says translation programs became effective, and could get these good results, when the programmers completely abandoned the idea of producing machines that translate like us. I mean, the algorithms translating, say, English into Chinese: they don't know English, they don't know Chinese. And the programmers, they may know English, but they don't know Chinese either, and still they can make a machine that translates into Chinese, or whatever. So neither the programmers nor the machine know the language. And the way they produce these results, one could put it in a formula; in all these fields, also algorithms producing texts, algorithms communicating, and that's not only my observation, everybody agrees: the machines can produce something that resembles very closely the products of human intelligence, and they did it when they abandoned the idea of reproducing the way human intelligence works.
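A toy sketch of this, with an invented four-sentence parallel corpus (real systems use statistical alignment models or neural networks, vastly more refined, but the spirit is similar): the program knows nothing about either language, it only counts which words co-occur across sentence pairs, and it still produces a passable translation.

```python
from collections import Counter, defaultdict

# Invented toy parallel corpus. The "translator" knows neither language:
# it only counts which words co-occur across aligned sentence pairs.
corpus = [
    ("the cat sleeps", "die katze schläft"),
    ("the dog eats",   "der hund frisst"),
    ("a cat eats",     "eine katze frisst"),
    ("a dog sleeps",   "ein hund schläft"),
]

cooc = defaultdict(Counter)
for en, de in corpus:
    for e in en.split():
        for d in de.split():
            cooc[e][d] += 1

def translate(sentence):
    # For each source word, emit the target word it co-occurred with most.
    # Ties (e.g. "the", genuinely ambiguous between "die"/"der"/...) are
    # broken arbitrarily by first occurrence.
    return " ".join(cooc[w].most_common(1)[0][0] for w in sentence.split())

print(translate("the cat eats"))   # -> "die katze frisst", with no grammar
```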
Very often I quote this idea from Blumenberg, that human beings learned to fly when they abandoned the idea that a flying machine should flap its wings like a bird. We could make airplanes when we abandoned the idea that they should resemble birds.

So algorithms improve the less they know, so to speak. Yeah. That's interesting, because I think there are other fields where algorithmic prediction, or let's call it by its name, you mentioned it too, collaborative filtering, which Amazon has done for more than 20 years, has gotten really popular, on platforms like Spotify of course. Discover Weekly is one of the most popular playlists in all of the world now; in German I think it's called Dein Mix der Woche. It's entirely based on algorithms, and yet I think it produces an awful lot of similarity. It has a tendency toward mollifying sounds, toward elevator sounds. I do listen to a lot of music that might go down as elevator music, electronic music, some jazz music, but not only; and everything I listen to there, and I check it out every week, resembles, to me, music I already know. I don't know it by name, but it produces a lot of similarity, and it completely ignores what a good radio DJ used to do. He used to surprise us; he used to produce things and effects and emotions that are based on what creativity science calls serendipity, which is totally crucial to innovation. And I think that is not only an aesthetic problem, because I'm a man of the arts; it might also be an economic problem in the end. What would you say to that? You were talking about similarity too, as a concept, and maybe as one of the prime flaws of algorithmic prediction.

Yes, flaws or not, because, as you said, that's a good example of a case where I think there are many aspects. But this worry shouldn't cancel the amazement about unbelievable results, because the results are really wonderful; even the ones we don't like are really amazing, that the machine can do that. So they are really, really extremely effective. But first I have to say something about what could be a misunderstanding. I think machines don't have to be intelligent, and they work better the less they try to reproduce intelligence, okay; but it doesn't mean there's no intelligence at work. For example, the example you mentioned, like many others, works so well, we could say, because the machines became able, in the last years, in a sense, to parasitically use the intelligence of the users.
Well, crowdsourcing is a good example of that. But think, for example, about the first example: Google. It was much more effective than the other search engines, like Yahoo and AltaVista, I can remember that, because Google learned to switch the idea around. The machine was not intelligent itself, but, you know, everybody knows how PageRank works; well, how it works now, nobody knows exactly, but the basic logic is that PageRank looks at links, so the results that go higher in your ranking are the ones that were more linked, or backlinked, by the people out there. So the algorithm is not intelligent, but it learns from what the people do in order to give us the best results (a minimal sketch of this link-counting logic follows below). The previous search engines were based on semantic trees: the machine in a sense understood what was going on and gave the results out of this meaning of things. And Google gave that up completely: the machine had no semantic trees at all and learned everything from the behavior of the users. That's an example of what we were saying: the machine is not intelligent, it's very stupid, but it's extremely effective, because the machine learns, and that's the same in many other programs, learns in a sense from the behavior of the users what the relevant points are. So that's why you need big data: you need something that gives the machine, in the web, clues about differences which are produced by human beings.
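A minimal sketch of that link-based logic, on an invented four-page graph (the real Google algorithm is of course vastly more elaborate and constantly tuned): importance flows along the links that human authors created, and the most linked-to page floats to the top without the machine understanding anything.

```python
import numpy as np

# links[i] lists the pages that page i links to; all data invented.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n = len(links)
d = 0.85                          # the damping factor of the original paper

rank = np.full(n, 1.0 / n)
for _ in range(50):               # power iteration until near convergence
    new = np.full(n, (1 - d) / n)
    for page, outgoing in links.items():
        for target in outgoing:
            new[target] += d * rank[page] / len(outgoing)
    rank = new

print(rank.round(3))              # page 2, the most linked-to, ranks highest
```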
There is always something happening in terms of resistance, what some people call obfuscation on the net: people 'like' things they do not actually like, they pretend to shop for something they don't want, they 'like' the AfD on Facebook here in Germany when they actually don't; they play with the machine. I'm not sure it's a mass phenomenon yet, but it is an option people have: obfuscating their traces, obfuscating their profile, preventing the big machines from knowing more about them than their partners do. How do algorithms cope with that, with the subject, which systems theory is admittedly not that much interested in? There is still the possibility of refusing to leave true traces.

Well, systems theory is not specifically interested in the subject. Of course the subject is the basic reference for everyone, but it is not the object of sociology. The idea is that if you want to recognize the primary role of individuals, you place them outside society; it doesn't mean they are not relevant, they are extremely relevant, so relevant that society cannot determine them. But that is a digression. Obfuscation is a great topic, first because the problems people react to are not produced by the algorithms alone. Obfuscation is particularly used in the privacy debate because, as you know, the real problem is that algorithms can gather information not only out of information but out of data that has no meaning for anyone. The problem is not only that the algorithm knows what I wrote on the web, whether it makes sense or not; the real problem is that the algorithm knows a lot about me that I don't even know myself, because it derives it from my GPS localization and similar traces. That is the real problem, the 'data shadow', as people say, that everybody drags along on the web. And obfuscation is interesting because the idea is so clever: in order to escape being profiled on the web, you don't try to hide your traces, you produce more of them. That's how obfuscation works. The reasoning goes: I don't want to be profiled by the machine, but I know it makes no sense to try to hide my traces; there is no real anonymity anyway. So the strategy shifts completely and reacts to the logic of the machine, which is not human logic. The way to escape is not erasing data but producing more: every time you do something on the web that is significant for you, meaningful, you also produce thousands of data points that are completely irrelevant, and then no one can distinguish the real data from the noise. I think it is a very clever strategy, though of course not so easy to realize; those are its limits. But it shows that you have to think about algorithms in a different way.
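As a toy illustration of that 'produce more, not less' strategy, in the spirit of tools like TrackMeNot, which hides real search queries in a stream of machine-generated decoys: the word list and the send_query placeholder below are invented for illustration, not any real tool's code.

```python
import random
import time

# Hypothetical decoy vocabulary; a real obfuscation tool draws from
# news feeds or word lists so the noise looks plausible.
DECOY_TERMS = ["weather berlin", "pasta recipe", "used bikes",
               "train timetable", "battery life", "garden tools"]

def send_query(q):
    # Placeholder: a real tool would issue an actual search request here.
    print(f"searching: {q}")

def obfuscated_search(real_query, noise_per_query=5):
    """Don't hide the real query; bury it among random decoys so the
    profiler cannot tell signal from noise."""
    queries = random.sample(DECOY_TERMS, noise_per_query) + [real_query]
    random.shuffle(queries)
    for q in queries:
        send_query(q)
        time.sleep(random.uniform(0.5, 2.0))  # human-looking pacing

obfuscated_search("symptoms I would rather keep private")
```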
You made a very nice, immediately sensible distinction, an opposition rather, between probabilistic prediction and algorithmic divination, so to speak. You said probabilistic models aim at the general average, whereas algorithmic prediction wants to know about the individual. It wants to know about your cancer, not about cancer; whether you are going to eat ice cream, not how many people are going to eat ice cream tomorrow in Nebraska. The data, as Chris Anderson's famous quote goes, speak for themselves. Yet we know from certain cases that there are, if not theories, then certainly biases inscribed into the machines, into the algorithms. One example: white males have been more successful in certain professional fields in the past, so the algorithm concludes they will probably be more successful in the future. How are we going to fix that bias, or bug, as one might want to call it?

That is a big practical problem. There is all this discussion: algorithms are completely biased, not neutral at all; even if they are not intelligent and do not think, they are not neutral. Everybody has heard of the example of that chatbot produced by Microsoft about a year ago, which was cancelled after about a day because it produced horribly sexist and racist results. The programmers were all liberals, all Democrats; they were really horrified by what their algorithm produced. So algorithms are biased, and not because the designers have bad intentions: even if the design is not biased, the algorithm is unavoidably biased, because, referring to what we said before, the algorithm is not intelligent; it becomes apparently intelligent by using the data it gets, by using the intelligence of the users. And we are biased. The fascinating aspect is that the web is biased in its own strange way: some discourses are followed much more than others, hate speech is more successful on the web, so the discourses on the web are probably even more biased than our discourses outside digital space. But that is the data the algorithms have; how would you expect them not to be biased? Of course that doesn't mean we can do nothing about it. There are people trying to fix it, and there is a lot you can do: Google, for example, can filter pornographic content very well; you can have filters on the web. But the fact that algorithms are biased is, I think, something you have to accept as a given, and then try to find ways to deal with it, because we are biased.

But if I'm informed correctly, since you mentioned Google: that is an algorithm that is changed pretty much on a daily basis, and of course not by one engineer but by a myriad of engineers. So can you actually change those things, or would you say the algorithms are pretty much a mirror of what is going on on the web? Can you interfere as an engineer?

The algorithm at Google is changed every day, and it is also kept confidential, because of search engine optimisation, which everybody tries to do; it is a cat-and-mouse game, and that is unavoidable. But as far as I know, the basic logic of Google, that the algorithm learns from the behaviour of people, from what is going on on the web, is not going to be changed. Of course you can do a lot to avoid the worst results, but all of it stays within this logic. You cannot teach the algorithm to discriminate by understanding; you can teach it to discriminate some things, but only the things you told it. That is the problem: these things can work, but the algorithm learns to do only what you decided it has to learn, and it learns that very well. Everything else it has to deal with by itself, and then, as in the case of discovering faces on the web, it produces a lot of meaningful things and a lot of garbage as well.

Another downside might be put into a formula with the notion of time, which I have been talking about a lot this evening: the assumption that algorithms do not like social change, or that it is hard for them to take into account that social change might be happening. You were talking about something like a retro future: the present future is a past future, the future is what algorithms thought was going to be the future. So we are trapped, at least in some sort of vicious circle, which is quite heavily at work in the artistic field, where we have a lot of retro phenomena, and in politics, where social retro phenomena dominate: wishes to go back to a past that probably never existed, which would be one way to define retro mania. So are algorithms another driving factor of the retro mania some of us feel in the social fields?

Well, as Jeanette mentioned, I worked on fashion, and in fashion, you know, vintage is a big phenomenon. The interesting thing is that fashion always has to be new; the meaning of fashion is to be new. And vintage is never simply old: if you have always been wearing these things, it is not vintage; vintage is rediscovering the old as something new again. That is something we humans do, but the algorithm cannot really deal with it. The problem of algorithms is something even more basic, as you said: they predict the future as a sort of projection of the past. They can do that very well, but basically not much more than that, for now. If you talk to people in machine learning, though, they are aware of this, and I am confident about that. Take the problem of overfitting, which I mentioned in my talk: if the algorithm learns very well, if it is very effective at learning from the examples you gave it, it becomes completely stupid for everything that is different; it is, so to say, extremely biased. There are distinctions circulating among programmers about how to deal with this, the difference between bias and variance: an algorithm that has learned its examples extremely well is, in this sense, extremely biased toward its training data, so it does not generalize; but if it has only variance, that means it forgets everything every time and starts fresh every morning, so it is not biased at all, but it is completely stupid. That is not what you want either.
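A standard way to see the overfitting trade-off she describes is to fit polynomials of increasing degree to noisy data and compare the error on the training points with the error on points held back. This is a generic textbook demonstration with invented data, not anything specific to the systems under discussion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Noisy observations of a simple underlying curve.
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, x.size)

# Hold back every third point to test on data the model never saw.
test = np.arange(x.size) % 3 == 0
x_tr, y_tr, x_te, y_te = x[~test], y[~test], x[test], y[test]

for degree in (1, 3, 15):
    coeffs = np.polyfit(x_tr, y_tr, degree)        # fit on training data only
    mse_tr = np.mean((np.polyval(coeffs, x_tr) - y_tr) ** 2)
    mse_te = np.mean((np.polyval(coeffs, x_te) - y_te) ** 2)
    print(f"degree {degree:2d}: train error {mse_tr:.3f}, test error {mse_te:.3f}")

# Typical outcome: the degree-1 line is too rigid and misses the curve,
# the degree-15 polynomial memorizes the training noise (tiny train error,
# much larger test error: it is, in the speaker's sense, completely biased
# toward its examples), and degree 3 balances the two.
```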
But they are aware of that. For example, one of the discussions going on in the field is how to find a trade-off between exploitation and exploration: exploitation uses the data you have in order to know them as well as possible, while you still remain able to explore something different, because the more you become an expert, the more you are closed. Human beings deal with that too, by the way, and the field is working on it.

Is there a correspondence, so to speak, to what you talked about in the context of pre-emptive policing? The backward-looking bias we just elaborated on would then correspond to the criminal who has no choice anyway but to commit the crime the algorithm chose for him, so to speak; the prediction ignores the unstable, the people on the brink, the people in the twilight. Is that something an algorithm is able to learn? And a more ethical question: do we want that? Do we want the perfect algorithm that actually predicts, or actually produces, people's futures?

Well, I don't know if it is feasible, but the aim would be an algorithm that is not more perfect but more flexible. The problem with those predictive policing algorithms was not that they were wrong; the problem was that they were right. The people the algorithm profiled as people at risk actually were people at risk, and we would not have found them otherwise; the algorithm does not have a lot of fantasy. The problem is that they were successful in what they did but not effective, because of something the algorithm could not know while it was working, since it is produced as a consequence of the algorithm itself. For example, you focus prevention on these people at risk, and they do not change very much, because they cannot; but at the same time you do not control other people, who would have changed, because they are not so badly off, and so crime increases. So I think the direction people are working in is to make the setting of the algorithm more flexible. In Bayesian terms, you have a prior and you learn whether the prior is confirmed or disconfirmed, and you go on in a Bayesian way; but the point would be to integrate that with an ability of the algorithm to learn from what happens as a consequence of the algorithm itself: not only whether the profiled people are actually possible criminals, but what happens in the rest of the world, which you could not know before using the algorithm.
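Here is a minimal sketch of the Bayesian mechanics being pointed to: a Beta prior over a risk rate, updated as evidence confirms or disconfirms it. The figures are invented, and the closing comment marks where, on her argument, real predictive systems run into trouble.

```python
# Beta-Bernoulli updating: a prior belief about a rate, revised as
# evidence confirms or disconfirms it. All figures are invented.

alpha, beta = 2.0, 8.0   # prior: roughly "we expect about a 20% rate"

def update(alpha, beta, outcomes):
    """Each confirming outcome (1) raises the estimated rate,
    each disconfirming outcome (0) lowers it."""
    for o in outcomes:
        alpha += o
        beta += 1 - o
    return alpha, beta

observed = [1, 0, 0, 1, 0, 0, 0, 1]   # outcomes among the profiled cases
alpha, beta = update(alpha, beta, observed)
print(f"posterior mean rate: {alpha / (alpha + beta):.2f}")

# The difficulty raised in the discussion is not in this loop: once the
# prediction triggers an intervention, the observed outcomes are no longer
# drawn from the world the prior described, so a more flexible algorithm
# would also have to model the consequences of its own use.
```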
So it is one thing to ask whether algorithms may be able, in the future, to lose their bias, so to speak, to become more flexible, to allow for more openness, which used to be an integral part of our notion of the future. Those are big questions. But there is also another, maybe totally opposed way of confronting the problem, just in case that should not happen, which would be the ascetic approach, so to speak: not knowing about the future, not letting the algorithm predict or produce what you are going to shop for. Is that going to be an option, or do we just have to rely on the engineers to produce better algorithms that do not have the flaws I have just been talking about?

I don't know, but I think one way does not preclude the other. For example, in the legal field there is a lot of talk about actually prohibiting things. I am not an expert on that, but I think it must be done; we do it in other cases, so in the case of algorithms, why not? Some things will rightly be forbidden with regard to algorithms; the real point is to find out which ones, and how. Take the right to be forgotten: a great idea with, well, not so much bad results; it simply does not work, because the European norms were guided by absolutely right principles, but the world of algorithms is so complex. Let me say just one thing: the first result of claiming that the web should forget you is, of course, that you are remembered. That is the paradox of forgetting: it is difficult to forget on purpose; if you try to forget something in a focused way, it is like remembering to forget, and the result is the opposite. On the web it is even more so, because there you take an active part in forgetting.

If you cannot renounce forgetting, or even the notion of forgetting: that is what I mean by the ascetic approach, that for certain things you do not take the algorithm into account at all, you just leave it out of the equation. Quite an ascetic approach, as I said; I am not sure it is realistic or even desirable, it is just a notion. I was also not sure whether algorithms have a different stance on privacy than, well, the rest of the world. Where do you see the European perspective, which we always like to stress here? Is Europe going to have any say in the future of algorithm engineering?

Well, we are discussing it, and I have the impression I already see something. Of course there is already a difference in attitude, as we said: Europeans lean more towards privacy and data protection, whereas the United States gives more priority to free speech; not to mention China, which is a completely different world, though not that they don't have worries in dealing with these problems. But I have the impression that now, with this new development of machine learning we have been discussing, there is a widespread idea that more theory is needed: theory in the sense of general theory, which is, I think, still more a European tradition, and something we should be proud of. Among many examples: everybody now talks about fake news, and it is a huge and interesting problem, but the debate, not only in Europe, is shifting in the direction you have been mentioning. These news items are fake, fine, but then what is real and what is true in this field? Everybody working in sociology and communication theory can describe what happens in the media, where content is never simply true in that sense: it is controlled, but not because it is the truth. And on fakeness we also have a long tradition of theoretical discussion, because we all know the difference between fiction and lie: a lie is actually not true, whereas fiction is not true but does not lie. The debate about fakes brings all these distinctions, on which we have a lot of tradition, into a very practical field, and that is something we in Europe are much better equipped to deal with.

Thank you for the first 30 minutes of our conversation, Elena. Now it is your turn, the audience, and I would like to start right away. Where are the microphones? They are circulating already; there is somebody just about mid-room to my right. Right there, please.

Thank you so much for your exciting talk. I'm Isabella Hermann. Among other things I am
doing research on science fiction and world politics, which is quite interesting, because now we are arriving in that genre, right? I would be interested in your thoughts on AI and democracy, because there are people saying, and this is also somehow the spirit of Silicon Valley, that with enough data and AI you could optimize democracy in the sense of the well-being of society. That contradicts our view of democracy, which is more about compromise and about protecting minorities, not this utilitarian view. I would be interested if you could share your thoughts on this issue. Thank you.

You are mentioning very important points, and also, implicitly if I am not mistaken, a sort of confusion in the field: when they say they are defending democracy, it is not our idea of democracy. You are absolutely right; well-being is not democracy, and efficiency is not freedom. But the debate about democracy also goes in another direction, which includes the United States, and which is not so much about well-being: for example, the debate about the so-called filter bubble. If the use of algorithms in public communication spreads, and the algorithms work well, the consequence is that communication gets personalized, more and more individual. The result would be that everybody can live in a personalized world which is extremely isolated. If the algorithms are effective, they give me what I really want and shape my media world, my digital world, according to my perspective. Let's assume it works; but then I am alone in my personalized world, with no contact with the similarly personalized worlds of other people. The meaning of democracy would really change.

We have Shana Tovman, please.

You said that when algorithms are successful they partly produce the future they predict, and you used the examples of shopping and predictive policing. I have been thinking of another example that raises the question of how you would explain it when algorithms are not successful: micro-targeting, this technique of identifying voters that campaigners engaged with in 2004, and then 2008 and 2012. Observers said that Obama won his elections because of these new techniques, and we were all amazed at the predictability of what voters would actually do: would they vote, yes or no, and for whom. These techniques seemed brilliant at predicting the future, and in the most recent election they were suddenly not successful anymore. So if you say algorithms produce the future they predict: were they unable to produce the future they predicted this last time, or would you use another explanation?

That is a great question, a great topic. Well, what Obama actually used in 2008 was micro-targeting that did not basically rely on algorithms; the algorithms were used in the 2016 American election. One candidate who apparently did use micro-targeting with success was Ted Cruz, who did not win the primaries, but who was ahead among the Republicans in the beginning and stayed up for a while, much longer than people expected, apparently because of a very focused use of micro-targeting. So apparently these techniques are still useful. But, as we said, even when they are right they do not determine the outcome, I think, because, as I
said, with the basic background we have seen, social reality is much too complex for the data we now have about the use of micro-targeting. Maybe in some cases it was effective, with positive results, or people reacted to it and produced something the algorithm could not predict. But the recent political debate was also affected by something that was not there in 2008: there are results by Philip Howard and his people in Oxford about the use of chatbots. They did not exist in 2008, and apparently quite a lot of the political debate in the United States was produced by political bots. We still do not know how effective they were, but apparently both in Brexit and in the Trump election they were definitely active. So that is something which overlapped.

I think we are going to switch to Twitter for a second and see what came in under the hashtag 'digital society'. My colleague needs a microphone in the first row, please.

Okay, here is the first question that came up in the discussion on Facebook and Twitter: is there a link between the principles presented relating to prediction and probabilities and the principles of quantum mechanics and the statistical viewpoint? Another person was interested to know how our personal social media bubbles are going to develop in the future if algorithms do not learn to learn. And a third question: is artificial intelligence always goal-oriented? If so, would it have nothing to do with algorithmic prediction, or are algorithms also related to artificial intelligence?

Those are three huge questions; I am sure we can combine them, but I am ashamed to say that some can be answered very quickly. About prediction and quantum mechanics: I don't know. There is always talk about quantum models and quantum computers in the field, but how this affects predictive algorithms is something other people could say; I cannot, sorry.

The second question was about social media bubbles and how they are going to develop if algorithms do not learn to learn.

I partly answered that when we discussed the filter bubble. The filter bubbles are there; the platforms recognize the problem, and now, as everybody has heard, Facebook is trying to use algorithms also to counter the filter bubble effect: your news feed should not only reproduce what you did before, but introduce some paradoxical, planned serendipity into the way the algorithm works. So there is apparently some awareness of it; whether it will work or not, we don't know.
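Mechanically, 'planned serendipity' could look like the toy re-ranker below, which reserves a fixed share of feed slots for items outside the user's predicted taste. This is a generic illustration with invented scores, not Facebook's actual mechanism.

```python
import random

# Invented feed items with a personalization score in [0, 1]:
# how strongly the model predicts this user will like each item.
items = {"local news": 0.91, "jazz clip": 0.88, "cat video": 0.85,
         "opera recording": 0.30, "chess puzzle": 0.22, "farming report": 0.10}

def build_feed(items, size=4, explore_share=0.25, seed=None):
    """Fill most slots with top-scored items, but reserve a share of
    the feed for randomly drawn low-scored ones: planned serendipity."""
    rng = random.Random(seed)
    ranked = sorted(items, key=items.get, reverse=True)
    n_explore = max(1, int(size * explore_share))
    feed = ranked[:size - n_explore]          # exploit the predicted taste
    pool = ranked[size - n_explore:]          # everything outside the bubble
    feed += rng.sample(pool, n_explore)       # inject the planned surprises
    rng.shuffle(feed)
    return feed

print(build_feed(items, seed=42))
```

The design question the discussion points at is exactly the explore_share parameter: too low and the bubble persists, too high and the feed stops feeling personal.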
The third question was whether artificial intelligence is always goal-oriented.

The question had more context, but no, of course not everything is goal-oriented: we were just discussing serendipity and the ability to learn from randomness, which is usually not goal-oriented. If the question refers to the fact that algorithms tend to be narrowly oriented, then yes: as I said, algorithms learn focused tasks and are, up to now, bad at generalizing to other tasks, so in that sense there is a lack of flexibility.

There is the gentleman to my left. Right there, you go.

Hi, thanks for the interesting discussion. I work in machine learning as a researcher, and I am quite curious about some of the points you made. The first was regarding overfitting: typically, at least for practitioners, you keep a training set and a test set, and for the issues you mentioned also a holdout set, so you know the performance on data you haven't seen during training; that typically protects you against overfitting. Also, in predictive analytics no rigorous practitioner would ever say 'this is going to happen'; it is always a statistical or probabilistic statement, predicting an ice cream cone with 90% probability. That being said, I was really not sure about the point you made about the potential drawbacks of predictive analytics, because traditionally it is common practice not to overfit, and secondly you are supposed to be careful not to make any concrete claims about what the future is going to be, because you can't: assuming the future data is drawn from a distribution similar to the past data, you make statistical claims, but never anything as solid as 'this is going to happen'.

Very good questions, on both points. About overfitting: you are perfectly right, people are aware of it and try to deal with it. My point was not that, but that the problems and the general attitude of machine learners are different from the ones you have in statistics, because overfitting is not the first problem you have if you do probabilistic calculus, but it is the main problem you have if you do machine learning. Of course people are very well aware of it and try to cope with it, but with partial solutions; there is no solution anywhere that simply cancels the problem. Overfitting is, as I said, the bugbear of machine learning: always a problem that people face and deal with but cannot eliminate. About your other point: people indeed do not claim to predict with certainty but with a certain probability; they do not claim to know 100% what will happen. But they focus on the individual, not on averages. If you look at the average you act in one way; if you know that a particular individual is 90% likely to commit a crime, or to buy a product, you do what you would not do on the average of the whole population: you go to this person to offer your product, for instance. That is why predictive analytics has a performative effect, a performative consequence which is much stronger than what you would get from statistical trends, and that is what makes the preemptive and possibly dangerous consequences of this kind of attitude different: not that they claim to be 100% certain, but that they focus on someone specific.
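For readers outside the field, the practice the questioner describes looks roughly like the following generic sketch with invented data; it uses scikit-learn's standard train/test split rather than any particular production pipeline.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Invented data: 500 cases, 5 features, a noisy binary outcome.
X = rng.normal(size=(500, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 500) > 0).astype(int)

# Split off data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_train, y_train)

# Accuracy on held-out data estimates performance on future cases,
# assuming they come from a similar distribution.
print(f"train accuracy: {model.score(X_train, y_train):.2f}")
print(f"test accuracy:  {model.score(X_test, y_test):.2f}")

# And predictions are probabilistic, as the questioner says:
print(model.predict_proba(X_test[:1]))  # e.g. [[0.2, 0.8]]: an 80% claim, not a certainty
```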
We have about ten minutes left; I predict we are going to end at ten sharp. There was a gentleman before, in the fourth row to my left, and then it is your turn, gentleman in the second or third row.

This might be a fast question, so I may not use up all our time. You started with the story of Oedipus and the ancient Greek idea of the future: there was no change, the future was closed, it was the way it was. Then we have the modern idea: the future is open, anything can happen. I am American; I know I can be a millionaire tomorrow, maybe. And I wondered, given all your research, and since you mentioned many times the amazing results of machine learning, and maybe this is a slightly personal question: is the future for you radically open, is it radically closed, is it somewhere in the middle? Or, to put the question another way, as it has been implicit in the talk: are all these algorithms showing us that the future is in fact less open than we moderns thought?

Again, a very good question. I think the idea of the open future as 'anything can happen' is actually more a fantasy; it is not what the idea of the open future says. The future is open in the sense that you cannot predict it, because it depends on what we are doing today; the future will be the result of the present. So it is open, but not completely unstructured. It is not the case that anything whatsoever can happen: what will happen in the future depends on what we do today, with very, very strong constraints. Open does not mean without structure, rather the opposite: the future is open precisely because it is the result of our present action, and it is because it is structured that we cannot know how it will turn out; yet it will not be complete fantasy. Shackle has a nice theory opposing imagination and fantasy: imagination is what you can produce for the future starting from the present, the future as surprising but growing out of what happens today, which is not complete randomness.

The gentleman in the third row to my right, to your left.

Yes, thank you for this lecture, which has a very high philosophical impact. I am Alexander Spies from the Pirate Party, and I have a very specific question: we have two politicians in Berlin, Mr. Heilmann and Mr. Buschkowsky, who plan complete surveillance of Berlin with 20,000 cameras, with software which can predict, or report in time, any crime to the police. What would your advice be for these guys?

Be careful. Well, I would not say these things are completely useless, actually, but, as I said in the talk, they come with a lot of promises, and they often fulfil the promises, but thereby they do not necessarily solve the problems; they produce other problems. I do not know the political situation, and political decisions are connected with so many other factors that this is not the only point. But whatever you decide to do: be careful.

There is another question or comment right there.

We are talking about the digital society, and I would be interested in the more human aspect of the topic: how does the whole algorithmic landscape affect us as human beings? Does it change the way we think? How is it going to develop, how are we going to develop as a society in our interaction with the machine, and what does it really do on a deep psychological level? Thank you.

Well, we are closing with a huge question, of course, but let me just say something about what worries me and what does not. You are asking whether algorithms, as a new medium, will affect our way of being human and of relating to one another. Of course they will, but that by itself does not mean we are entering a new world, because we have always been affected by the media of our society. The way we think is heavily dependent not only on language, which is social, but also on the possibilities opened by writing, which was a huge revolution, and by the printing press and the mass media. How we think, who we are, how we reflect on ourselves and articulate our thoughts has always depended on media. From this point of view, which is the basic, too-easy answer, of course algorithms will change that too. But there is a more focused question.
That is the old discussion about the Turing test and the fear of the singularity: the idea, as Katherine Hayles puts it, that we might be moving to a post-human society, because dealing with and producing information, which was the privilege of human beings, is now something machines can do. That is more to the point: it could change our way of being human in a deeper sense than earlier media did, and the debate is open; many people in social theory talk about hybrids and so on. My own inclination is this: I am not worried about the singularity at all, and I am not interested in the Turing test at all, because machines pass the Turing test every day. People very often communicate with bots without noticing it: when we book a plane, in many other cases, when we play video games, many of our partners are actually bots, and we do not even notice. So that is already going on, and it is not something that really scares me. As for the role of human beings, I have the impression, and the discussion today has I think shown it, that this role remains, in a different way, more and more fundamental. I have never had the impression that we are getting hybrids mixing humans with algorithms: algorithms work so effectively precisely because they are different from us. The intelligence comes from human beings, and algorithms are apparently intelligent not because they are more similar to us but because they are more different from us. Which means that I personally am not particularly worried about human beings losing their identity, but of course our relations are becoming different.

I have two very small closing questions, the first general, the second personal. When I said we were going to close this evening at ten o'clock, I knew it was not going to be true: I was taking into account my own sloppiness. I am Swiss, so three minutes late is a terrible problem for me, but I have learned to live with it; I have learned to take my own biological algorithm into account, so to speak, and it actually made the right prediction while lying about it in public. Isn't that something algorithms are not very good at: being imprecise, being sloppy, making mistakes?

Sure, but that is an old problem in a sense, from before our new web-related world. A big problem of programming, and I am not an expert, but people tell me, was always how to produce random generators: machines are built not to be random but to be reliable. We do not expect a machine to surprise us; when a machine surprises us, we say it is broken, not that it is creative. According to our traditional idea of machines, a machine that starts to behave intelligently is broken: machines should be completely reliable, not surprising. If my car does not start, it is not creative, it is broken. But now machines are doing something completely different: we are trying to teach machines how to be surprising.

I was not trying to sell my little delay here as a creative act, so to speak; I was just saying that I had taken it into account in my own prediction. Now to the personal question. We had a scientist of communication theory here in January, Christoph Neuberger; we talked a lot about social media too, and then I asked him: you do not seem to be very active on Facebook, neither on your professional profile nor on your personal one. We had an interesting closing discussion from there.
Since you wrote about fashion, and we have mentioned it a couple of times already, I would like to ask: do you do all your shopping online or in the streets? Do you obfuscate your traces when you shop, or don't you? How do you handle that?

That is the personal level, but actually, about fashion: it is true, I have worked on fashion and I am very fascinated by the topic, but, to the disappointment of many people, I worked on fashion in the 17th century. So I do not know what is going on now in Italy with Armani and Gucci and Prada; I have only a vague idea of fashion in the current sense, though quite a clear one of fashion in the older meaning. But that was not your question. The reality, and this is a point worth thinking about, something I am almost ashamed of, is that I am not on Facebook. It is generational, and we can talk about why. There is a general rule that sociologists tend to repeat, and that I find fascinating: sociology has the big advantage, and the big liability, that it does research about everything, because everything is society, and people tend to choose the topics where they are particularly weak. So, speaking for myself: I am at the computer every day, at every time of day, but I am not on Facebook, I am not very active in social media, I am not particularly interested in today's fashion, and if I buy something, I do it in shops.

So you do not base your shopping decisions on collaborative filtering, so to speak.

My fashion decisions are not that big of a decision, and no, they are not based on it.

I am not quite sure I believe you a hundred percent there, but it is definitely an interesting and personal closing note for this evening. I had the honour of presenting here along with our two hosts, the Bundeszentrale für politische Bildung and the Humboldt Institute for Internet and Society, and I would like to thank you very much for your attendance. There is a little buffet now and some drinks. And thank you very much, Elena, for coming all the way up from Bologna. Thank you.

Thank you, and thanks back to everyone.