I've been invited to present to you a little bit about digital humanism, and I've entitled this talk The Challenge of Being Humanely Digital. And since we're talking about humans, I thought I'd put in a few images of naked human people, and you'll see why. Let me just explain that this talk is, yes, mostly going to be philosophical, probably, but as I will explain, you might as well call it political, and yet it actually has an engineering objective. While that may all seem a bit confusing, I hope it will become clear very soon. And I'd like to start with a picture that I always show in my presentations, a picture taken by a friend of mine in Melbourne, actually, of a beautiful sunset that is watched by a group of people. Well, actually, it's not so much watched as recorded. Very few people in this image actually watch the sunset. What they do, mostly, is watch their mobile phones and take pictures of that sunset or of each other. And this, for me, is a symbol of how we live today. We live digitally mediated. We live in a world that philosophers actually call the post-digital world, not because it's no longer digital, but because it is so digital that it no longer makes sense to separate the digital world from the non-digital world. We always already live digitally, with very, very few exceptions. And as you know, this is a world that is networked, that is fast, that tends to be anonymous, that is ubiquitous, and where systems have started to act autonomously, or at least so we are told. But of course, this way of living is one that has not really been planned so much. I'd rather suggest that this way of living has emerged. And certainly we have not reached its final stage. It is moving on, and people like yourselves in this workshop keep on designing it. And I've chosen an interface design picture that I hope most of you will know. 
It's said to be one of the most influential films to have shaped science and technology, Minority Report, in which interfaces especially played a big role. But really, the situation we have today is one in which digitization, or digitalization, has co-evolved with automation over decades. And now we are in a situation where we have all these systems which don't follow a grand design. They do not follow a big vision anymore. Rather, this is something that is evolving, very much in an evolutionary sense. And as you know, we are building these systems on top of each other. Very rarely do we, as designers, design complete systems from scratch. We use whatever is available, and we do this very much in a local-search kind of way. We design systems to make them better, to optimize locally, rather than having a grand vision driving our design of these systems so as to realize one big future. Has this ever been different? Well, probably not, but at least a few decades ago, when you think back, there were discussions very much about free information being one of the drivers of a free society, realized by the then-new World Wide Web, where citizen participation would be at the center and there would be knowledge for everybody through digital access to academic knowledge, but really to all sorts of knowledge. And today this sounds, well, not like a joke, but certainly like a vision we would say is no longer true. There was a big issue of Wired, the magazine, which argued that this vision may actually even have been partially one of the problems, but it's certainly no longer the case. It's no longer that we all believe this availability of knowledge for everybody is something that the web realizes. We no longer truly believe that the web supports democratic processes in the way that was once envisioned. Somehow, something along the way has gone wrong. 
And all of this, let's say, is a situation that is not at a standstill. We haven't reached a standstill here; it's still moving. And many people are concerned about the direction in which digital technologies are moving as a whole. Some of the reasons are, of course, that there is a feeling of disempowerment, that humans are becoming more and more the subject of algorithmic decision-making, to the extent that we now find this expression even in our legal systems; policymakers have decided there should be rules about it. Many people are concerned about the loss of objectivity in societal discourse, that there is too much fake news and people believe it, that there are a few agencies or large-scale platforms driving online communication that have become the powerhouses of online discourse. Other people are concerned that, especially in Europe, we are losing sovereignty over what's going on in the digital world. We are no longer able to control that communication, or to build systems that are truly reliable, where we actually know what's going on. And as you know, the current situation follows a big debate about, for example, the degree to which Chinese technology should have a role in our digital world, in our European digital world, I should say. There is fear of human behavior being controlled through digital systems, from surveillance technologies to the deactivation of online systems to controlling the movement of people through their mobile phones. There is a fear that the digital takes over control. There is, without any doubt, a very strong position of online platforms, which have become economically so powerful that nowadays they have the research budgets of not just small but actually medium-sized states. And all of this, many people argue, contributes to a social disconnect, where people prefer to talk to their mobile phones rather than to real people. 
So these are just some of the concerns, and I'm sure you are aware of many of them, maybe not all of them. But this, let's say, is at the center of the concern that we have addressed in digital humanism. Let me give another picture here that is, of course, a simplification, but it tries to put the phenomena I've just mentioned into a logical chain. So we live in a world where a lot of the digital hardware we are using is becoming cheaper and cheaper. That's perhaps not true for your current latest smartphone, but certainly the number of sensors available in the world, the number of objects and systems connected to the internet, is growing at massive speed. There is hardly any machine produced nowadays that doesn't have an interface. And these low-cost platforms are, of course, feeding into digital service platforms that can control these machines, very often with online interactions, and where they harvest data, in many cases data about people. And they do this in order to predict people's behavior for digital marketing, which facilitates a certain degree of digital control and strengthens digital power. So this is just a different way of looking at some of the phenomena I previously mentioned: a dynamic chain of digital processes that creates these phenomena of surveillance, prediction, and also control. Now, digital humanism is born from the abyss between, on the one side, the promises of digital technologies and, on the other side, an infocratic dystopia, if you want. The infocratic dystopia, obviously, is about surveillance; it's about foreign powers, powers that you don't want to have. Whereas the promises of digital technologies are, of course, access to information for everybody, a leveling of opportunities for every citizen, everybody having a say in the world and in society. 
And somewhere between these two visions, digital humanism is born. It says, well, look, we have had so many positive effects of digital technology, but we are also seeing so many issues, and we need to address them. And one way of addressing them is by asking ourselves, really, what do we want? What do we want from the future? And by putting the human in the center. So while acknowledging that digitization has led to improvements in efficiency and in convenience, also saving resources, from energy to all other sorts of resources, that it can accelerate revolutions, uncover conspiracies, liberate people, there are these terrifying developments which we first of all need to acknowledge. So there are people who argue, and I think digital humanists would be some of them, that this technology can potentially endanger our European fundamental values of basic and human rights and democracy. It can strengthen undesirable powers of control, from control over discourse to platform powers beyond public control. It can lead to increasing alienation of people, for example, from labor. It introduces a new brittleness and unsustainability of systems and services, which, if just some very little thing goes wrong, are no longer available. And it may also actually increase the challenges for Europe in a now globally connected economic and geopolitical world. That may also be true of other regions of the world; I just choose to take a European perspective here. So in response to that, if I had to characterize digital humanism very briefly, I would argue, well, digital humanism is a positive, constructive initiative that puts humans and society at its center. It argues that digital technology must be designed to empower people and advance our democratic societies rather than limit them. 
It emphasizes the need for technologies, while embracing innovation and progress, to also further our already achieved social accomplishments, such as human rights, such as democracy, but also other social rights that we are particularly proud of in many European countries. And thirdly, and perhaps a bit surprisingly if I'm talking to a group of engineers, that technology is not destiny. Yet this is precisely the feeling that many people have; even academics debate this rather extensively. We are not helpless: that is one of the messages of digital humanism. We can empower people and society; we can take a stance that also defines limits of technology. But in particular, the message is simply that the technology we have today is really just at the beginning of its development. First of all, we already have much better technologies available, and secondly, we should aim for and develop even better ones in the future. So digital humanism is rooted in the European values of the Enlightenment, in the understanding that some of the problems we are addressing are maybe caused by our own shortcomings, but really we shouldn't forget about human rights, about democracy, about inclusion, and we should use these technologies to secure and expand them in our digital life as well. Now, the topic clusters that digital humanism addresses really span the whole range of problems: from automation and work, to participation in society, to of course AI ethics, to the question of surveillance and privacy, to how to regulate the large platforms, to fake news, but also to freedom of speech, education, digital politics and geopolitics, and how to make our systems more resilient. And if I had to point out one weakness of digital humanism, then I would say this is it. 
It is the effort to address all of those questions. But on the other hand, as I will try to show, it's also very difficult to separate these issues from one another, because they share certain characteristics by which they are linked, and that is why it's probably necessary to discuss them jointly. So what are these European values that digital humanism is emphasizing? Obviously, it's the very European development of the Enlightenment and human rights, which have proven very successful globally, but which are probably also the most important recent achievements in Europe and are the basis for property, for justice, for health, and also for our technical achievements. And so we could argue, with some justification, that these should also form the basis for our future freedom, for our future democracy, and for expanding access to human rights for everybody. But they are of course under threat. Democracy has come under threat through the influencing of online discourse by a few selected organizations. There is a big debate about freedom of expression online, about non-discriminatory access to the technology itself, about surveillance, about data protection, and also about the sustainability of these systems. But these are the values that we are talking about and where we believe that digital humanism should be a guiding force. Now, how could you turn this into a guiding force? I think one way of approaching it would be to ask, in your system design, a set of questions. For example: are you using the best available technologies in your system for protecting people's privacy? Very often we have to explain to people that, compared to what is available technologically, what we are actually using is rather poor. And then, of course, there is room for further improving the technologies that protect people's privacy. Is this a system that really leaves human dignity unaffected? Or is it, by the way the system takes a stance, takes an image, takes a snapshot of a person, affecting dignity? 
Is it affecting human dignity in a negative sense? Is this IT lending a personal voice to the user? Does it increase knowledge and participation and inclusion in society? Is it a safe system? Is it robust? Is this AI working in partnership with humans, or is it trying to solve all the problems on its own? Is this system designed to strengthen our understanding of community, of a social contract? Is it furthering democratization? Is it increasing transparency in our political system? Did we involve varied stakeholders? And finally, is it resource-efficient? Is it a sustainable system? So these are questions that, from the point of view of digital humanism, need to be asked critically about systems. And the reason we think it's important to ask these questions is that there may be some features of digital technology that you could perhaps characterize as inherently bad. I wouldn't go so far, but at least there is a danger that some characteristics of digital technology are actually inherently disempowering. One, of course, is the tendency of the technology to afford surveillance. Not that I'm saying it is necessarily like that, but I think we all know what the options are, what the possibilities are, what the powers of those designing the systems are to roll out a surveillance that is basically historically unprecedented. There are other constraints. Every system design puts constraints on the user, and by the very fact that we are constraining the user, there is a limitation emerging from that which is potentially also disempowering, and I will give an example. There is this tendency towards exhibitionism online, as the philosopher Byung-Chul Han argues. There are network effects which strengthen monopolies in online platforms. And there is this rather new effect, which is the flip side of the sharing economy: a trend towards services, a trend towards the dematerialization of things. 
So all of this comes in a rather innocent way; these systems intrude into our way of living, into our lives, while looking rather innocent, and sometimes they even give the illusion of empowerment. And the question then is: what do we really want, and what do we mean when we talk about a high-level concept such as human rights? So let me try to give some examples, and because I know this group is interested in interfaces, I'll try to come up with some interface examples. I start with the consent illusion, which many people will know. It has become a widely accepted practice in system design to use dark patterns online, where, in my opinion, we are psychologically manipulating people into giving the answer that is desired from the perspective of the owner of the system, or of whoever paid for it, while giving an illusion of choice. Many people, of course, first of all find this rather intrusive and unnecessary, but secondly, they just click yes without ever thinking about what it means. People have actually tried this out: when you agree to some set of online rules, you agree to granting a non-transferrable option to let your kid be named by that company; or if you download an e-book from a website, you agree not to eat pizza for the next 12 months; or to sweep the floor if you use the Wi-Fi, and so on. So what I'm saying here is that not only do the rules follow dark patterns, but what it means to agree has become such a weird concept, because no one can possibly read all these terms and conditions anymore, that we have to ask ourselves: what do we actually want from this world? Do we really want to insist on a world where we have chains of rules and agreements built on top of each other that make no practical sense? I think the answer is clearly no. I think we have to come up with better ways of doing this. 
We have to improve our ways of interacting with systems. A second example would be the illusion of choice where there are significant constraints, and again I'm using a simple interface example here. Some simple design decisions have a durable impact on the rights of people, on what they can do and what you can do with that system as a next step. The interface is more than a face: it's a point of empowerment and disempowerment. The simplest example is, of course, ticking a box which reduces the world into a category of two classes, when the world may really be much more difficult and much more complicated. But then again, even if we offer, as Facebook does today, a plethora of options for your gender, we have to ask: who is really empowered here? Are we empowering the user by giving them multiple gender options? Or are we empowering Facebook by even better facilitating personalization and targeted advertising? So these are some of the questions that we talk about. Taking a more principled view of the interface, the question arises: with whom are we really interfacing in computing? Much of computer technology focuses on the individual, based on the highly individualized nature of this interaction. Actually, that's true for a lot of technology, but it's especially true of computer technology. There is me, the human, and there's you, the machine; me, the customer, and you, the service. There is, as again Byung-Chul Han would say, no other, no counterpart. "Kein Gegenüber" is the phrase he uses in German: there is no Gegen, no counter that opposes me. Actually, the idea of the system is very much that everything is fluid and seamless, and we don't have to deal with the intricacies and the complexities of other people, which we do have to deal with when we are not talking to them through a computer. But as a side effect of this, the community is abstracted away at the very interface. There is no community. 
There is always just the system, in a very basic and very fundamental way. And so the question emerges: how do we get the community aspects back in? How do we get anything social back into computing? And this goes on, of course, to personalization, for example. Personalization has occasionally become very creepy, based on harvested data. We get all sorts of recommendations, all sorts of individualizations, that seem to rest on a model of ourselves, a model which we sometimes agree with and sometimes do not. But really the question is: who is building those models? Who is optimizing them, and with respect to what? Shouldn't a personal model be something personal? Shouldn't it be my model of myself? Should it not be my very right to know that model and to decide when to use it and for which purposes? Do we not have to think about how we can build such models and make them much more personal than they are today? And I think in digital humanism we would say yes, we need this, but this is research. And maybe a final example here is, of course, online discourse. You know that the power to lead the discussion is not just the power to select; it's also the power to show certain comments to certain people, and it's also the power to delete. Some of you will know, and I know from your program that some of you are actually working in this business, that this question of which online comments to delete, what to detect, how to assess what's going on online, is very tricky. It is a very hard problem; it's only superficially simple. We can maybe do it very well in one language, but we cannot do it in, for example, Rohingya. And this may lead to killings, as it did in the Rohingya scandal, when Facebook was not able to delete certain comments in Rohingya that would have been deleted in English. But then the question arises, from a digital humanism perspective: is that really what we want? 
Do we want to treat people differently just because our systems work better in one language than in another? And a much deeper question arises: what do we even mean? Many people nowadays talk about harmful content. One thing would be illegal content, where we have clear legal limits in most countries, but what is harmful content? Harmful seems to be a term that, well, is up for discussion. And it seems to suggest that there is something like a good online discourse: perhaps we should all speak factually and be friendly with each other and only use arguments and be polite. But even as I'm telling you this, it sounds absurd. Our language and how we interact with one another sometimes needs to be rather offensive, maybe. It needs to be strong. It needs to be new. It needs to be upsetting, as the debates of the 60s and 70s, even in academia, were. So here I'm using an example of Vienna Actionism, where the man you see here was then imprisoned. Because he, well, you see what he did. But looking back at it, it was an important political movement. It was important to do that. So what do we want? What kind of online life is the good life? And I could go on with examples; really, we have a lot of questions that we sometimes tend to answer rather technologically. So here is maybe a final one. Most of you will be aware of the challenge for AI systems of avoiding bias when we use machine learning, and there is much talk about the need for such systems to be fair. But what do we mean by fair? What kind of fairness are we talking about? Do we want to maximize total accuracy in an assessment? Do we want equal opportunities to be represented? Do we want predictive equality, or equalized odds, or counterfactual fairness? All of these have been debated, not just in computer science; they have been proposed in philosophy as well, and all of them have different mathematical answers about what fairness is, what a fair data set is, what an unbiased data set might be. 
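To make concrete why these fairness definitions force a real choice, here is a minimal sketch with made-up toy data (the groups, labels, and predictions are entirely hypothetical, not from any real system). It shows one classifier output that looks fair under demographic parity and predictive parity but unfair under equal opportunity, so picking a definition is itself a value judgment:

```python
def rate(pairs, cond, event):
    """Fraction of items satisfying `event` among those satisfying `cond`."""
    selected = [p for p in pairs if cond(p)]
    return sum(1 for p in selected if event(p)) / len(selected)

# Hypothetical (group, true_label, predicted_label) for ten fictional people
data = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 1), ("A", 0, 0), ("A", 1, 0),
    ("B", 1, 1), ("B", 0, 0), ("B", 0, 0), ("B", 0, 1), ("B", 1, 1),
]

for g in ("A", "B"):
    in_g = lambda p, g=g: p[0] == g
    # Demographic parity compares P(pred=1 | group)
    dp = rate(data, in_g, lambda p: p[2] == 1)
    # Equal opportunity compares P(pred=1 | group, label=1), the true-positive rate
    tpr = rate(data, lambda p: in_g(p) and p[1] == 1, lambda p: p[2] == 1)
    # Predictive parity compares P(label=1 | group, pred=1), the precision
    ppv = rate(data, lambda p: in_g(p) and p[2] == 1, lambda p: p[1] == 1)
    print(g, "selection rate:", dp, "TPR:", round(tpr, 2), "precision:", round(ppv, 2))
```

On this toy data both groups have the same selection rate (0.6) and the same precision, yet group A's true-positive rate is lower than group B's: "fair" by two definitions, "unfair" by a third, on the very same predictions.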
And these discussions are not just engineering discussions, and neither are they just philosophical discussions. In the end, they are political debates about how we want to live, about how we want these systems to be. But it's a discussion that is unavoidable. We need this discussion, and that's one of the arguments of digital humanism. And I could mention here one big challenge that I see coming ahead of us, where we have only barely scratched the surface: the shift from things to software. It's the shift, which most of you will be aware of, from buying records to downloading music, which is nice because we don't need resources for doing that. But on the other hand, are we taking away people's rights over what they can do with it? Can you still exhibit your records in your living room? Maybe not. Can you still collect them and give your collection to someone else? Well, it's very difficult. Can you destroy it? No. And this is because, over thousands of years, we have developed certain things that you can do with things, but they are not represented in what you can do with services and software. Again, a big debate that we need to have, and digital humanism is the proposal to have this debate and the invitation to talk about it. So it's not just about AI, it's not just about decision-making, it's not just about surveillance and privacy, because these things are linked with each other. It is digitization on the hardware side that is driving the surveillance side, which is again a basic root of the power of the big platforms, and so on and so forth. So we need to debate these things together; we cannot easily isolate them. I skipped the geopolitics; this is just an image of Jacinda Ardern from New Zealand and Emmanuel Macron deciding to work together in order to be even borderline able to regulate what's going on in online discourse. 
So all of this led a group of academics in Vienna, a couple of years ago, to declare the Vienna Manifesto on Digital Humanism. I'm not going to read it to you. It's a set of principles that suggests we need a new way of dealing with these questions, one that goes far beyond just technology or just philosophy or just politics. It really brings all of this together, which includes research challenges, and that is why it's a positive view of digital technologies. It's not one that says digital technologies are inherently bad and we cannot solve these problems. No. We may occasionally decide not to use a certain type of technology, as some countries did with nuclear energy, for example, and there may be such cases in digital technology as well. But by and large, we feel we can do much better than we are doing today. This, however, requires the interaction of computer science, of law, of philosophy, of social science, of economics, and then of course the training and educating of people, and maybe also artists. We need to bring these people together to first of all understand what we really want, and then to realize it, including by the use of digital systems. So I think we have in Europe some initiatives that already exist. We have huge interest from computer science but also from other groups, and maybe that's one important characteristic of digital humanism: it comes from within computer science. It's very much driven by informatics people. It's not driven by science and technology studies; it's not driven by philosophy of technology. It is computer scientists who brought this up, and this is why we feel there is a real wind of change. Technologies can be part of the answer, but they may not be everything. What we really need is a better understanding of what we actually want. What does it mean to live the good life digitally? That is the underlying question, and that's what we are trying to address. 
If you want to learn more about it, there's a book we've recently edited, an edited collection called Perspectives on Digital Humanism. It's completely available online; you don't need to buy it, but if you do, it makes us all very happy. Thank you, and I'm ready for some questions.