Hello and good afternoon, everybody. My name is Jan Gerlach. I am a lead public policy manager at the Wikimedia Foundation, the nonprofit that hosts Wikipedia, and we are an affiliated participant in the Network of Centers. The Network of Centers has built a tradition of meeting at the annual Internet Governance Forum. Unfortunately, this year we cannot all be in one place, so we are meeting our friends and allies remotely. Today I'm delighted to be joined by Wulf Loh from the International Center for Ethics in the Sciences and Humanities (IZEW) at the University of Tübingen. Wulf will talk to me a little bit about what he does at his center, where they are based, what upcoming projects they have, and how other centers in the network can participate. Hello Wulf, thank you so much for joining me today.

Hi Jan, I'm really happy to meet you. Thanks a lot for having me, and thanks that we can now be part of the network. That's very recent, as of last week, I think.

Congratulations.

Yeah. My name is Wulf Loh. I'm currently a postdoc at IZEW, the International Center for Ethics in the Sciences and Humanities. I myself mostly do research in ethics regarding AI, media, social media, democracy, and the public sphere, but also social robotics, these kinds of things. So philosophy of technology and ethics of technology in the broad sense. That sounds like a lot, but maybe we can come back to how there are many connections between these things, not least privacy, which is one of my main focuses. I'm trained in philosophy, and I've been working at the center for a little more than two years now. The center is an interdisciplinary research center at the University of Tübingen in Germany, a very nice, small student town a little south of Stuttgart, for people with some idea of the geography.
We are generally interdisciplinary and have existed for about 25 years now. We started off in the 90s with a lot of biomedical ethics, genetic engineering, these kinds of things, but have evolved ever since. We combine researchers from philosophy and the social sciences, especially political science, as well as environmental studies, but also biology and theology. So we're a diverse bunch in this respect. The IZEW itself has about 60 staff members right now, and we run about 30 third-party-funded projects at the moment. We are mainly third-party funded: a lot of grants from the German Federal Ministry of Education and Research, but also other ministries and EU Horizon 2020, the whole thing. The working group that I am part of deals mainly with media ethics and information technology. There we combine media ethics and media philosophy with philosophy of technology and technology ethics, and also a lot of STS, science and technology studies, because we think that ethical evaluation of technology, or applied ethics, always needs this empirical side. And we do a lot of technology development evaluation, where we are, so to speak, the ethics partners in technology development projects. That's the broad scope.

Very interesting. Can you tell me a little bit about how old your group is? You mentioned that it sits in a larger center that looks at the ethical questions of technology in a broader sense, including bioethics and medical ethics, I think. How old is your group?

Well, that's also an evolving thing, but we started in the early 2000s, I would say 2004 or 2005, which was way before my time. In the beginning we did a lot of security ethics, and we still do, especially with digitalization and digital technology. We did a project on body scanners, and a project on smart surveillance and security.
We're still doing these kinds of things. We work a lot with German first responders, the German Red Cross, but also the police, and right now we're doing projects on fake news and related topics. That was maybe the beginning, but then, especially with the AI hype, one could probably say, and the hype in robotics, especially social robotics, we naturally turned to those topics as well. To give you an idea of the kinds of things we're currently doing in this realm of social media, democracy, and the public sphere: we are, for example, part of a project called WeNet. That's an EU Horizon 2020 project whose idea is to develop an online platform that aims at diversity in social media interaction. That means that, unlike on Facebook or Twitter or WhatsApp, you are not pushed into these filter bubbles but can break out of them. That's the main idea behind this project.

We also do a lot on privacy. Since 2014 we have what is called Forum Privatheit, a forum on privacy, where we have taken many different angles on privacy in the digital age. For the next iteration of this project we are focusing on privacy across the course of life, from young age to old age, with a particular focus on privacy and vulnerable groups. That is something where we would also be very interested in collaborating with other centers in the network, because we think there are a lot of issues here. Our focus so far has been mostly on Germany, and some of the other partners involved do German legal privacy and data protection work, but we are very interested in broadening this and inviting people who would like to collaborate with us on these aspects.
Is there a pandemic angle to this project, for instance? When you talk about vulnerable groups nowadays, at least when I hear the term, its meaning has somehow shifted from how we used to talk about it. It's a term that appears much more in everyday life now, and often what we mean is groups that are vulnerable to COVID-19, right? So is there an angle there as well?

For the next iteration, not in particular; it's not focused on this. But of course it's something we think about a lot and have been involved with in some respects. For example, we started a blog right at the beginning of the COVID-19 crisis. Unfortunately it's partly in German, but some of the blog posts are in English. People from the center give their own take on the COVID-19 crisis and related aspects; I had two posts in it, for example, one on triage and racial discrimination. But for this privacy focus, it's not really a focus for us right now.

Something else I can talk about, because I have been involved in it: a project whose idea was to develop social companion robots for the elderly. I looked especially at the privacy aspects, and it's very interesting to see that a lot of robotics, and robot ethics, is still not about privacy, which always surprises me. There is some talk about this, but the main focus of people who do research on privacy, especially in the philosophical sense, is more on the internet or on smartphones, and people who do robot ethics are typically not interested in privacy.
And yet, if you have a robot in your house, it has six or seven sensors that are on 24/7. I wrote a couple of papers, which will hopefully come out soon, on how to design interaction with elderly people and how to give them even the possibility of something like informed consent. We're not even talking about people who have cognitive issues; we're talking about people who are not technology savvy, or who may have problems with hearing or speaking. Otherwise it's just like with Alexa: you turn it on, everything goes on, and from then on it talks home to whoever built the thing.

This sounds very intriguing to me. Why do you think there is this lack of attention to privacy issues that you identify? Is it because the dark side of robots, like them taking over, grabs more attention? Why is that?

I think that, especially in philosophy and the general ethics of technology, that has been a very intriguing subject or field of inquiry. It's similar with the whole talk about autonomous cars, and I have been guilty of writing papers on autonomous cars. But after a while, at least for us, the interesting part is to see how the technology plays out in practice, right now, while people are designing these things. There is a big literature on anthropomorphizing and emotionalizing social robots, from the last five to ten years, I would say, and I think that's still a big issue. The connection to privacy is almost always made, but nothing comes of it.
That's what I mean: people say, okay, behind this seemingly seamless and smooth interaction you can hide all these privacy issues and all this datafication that is going on, and in this sense you have different kinds of dark patterns than on the internet, but they work in the same or a very similar way. I think that's the point, but then most people don't ask: so how do you change that, how can you make this better? That's the nitty-gritty part, where you need the designers and the engineers, and in the end it's not that fancy, so maybe people are not interested. I'm not saying there isn't any talk of this; I'm just surprised at how little there still is. The smartphone is probably the device with the most sensors and data tracking, but look at some smart home applications or social robots: the typical robot has, as I said, six to seven sensors that, just in order for it to orient itself in 3D and not bump into things, basically have to be constantly recording.

As a result, I imagine somebody out there has a pretty precise map of your apartment, right?

Yeah, as I have a robot vacuum cleaner, probably. But there's a funny thing: when I bought it, not too long ago, it turned out to be a Chinese model, and it comes with a Chinese app. My Android phone tells me the app wants access to my camera and my contacts and all my storage and all these things. And I'm like: you're a vacuum cleaner, what do you need all this for? So I always denied it, and of course the app still works. But you know, you can always try, right?

It's just a very social vacuum cleaner.
Yeah, it'll talk to all my friends.

I think historically the Network of Centers has been pretty good at shining a light on issues that may have been underappreciated or that need more inquiry, putting them at center stage of research, and collaborating on those things. Do you think this is something where there is room for the Network of Centers to assist you, or for you to collaborate with others?

Definitely. That is something I would personally like to explore and see more of, especially with people from interdisciplinary research. We traditionally have a strong focus on ethics and the social sciences, and we do a lot on the social implications of technology, but the legal side we are basically missing. We do have legal experts at our center, but a lot of the centers in the network have a strong legal leaning, which would be a perfect match. There are maybe two things I could quickly mention, before the 20 minutes are over, where we have a strong interest in collaborating now and in the near future. One is that for a couple of years now we have moved more and more into AI ethics. We have a close connection and collaboration with the Tübingen excellence cluster on machine learning. And this past year, in March I think, we came out with a policy white paper, together with a group of researchers in Germany, called "From Principles to Practice", where we thought about how to implement all these AI ethics principles.
You may know the High-Level Expert Group and Floridi and these big principles of transparency and explainability; I think there are now over 200 papers or policy guidelines that mention one or another of these principles. Together with experts in regulation and standardization, as well as philosophers and computer scientists, we tried to figure out, on a middle level, how to implement those. We broke them down into criteria, observables, and indicators: what actually has to happen in order for a system to be transparent, and is there something I can measure this with? We realized that every operationalization comes with certain dangers, of course, because you're missing out on context. But I think it helps to get everybody on the same page about what we're actually talking about when we talk about transparency or non-discrimination, and also to give regulators, but also the public, something they can point to and say: you didn't do this, and this is something you should be doing. We know of two or three other projects in Germany that work in this direction, but we'd be very interested to exchange ideas and collaborate, because there are tons of open questions. Right now we're working with some German federal ministries and trying to continue this work, so this first policy paper is basically only a proof of concept. We'd be very happy to collaborate on these kinds of things.

Putting a pin in there for other centers. So that would be something.

Yes, that would be something. There's also another thing, I don't know if we still have time.

Of course, of course.
We very recently started reaching out to some people at the University of São Paulo. We had some projects in the past on digitalization in the Global South; for example, a project on the ethical implications of IT exports to sub-Saharan African countries. But from our perspective it is very hard to get funding for these kinds of things in Germany. By chance we made a connection with the University of São Paulo, and we're thinking about a project, still in very early stages, on the digitalization of education in the Global South. For example, one question could be how guidelines and regulation efforts affect the Global South beyond the EU, where those regulations basically originate, and how we can translate them. Is there a way to measure this? Does that even make sense, or is this just a new kind of colonization; do they need their own regulations? So think about how regulations in the EU or in the US would affect education, digitalization, and privacy rights in the Global South. We think education could be a good first area to look at. That's not a given, but we think it might be something interesting to explore. And especially since the network is so global, we'd be more than happy to see what people come up with and to collaborate, because this is something that's always in the back of our minds. Every once in a while there's a project, but we're actually very interested in building up this connection and making some collaborations work in this field.

It sounds like you've come to the right place.
The Network of Centers is definitely the right place to find collaborators from all around the world. It has grown a lot in recent years, especially in places outside the US and EU, so there are lots of opportunities there. Well, thank you so much for chatting with me today. It was a pleasure meeting you and hearing about your center, and I'm excited to see what comes from this as well.

Thank you so much, and thank you for taking the time. We're happy to be on board and to see where this takes us.

Welcome again.