We do have time for some questions. Let me just check the chat real quick. Hi Natasha, I see your question: "I wanted to ask about the expectations of internet users, the public visibility of online research projects, and the disconnect between users' expectations and the reality of online data collection and use. Does the project team see it as part of their ethical responsibility to try to educate internet users about what is happening to their data? And how might this be achieved, when consent is not being sought, without alienating users, as Nick just said?" So Jan, do you want to start with that, and then we'll move across?

Yes, I'm thinking about this question. I was a little bit distracted with my video again, so I hope that you can hear me. I was thinking, of course, about internet users' expectations, but also about their expectations from a slightly different point of view, because for a while I was very much focused on the ethical aspects of using electronic health records. There is already a lot of research, especially focus groups with patients who are asked about their attitudes toward the use of their electronic health records in research. Usually, at the beginning of the conversation, people do not feel very comfortable with research use of their health records. But when they are informed about the possible health benefits and about all the privacy protections, they become more and more willing to share their electronic health data, especially if they have a certain condition. So I was always very curious: if we ask certain questions and use certain instruments to measure users' attitudes, sometimes we will not really help users to evaluate their answer. Of course, I take this into consideration, and it is a real concern that people are reluctant to share their data and treat research with huge suspicion.
And what we want to do, of course, is to be as transparent as possible. This kind of seminar is part of our effort to make our research project visible, and we will, of course, keep informing and explaining. I also see there is a part of the question about consent. Let's say it is impossible, because we would have to contact hundreds of thousands of users, and that would be impossible to physically process; we don't have the time and resources to provide this information. And when people are asked to participate in this kind of research, they may even consider the request itself a form of intrusion, and they have no obligation to respond to our invitation or even to read the informed consent. So from an organizational point of view, it is impossible to conduct research with machine learning while asking for informed consent. Mikhail, maybe you can add something about this?

Not really; I think this basically covers it. First of all, I don't think we are touching upon such a private and directly vulnerable feature of people. Of course, their vulnerability to fake news is a vulnerability, but it is not a direct vulnerability that can be easily exploited by an adversary. Besides, we have already had discussions about how to present the results of the project. For instance, Yana and I agreed that publishing all the models we will be developing is probably not a good idea, and that maybe we will publish only surrogate models, models of models, instead of the detailed original models that would allow you to score a given individual against the vulnerability. So yes, this is an ongoing and, for me especially, fascinating discussion, because this is the first time in my life that I'm having these discussions.
It's not a conversation that, until recently at least, was happening much in the technical and ICT communities. Now it is, so we're catching up with the rest of the civilized world regarding ethics. Hopefully we'll catch up with you guys.

Yeah, just to respond to Natasha's question too: in the US regulatory framework, we actually have a word we use for this, that getting consent from that many individuals would be "impracticable", and so it sets the research outside the parameters of a typical informed consent process.

Okay, there's another question; I can read it: "During the pandemic, many researchers who just entered this area of research recently performed ad hoc analyses of Twitter without any ethical consideration. Could you say something about this phenomenon? To prepare a study protocol you need a lot of time, so especially at the beginning of the pandemic, some investigations could have violated some standards." Does anybody have any specific examples of that happening?

I don't know any specific example that I can discuss. But usually, when we talk about normal biomedical research during these kinds of emergency situations, the review of the protocol is performed in an expedited way: it is shortened, and it is usually not a full review. Sometimes these kinds of protocols are even reviewed in advance, so they just wait for the pandemic to be launched. And I have to admit, and this is also why we are organizing this seminar, that online research on Twitter is also very new for me, so I am also learning what the specific ethical standards for Twitter research are. And, as was mentioned before, from the regulatory point of view, researchers do not violate any specific regulations.
And even from the perspective of bio-researchers, all these restrictions that could be imposed by additional reviews or consent forms could seem a little excessive, because right now there is a discussion within the bioethical community about how to streamline ethical reviews, how to loosen ethical restrictions, and how to allow researchers to self-regulate. As Elizabeth mentioned, the Common Rule was quite recently updated and revised, and the expectation among biomedical researchers was that this self-regulating aspect would be taken into consideration to an even greater extent than it was. Because right now in bioethics and biomedical research ethics we say, and this is a phrase taken directly from an article by Tom Beauchamp, that over-protection and over-regulation, especially in emergency pandemic studies, actually lead to under-protection. But I'm not sure we can apply exactly the same logic to internet research. I think internet researchers are generally in a very nice position right now, because I don't think there is any regulatory pressure to tighten the rules; instead, they have an opportunity to self-regulate and to set these ethical standards themselves, also in order to build trust with participants, with users, with those who produce the data. I don't know what your opinion is about that, and now I'm asking Nina, Nikolas, and Elizabeth.

Well, I was going to try to tie this question to the previous question a little bit, in terms of thinking about ways we can try to enhance public knowledge as a way to mitigate some of these issues. One way is certainly by thinking about, if not getting informed consent up front, then at least informing users after the fact.
So doing things like sharing research outputs with our participants is a really good way of letting them know that this is happening, letting them see the outputs of the work, and trying to build trust with that participant community.

In terms of situations where there's an event and you have to respond to it quickly, and it's really important to start the data gathering before you've necessarily done the full compliance side of the ethics process: if there's a real exigency to the data collection, with an immediate impact and severity to it, I think it's really important to start the process and be in coordination with the IRB, or the ethics review board, or whatever it is, as soon as humanly possible, and let them know. There are certainly ways, and IRBs have encountered this before, where you start the data collection (public data, I should say) and contact them in parallel to get the process rolling. I think it's really important to make sure that if there is an exigency to the data collection, it happens in tandem with the compliance side of things.

And I want to offer a different take on the question and this conversation: what better time is there than right now to be doing research? Think about the discourse right now: every day we're hearing about clinical trials, phase one trials, phase two, when we're talking about vaccines and their development. It seems to me that this year has almost been a crash course in public health education for the whole world. It shows some of the great things about our public health systems, and of course it has shown where things are really, really terrible for many communities and individuals, where the public health systems have truly failed.
So I think as researchers it's almost like this is our heyday, right? We have an opportunity to be talking about our ethics and our research. Again, I can't think of a better time, in the context of internet research or outside of it. Research has become very common; we're all talking about it right now. So perhaps it's the perfect time in the perfect storm, and we should take the opportunity, right?

We have about 10 minutes or so left, and I had one question that I wanted to hear from all three of you on. It got me thinking, Nick, when you showed the data about people's comfort levels when their tweets were analyzed by a computer system rather than by a human: they were more comfortable with that, right? But then I go back to Mikolaj and what you showed us, where the machine learning was oftentimes so wrong. So there are those two pieces, but then I also want to tie it back to this: if we as researchers have some kind of ethical responsibility to intervene, perhaps in the case of depression, or mental illness, or any vulnerability, how do we tie all three pieces together? I don't have the answer, and as soon as I saw that data, it was like, uh-oh, where do we go with this?

Yeah. Well, I think that's a good grant proposal right there. I don't have an answer to the question either. I think it's important to understand why people feel more comfortable having a machine analyze their content rather than a human, and part of it is fear of human judgment. One of the things we did in our survey was to leave open the opportunity for people to include additional comments about the questions, and one of the things we got was very active mistrust of researchers: the belief that researchers are politically biased, or that researchers are only out there for their own gain.
So there is this idea, which certainly seems to be tacit in the data, that despite people with an STS background knowing that algorithms can be biased, there is a broader public belief that these systems are neutral. And that's something that we're going to have to think through and deal with.

Yeah. And it's funny, again, to go to Mika: when I hear you say human judgment, I think of the implicit biases that are then embedded in our systems and in our tools.

Yeah. I think there are two opposing forces at play here. The one that Nick mentioned, the fear of human judgment. And the other one, which is basically assigning agency to machines where there is no agency at all. For instance, the belief that if you give Messenger access to your microphone, Messenger will eavesdrop on your conversation, so that if on the phone you mention a certain brand, say, "I'm thinking about buying this and that," you're more likely than not to see an advert for that particular brand in your Facebook feed two days later. People will think: someone is listening to my conversations; they heard the brand. Of course, nobody listens to that, right? Even the Secret Service, which eavesdrops on every single conversation in the world, just has certain words or combinations of words that they listen for. But people somehow mix those two modalities. There is the idea of a machine that is impartial, that does not judge: something goes in, there is some rumbling, and a number comes out. That is the public idea of a machine analyzing the data, versus the researcher looking at the data and somehow judging me for my character, for what I've said, for what I'm searching for. "When I die, please delete my browsing history": that's the most important thing you should write down in your will, right? So nobody sees that.
So yeah, people are just very, very confused about what is being done to their data, who is doing it, what incentives are at play, where the money is, and what their input is. I would rather approach it from the economic point of view, trying to educate people about the economic game behind all of this, because it is not nefarious: there are huge companies that just want to sell you more of their crap. That's what it is. People should understand that, and should understand that if they're not paying with money, they're paying with their clicks, views, eyeballs, and minutes of attention, which is the greatest price to pay, given the finite amount of time we have here on earth. So I would rather go into the economics and try to educate people on that level, rather than trying to open up the research and say, "please understand, this is madness, and this is how we do it, and that's why we do it." I don't know if it's a better way, but I have a hunch that it would be more effective.

Yes, I was also thinking about the reasons why people are so suspicious of researchers, and I agree that they don't want to be judged. But when we talk about these obligations of researchers to act, we somehow come back to this double role of, for instance, a researcher who is also a physician, and in the context of internet research that does not happen. A researcher is just a researcher, and the motive for research is quite mysterious to people. Why do social researchers do research? Why do computer researchers do research?
We know why doctors perform research, and we want them to conduct as many research trials and observations as possible, because we place hope in medicine, and that is also why people may want to contribute to biomedical research. But this whole sphere of computer research and social research may seem very suspicious, and it somehow gets conflated with the commercial aspect that Miko described: that the whole internet industry, let's say, wants to play on our desires and to squeeze as much money as possible from our pockets. So on the one hand, I agree with Miko that education should also cover the business side. But I still think it matters that researchers share their intentions, and I very much like this idea of documenting ethical deliberation and the ethical process, because I believe that even when we make mistakes, if we make them in good faith, this may not excuse us totally, but at least it shows that we tried our best to understand the problem, and that we had good intentions. This is probably not a very utilitarian approach, but I think it is still important, because the procedure we follow in order to come to a certain conclusion is itself important from the ethical point of view.

You were muted. You would think after all these months, right? We're just about at time. All of our emails are available on the project webpage. If any of the questions didn't get answered, or if you have further follow-up, feel free to reach out to any of us, and I'll turn it back to you, Jan.

Thank you. Now you're muted. Excuse me. The seminar has come to its end, so I would like to thank, first of all, our participants. Thank you for all your questions and comments and for being with us.
Then, of course, I would like to thank our great guests, Elizabeth and Nicolas, and of course Miko. I also want to thank Agnieszka Lempart, our administrative manager. She did a fantastic job organizing and advertising this event, and without her, this event simply wouldn't have happened. So thank you all. Thank you and goodbye.