We should be live now. Let me give it a moment to make sure that everybody has been pulled into the session. It looks like so; yes, I think we're good to go. Welcome back after a brief break. Happy to introduce our next speaker, a good friend of mine, Hugh Desmond, a fantastic scholar who's worked on all kinds of stuff. At the moment he's connected with the IHPST in Paris as well as the University of Antwerp, and today he'll be talking to us about academic status in a digital age: invisible barriers to open science. And just a brief pre-plug: if you're interested in open science questions, come back tomorrow as well and make sure to be here, because there's yet more open science discussion going on then. This is a topic I'm really excited to have discussed at the conference. So without further ado, Hugh, take it away.

All right. Hello everybody. I'm speaking into a Zoom screen, but let's just trust that everything is there and everybody's listening. I'm going to be talking about academic status today with regard to open science. Maybe first a remark, just so we're on the same page, about what open science is, because there are a lot of definitions and understandings out there. Some approaches define open science in terms of types of practices: open access, data sharing, sharing of datasets, protocol preregistration, and so on. A broader approach defines it in terms of values, with transparency obviously number one, but also equality and participation. So in that sense, you have open science in spirit and open science in practice. I'll be focusing predominantly on the value of transparency and offer some critical reflections about it.
One question that strikes me when observing the open science literature and debate is that it's not really clear what the downsides to open science are. One parallel, an inspiration here, is the rise and fall of social media in the public domain. Not even ten years ago, which is crazy when you think about it, social media was seen as unambiguously positive for democracy. Think of the Arab Spring or the Euromaidan Revolution in Ukraine: social media was this new thing that was going to allow sharing, allow new communities to be formed, and ultimately spread democracy everywhere in the world. Fast forward to today and of course the narrative has radically changed; the side effects of this transparency have become well recognized. There's the recent book by Zuboff about surveillance capitalism, on how our privacy is being commoditized by large corporations. Mental health problems: the graph there is from a paper by Jean Twenge on how social media use is positively correlated with incidence of depression, especially among teenage girls. And then of course we're not even talking about the challenges of fake news, echo chambers, and so on; the list only seems to be growing. We're not going to go into all of that today, but just as an illustration, here is one of the favorite graphs of stock market investors, of how narratives can form and come to dominate stock markets. At the top, everybody's talking about a new paradigm, and that's of course just before the crash. I'm not saying something exactly like this is going on with open science, but there is a lot of enthusiasm in some quarters. Here is a particularly clear example, from an edited volume, where more scientific transparency is just unambiguously good, with no bad side effects to be seen.
It talks about the second scientific revolution. I'm just reading here: picture a situation in which scientists would be able to publish all their thoughts, results, conclusions, data, and such as they occur, openly and widely available to everybody. Thanks to the internet this is of course possible, and in consequence knowledge could flow quickly, regardless of institutions and personal networks. Research results could be published as they occur; there would be no need to wait until results are complete enough to support a full paper. It sounds like a fantastic thing. But of course, present these couple of sentences to a philosopher and they'll begin to parse the first sentence: picture a situation in which scientists would be able to publish all their thoughts. All? Are you sure about that? All their thoughts, and as they occur, with no filtering? Okay, so there are some extreme examples out there. But there are also what I would regard as sophisticated and very plausible versions of this thought. A very nice recent paper by Remco Heesen and Liam Bright defends the abolishment of pre-publication peer review. They make a very strong case, one that will make most readers seriously doubt whether we really still need, or even want, pre-publication peer review. One of the assumptions in their view is that citation count is a better long-term measure of scientific worth: peer review is a short-term measure, and it doesn't always predict well how an idea or piece of scientific work will be taken up by the community. They also have a brief discussion about prestige biases, which will be central to this talk; they say there's no evidence that abolishing peer review would intensify inequality in science. These are themes that are going to come back. This talk isn't going to target their view as such; it's going to present a more general picture.
But this is an instance of how it's really unclear what the downsides are of this move towards transparency. My basic idea focuses on the challenge of information overload, which is crucial for any decision-making process: how to select information in a complex environment. Gatekeeping is one way of doing that. If you get rid of gatekeeping, we're still left with the information overload, so what are the other strategies? In the era of open science, the two main ones are social media recommendations and search algorithms. I'll be analyzing these in more detail to show how the value of transparency is manifested in these strategies. I'm going to argue that social media and search algorithms are actually riddled with status biases, and that they amount to new invisible barriers. Invisible because, while they're there, they're not manifested in job titles or institution names and so on. Instead, they are pockets of opaque decision making: they're not transparent, and they effectively exclude some scientists.

Okay, let's move on. Quick background: what is social status, and why are our minds so focused on social status, this ranking of individuals within a community? And prestige as a form of status: prestige is peculiar to human beings, and it's closely connected to the importance of social learning for the human species. Why do we have gatekeepers in the first place? Why do we learn from others? The basic answer is that individual learning is really weak. The whole science of cultural evolution is based on that one insight: individual learning is very weak, and social learning is extremely important. Culture is one way of structuring social learning; we learn best in structured environments. There are some books there, by Sterelny and by Henrich, that develop these insights. And status is something more general than prestige itself.
Status in general is a group-level adaptation to minimize conflict about who gets what; think of pecking orders in chickens. Cultural evolutionary anthropologists often distinguish between two forms of status: dominance, which has to do with force or the threat of force, and prestige, which is indicative of a skill or a service from which others would benefit. We often talk about prestige in a very negative way, but within a broader picture, if you look at the manifestations of status in the natural world, prestige is a pretty benevolent way of structuring status hierarchies, namely in such a way that social learning is promoted. So we pay a lot of attention to high-prestige individuals. And that is very relevant for the academic environment, which, by first approximation, also has a serious challenge of information overload. Just think of the number of articles published per year: in 2021 we're at about 3 million, and it's increasing at around 3% each year. Three percent may not sound like much, but that's still an exponential increase. That's an overload for any one researcher, obviously; reading and even understanding a scientific article requires a lot of effort, which we can take as a given. So that creates a need for some system of signaling about what to invest time and energy in. There are very basic adaptations here: having a title is already one way of signaling the information content of a piece. But we also rely on social indicators. Just think about basic academic practices from this perspective of status hierarchies and status biases. Take an editor: how would an evolutionary anthropologist analyze the phenomenon of an editorship? Well, it's a service to the community, right? Editors do a lot of work for the community. But it's also a high-prestige position.
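As a back-of-the-envelope illustration of why a steady 3% growth rate is still exponential, here is a small sketch. The 3-million baseline and 3% rate are just the figures quoted above, used for illustration only:

```python
# Compound growth of annual publication output.
# Assumptions (from the talk, illustrative only): ~3 million articles
# published in 2021, growing at ~3% per year.
import math

def articles_per_year(base: float, rate: float, years: int) -> float:
    """Annual output after compounding `rate` growth for `years` years."""
    return base * (1 + rate) ** years

def doubling_time(rate: float) -> float:
    """Years until annual output doubles at a fixed growth rate."""
    return math.log(2) / math.log(1 + rate)

base, rate = 3_000_000, 0.03
print(round(articles_per_year(base, rate, 10)))  # ~4.03 million a decade on
print(round(doubling_time(rate), 1))             # 23.4 years to double
```

So at the quoted rate, the yearly flood of articles roughly doubles every quarter century, which is the sense in which information overload "will likely only increase".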
Editors also have a lot of influence, at least in principle; maybe 50 or 60 years ago it was more apparent. There's a very interesting article by Krist Vaesen and Joel Katzav on how changes of editorship at major journals like the Philosophical Review and Mind had a huge impact on the evolution of 20th-century philosophy. Academic rank: it sounds quaint, but if you look carefully at some of the data, academic rank seems to be having an increasing impact on how we direct our attention, not a decreasing one. There's a recent article, whose citation I forgot to put on the slide, showing that citation inequality has been increasing: the most highly ranked members of the academic community are gaining a proportionately larger share of the total citation count. Why? Basically because their co-author networks are expanding by many multiples, and in this way they're capturing a larger share of citations. The full story is also that more and more academics are temporary, so they quickly leave the profession and never build up any kind of reputation. That inequality means that senior academics can capture more of the citations. The phenomenon of gift authorship is precisely an indication of how much academic rank still matters in how scientists direct their attention: which papers to read, which papers to take seriously. Gift authorship is the practice, considered a form of misconduct, of giving authorship to a senior academic in order to increase a paper's visibility and credibility, and thus its citation count. It's actually a rational response to the academic environment, even though it's frowned upon, of course. So, getting back now to the issue of pre-publication peer review, which, as indicative of the role of gatekeepers, will be the main example for the rest of the presentation. What is pre-publication peer review? Simply a vetting by a small group of experts who are trusted by assumption.
That vetting is then a signal of a paper's trustworthiness to the larger community. If we do away with it, and it's plausible that we can, certainly in the age of open science, it's precisely because there are now different ways of sifting through the massive amount of academic material published every year on top of all the existing work, namely search algorithms and social media recommendations. So I want to look in more detail at the role of biases in search algorithms and social media recommendations. I'm not saying outright that there are downsides, but there are questions to be asked about whether the drive towards transparency doesn't produce new forms of opaqueness in places where we might not expect them. Search algorithms first: let's focus on Google Scholar, as it is increasingly the dominant place where academic material is sifted through. There's PubMed and Scopus and so on, but Google Scholar is more and more accepted and more and more used. The algorithm of Google Scholar is not public knowledge. This is what Google Scholar says about its own algorithm: it aims to rank documents the way researchers do, weighing the full text of each document, where it was published, who it was written by, as well as how often and how recently it has been cited in other scholarly literature. Of course, nobody can blame Google for not making it public, even though it's strange that in the age of open science we're increasingly relying on a proprietary search algorithm. But if you make an algorithm like that public knowledge, it can be relatively easily gamed, and that might be one reason why they don't want to do it. Nonetheless, some time ago, back in 2009, it was partially reverse engineered by two German computer scientists, and what they found is that citation count is the dominant factor in the Google Scholar algorithm. So no surprises there.
Nonetheless, its proprietary nature is surely worrying for the core values of open science. But I think the problems go deeper. There's not that much information about the algorithm of Google Scholar, but there is much more about Google's search algorithm for its main search engine; it's more public. The founders of Google published it in a scientific article back in 1999: PageRank. PageRank was purely formal, so it didn't directly evaluate the content of pages; it only looked at the number of connections. But Google quickly moved beyond it because it was open to abuse, through strategies like keyword stuffing or putting in a lot of links to other websites and so on. And I find this amazing: Google now employs a lot of what they call search quality raters to tweak the results, or how different factors are weighted. They have a whole long document of guidelines for their search quality raters. And when you do a bit of a deep dive into how precisely these decisions about search quality are being made, you get to, and there's a little citation there, "Your Money or Your Life" topics: financial websites and health websites, where careful checks for reputation are required. And what is reputation based on? Evidence from experts, professional societies, awards, and so on. So Google's search algorithm is anchored on the judgments of experts and on awards, which are in turn given by professional societies. I hadn't used a meme yet in the talk, so I thought this was a good place to begin. I was flabbergasted the first time I read about this, but obviously I'm naive and place unthinking trust in Google. I'm just amazed at how search algorithms, which are presented as a neutral sifting through of the information on the world wide web, boil down to the very same old-fashioned markers of quality, namely professional judgments.
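Since PageRank serves above as the paradigm of a purely formal, content-blind ranking, here is a minimal sketch of the idea. The four-page web graph is invented for illustration, and real PageRank as published by Brin and Page has many refinements beyond this power iteration:

```python
# Minimal PageRank sketch: a page's score depends only on the link
# structure, never on the content of the pages themselves.

def pagerank(links, damping=0.85, iters=50):
    """Power iteration over a dict {page: [pages it links to]}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            # A page splits its rank evenly over its outgoing links;
            # a dangling page (no links) spreads it over all pages.
            targets = outs if outs else pages
            share = rank[p] / len(targets)
            for q in targets:
                new[q] += damping * share
        rank = new
    return rank

web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
ranks = pagerank(web)
print(max(ranks, key=ranks.get))  # C: it collects the most link mass
```

The gameability the talk mentions is visible right in the sketch: adding inbound links to a page raises its score without any change to its content, which is exactly why purely formal heuristics invite keyword stuffing and link farms.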
So, to put it in steps: we want to take decision-making power away from the old model of gatekeepers, the professions and peer review and so on, and instead hand some of it, or a lot, who knows, to private corporations. That's already a strange move in itself for something like open science. But these private corporations have found that their purely formal algorithms are open to abuse; they can be gamed. So they're always tweaking and correcting them. How? By ultimately following the judgments of the professional gatekeepers, via the content evaluators they hire. So what's the lesson I want to draw from this? It's not a value judgment; it's not that this is bad. Google undoubtedly had very good reasons for the step it took, given all sorts of abuses of the search engine. But it does show how the drive to transparency makes other changes necessary. Based on Tversky and Kahneman, of course, the typical distinction made in communication science is between heuristic decision strategies and analytic strategies: the first is where only purely formal properties are looked at, and the second is where the agent actually goes through the content and makes a judgment call. In this distinction, peer review is more on the analytic side: by assumption there are at least two people carefully going through the text. There are biases there as well, but it's designed to be analytic. Take that away and it turns out we can't make purely heuristic strategies work. So Google has come full circle and is coming back to the view that these heuristics need to be anchored on analytic decision-making strategies. Okay, so much for my discussion of transparency in search algorithms. Moving on now to the second class of strategies for deciding what scientific material to pay attention to: social media recommendations.
It's increasingly clear that promotion on social media makes a big difference for uptake by the community. Of course, these are correlations, not causations. It could be that the most influential researchers to begin with are the ones who tweet the most; that's probably part of the effect as well. I don't know whether these studies actually disentangle that, by doing the intervention and seeing how much causal difference tweeting actually makes. But the effect sizes are huge if you just look at the difference between tweeted and non-tweeted papers. These figures are for coloproctologists and thoracic surgeons, and of course I don't know to what extent they would generalize. But in any case, there are some indications that social media status, the number of followers, one's presence on social media, can translate into academic status, citation count. And when citation count is taken as a proxy for scientific worth, as proposed by some, as a true measure of long-term worth, then we have a situation where promotion on social media can strongly impact perceptions of scientific worth. The question then is: what determines social media status? And here the old biases come into play. Just to give one example: gender bias. Again, I'm basing this on just one study, and of course it sounds plausible; it fits a certain narrative, because this is something we would be expecting, so keep that caveat in mind. But the results are nonetheless striking. This is a recent study about gender differences in Twitter use. Do rank, institutional prestige, and so on also promote social media status? I don't know; I haven't come across any studies. But of course one wouldn't be surprised if they did. So, to recap the argument and systematize it a little more.
I pointed to a feature of the academic and scientific environment: information overload. The developments of open science won't change information overload; in fact, as we publish more and more rapidly, information overload will likely only increase. If you couple that with proposals to abolish pre-publication peer review, then more work needs to be done by the other selection mechanisms, such as social media recommendations or search algorithms, or a combination of both, of course, because what we see appearing on our social media feeds is itself selected by algorithms. And so more of the decision making about what is valuable and what is not is being done by these opaque, proprietary algorithms. I'm not saying this is necessarily negative for science, although it does raise some doubts. But the argument can be made with more certainty that it undermines the idea that you can just promote transparency in a consistent way. If you promote transparency in one area, in publishing and communication, that increases information overload; increased information overload increases the need for selection mechanisms; and how precisely those selection mechanisms operate becomes less transparent. And this lack of transparency constitutes, in different ways, new, less obvious barriers to open science. Proprietary search algorithms are one. Promotion on social media represents another set of barriers: barriers based on gender, on rank and institution, and also a barrier for those who do not wish to invest the same level of time and energy in social media. So we're in a strange situation where we complain about how much time we need to invest in grant applications, and about time wasted on peer reviewers who are unfair and so on, while our time spent on social media keeps increasing. Where's the discussion there? Is that always such a good idea?
And finally, the erosion of norms of collegiality and trust, which many people have talked about and which I've touched upon in other work. Oh, I see Charles reappearing; I'm done, actually. So, the takeaway: this has been a predominantly negative argument, but it's not really against open science. It's more inspired by the idea that there are no free lunches. I'm just wondering what the price to pay is; there has to be some price to pay. If we reduce our reliance on one flawed mechanism, we're going to increase our reliance on other flawed mechanisms. Is that a problem? Not necessarily, but we should be aware that these mechanisms are flawed. There is a remaining, residual uncertainty about the value of any academic's or scientist's work, yet these selection mechanisms are constructed in such a way that they give the appearance of clarity, of a lack of uncertainty, about the value of work. So, parting words: open science surely represents the future, but let's be aware of the challenges. Thank you.

Fantastic. I didn't mean to pressure you; I was just making sure my camera would still work. No, I was checking my own timing. We've got a number of questions already coming in, so let me get right to it. One from Stefan Hesperigan, and this is a nice point that probably relates to your last takeaways; there's a good connection here. This talk seems to describe a world in which libraries no longer exist, right? We have institutions that are full of professionals who are trained to help us cope with information overload. So how do you think the picture changes if we start to rethink the role of libraries as mediators or facilitators of open science?
There's actually a follow-up from Alex, in a comment on the question: is there a call for a new role for library science to play here as a field, perhaps? New forms of curation, basically.

Right, so the traditional task of the librarian is to make some type of pre-selection: which books are worth adding to the collection and which aren't? Yes, I know I am describing a world where libraries don't exist, and I don't know to what extent that describes most academics today; surely not all. There are those who need to do archival work. But I think in the natural sciences, paper versions of journals are less and less being published. You could say: well, we need more digital librarians. And I guess you see signs of that emerging, like annotated bibliographies and systematic reviews, of course; this work of trying to systematize knowledge in a curated way.

But how would you square that with some of the fundamental values of open science? Because you're then giving power to this librarian to make decisions about what is valuable and what is not.

Well, the scope of my argument just concerned how consistent one of the rationales underlying open science is, namely that more transparency is a good thing, and there's just too little talk about the problem of information overload. So I probably would agree that we will need librarians, or some type of librarian, in the future.

Sure. Next question coming in from Eugenio Petrovich, who says: thanks for your talk, very interesting. I'm wondering, however, whether your argument could be recruited by the big publishers to defend the expensive paywalls behind which they're guarding scientific publications; paywalls might be the just cost for scientific quality, good peer review, and so on.
And so they can justify their whole profit margin. He adds in a comment: since one of the big arguments for open science is monopoly busting, maybe that's a consideration that cuts against some of the other claims here, perhaps.

When you make a critical argument, there's always a giant danger of it being abused. In the discussions about evolution versus creationism, every biologist who had some critical remarks about Darwinism was immediately pounced on by the creationists. So I would just say, first: let's not make these critical arguments known to the big publishers and keep them among a more closed circle. That's not very open, of course. But my argument was mainly focused on this idea of abolishing peer review, and that goes way further than just abolishing the kind of rentiership of the big publishing houses like Springer and so on. That would be the abolishment of journals and of editors: we would just post everything to one huge online repository where everything is there, and you couldn't know whether something is a draft still to be revised or not. I'm targeting that more extreme scenario and pointing to reasons why it doesn't really make sense in the end, if you think through the consequences. But the position of the big publishing houses? That's almost not an intellectual question anymore; it's not an academic question, it's just a political struggle. And I don't want to become too political here, even though probably everybody agrees.

Fair enough. No, fair enough. Next question; let me see, it's not re-sorting my questions by votes. The next question comes in from Rose Trapeze.
Do you know whether there are any other search engines or databases with published search algorithms? How do they fit into the picture? Is there a way to open this black box a little more?

Probably. The basic problem is that Google is so entrenched, because apparently, to make a good search algorithm, you need more than just a bunch of computer whizzes with a great algorithm: you need a lot of data and a lot of users using your search engine so you can actually tweak it. So you can figure out, say, what the most common spelling errors are, or what people in Louvain are mainly searching for, and give different results for Louvain than for New York, and so on. So even if the algorithm were published, I don't know how much it would change the position of Google. The second thing to keep in mind is that there is a principled reason for not publishing it: once it's public knowledge, it can be gamed in abusive ways. I think that's the main argument Google would use for not publishing it, and with some justification. There's this whole science of search engine optimization: people know what Google likes, and there's an art or a science to it, so they try to tweak their content in such a way that it rises up the rankings. Through some reverse engineering, people know some of the large-scale features of what Google likes. Of course, a virtuous form of that would be where you game Google just by publishing quality content, and then there's really no problem; but that's not always the situation. If you rely on heuristics, on formal characteristics, you can have the appearance of quality without the content of quality. That's why these algorithms can be gamed in vicious ways.
Sure. Oh, just a quick note from the chat, actually: Dan Hicks mentions that he's wondering whether Dimensions.ai or Microsoft Academic might be more open than Google Scholar; there could be an interesting comparative data set there, in seeing whether there's something to be learned from their guts.

I haven't looked into that.

Next question comes in from Sven Lindquist, who asks: do you think there's sufficient appetite for a slow science movement, a movement away from all the flash-in-the-pan stuff that gets promoted on social media? And how could that kind of thing exist in the current reward system of doing science?

I don't know; it'll probably get worse before it gets better. You're right that there should be some type of slow science movement, and there is; I remember coming across some references, let's do slow science. But that's the problem: the status biases have such a strong hold on our minds. You see a person whose h-index is 125 and think: oh my goodness, that is so amazing. And it seems to be entrenched that way, because it's viewed as a status indicator and everybody accepts it as a status indicator. It's difficult to change until the downsides become too great and too obvious to deny. So I'm not super optimistic about calls for slow science just yet, because there's a difference between calling for it and actually changing how administrators think about science, or that level of scientists who are politicians and managers, the beneficiaries of the system. They're also very slow to change.

Yeah, yeah. Let me ask for a quick response, because I just want to eat into a tiny bit of our upcoming ten-minute break; we have a long break here, so I can get away with this.
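Since the h-index comes up here as the status indicator par excellence, a small sketch of how it is computed may be useful: the h-index is the largest h such that the researcher has h papers with at least h citations each. The citation counts below are made up for illustration:

```python
# h-index: largest h such that h papers each have at least h citations.

def h_index(citations: list[int]) -> int:
    counts = sorted(citations, reverse=True)
    h = 0
    for i, c in enumerate(counts, start=1):
        if c >= i:
            h = i     # the i-th best paper still has >= i citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
print(h_index([25, 8, 5, 3, 3]))  # 3
print(h_index([]))                # 0
```

Note that very different records, a few blockbuster papers versus many moderately cited ones, can share the same h-index, which is part of why a single status number is such a crude proxy for the long-term worth discussed earlier.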
Just to ask the last question, from Luca Revelli, and I'm going to summarize his question a little, apologies Luca. Do you think one way to interpret what's happening here is to say that open science needs to come with open social media? Maybe part of the problem is the nature of commercial social media, as opposed to a hypothetical future academic social media; maybe there's a way out there to resolve some of those issues. Or do you think the arguments you made are too tied up with human nature, and we'd just fall back into the same patterns?

Yeah, I'm a bit more inclined to the second. I don't know what an open academic social media as described there would look like. You might think: okay, let's hide the number of likes. They're experimenting with that; I think Facebook was experimenting with it in some countries, Australia I believe, on a local level, hiding the number of likes because it's a source of anxiety for users. But one of the reasons to have this type of like system in the first place is precisely to allow us to make some kind of distinction about which posts to focus on. So, my more positive message, just so as not to be entirely pessimistic, is this: the discussion is often too focused on how we should create ideal social structures. But perhaps there are just no flawless social structures, social media or otherwise, and some degree of healthy skepticism is in order: let's just not give too much credence and too much value to the evaluations that we see on social media and in search algorithms.