Please give a warm welcome to Francesca, Theresa and Judith. Judith, you have the stage.

Thank you. I believe that scientific performance indicators are widely applied to inform funding decisions and to determine the availability of career opportunities. Those of you who work in science, or who have had a look into the science system, might agree. We want to understand evaluative bibliometrics as an algorithmic science evaluation instrument, to highlight things that also occur with other algorithmic instruments of evaluation. We start with a quote from a 2015 publication, which reads: "As the tyranny of bibliometrics tightens its grip, it is having a disastrous effect on the model of science presented to young researchers." We have already heard Hanol's talk, which also addressed problems in the science system and reputation as measured by indicators. The question is: is bibliometrics the bad guy here? If we speak of a tyranny of bibliometrics, who is the actor doing this? Or are bibliometricians the problem? We want to contextualize our talk within the growing movement of reflexive metrics among those doing science studies, social studies of science, scientometrics and bibliometrics. The basic idea is to accept accountability: if we do bibliometrics and scientometrics, we have to understand the effects of algorithmic evaluation on science, and we will try not to be the bad guy. The main mediator of science evaluation, as perceived by researchers, is the algorithm. I will now hand over, not the microphone, but the talk, to Theresa, who is going to speak about the datafication of science evaluation.
So when we think about the science system, what do we expect? What can society expect from a scientific system? In general we would say: reliable and truthful knowledge that is scrutinized by the scientific community. Where can we find this knowledge? Normally in publications. But with these publications, can we actually say whether science is good or bad, or whether some science is better than other science? In the era of digital publication databases there are big data sets of publications, and these are used to evaluate and calculate the quality of scientific output. From this metadata we can tell who the author of a publication is, what the author's home institution is, or which citations appear in the bibliographic information. This is used in the calculation of bibliometric indicators. For example, with the journal impact factor, a citation-based indicator, you can compare different journals and perhaps say which journals are performing better than others, or whether a journal's impact factor has increased or decreased over the years. Another example is the Hirsch index (h-index) for individual scientists, which is also widely used when scientists apply for jobs: they put these numbers in their CVs, and supposedly this tells you something about the quality of the research they are conducting. With the availability of the data we see an increase in its usage, and in a scientific environment in which data-driven science is established scientific conduct, decisions regarding hiring or funding rely heavily on these indicators. And there is perhaps a naive belief that indicators which are data-driven, relying on data collected in a database, are a more objective metric. Here is a quote by Rieder and Simon.
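Both indicators mentioned here reduce to very small computations over citation metadata, which is part of their appeal. A minimal sketch of each (the real Journal Citation Reports calculation additionally involves editorial decisions about what counts as a citable item, which this toy version ignores):

```python
def h_index(citation_counts):
    """Hirsch index: the largest h such that the author has
    h papers with at least h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

def impact_factor(citations_received, citable_items):
    """Journal impact factor for year Y: citations received in Y to
    items published in Y-1 and Y-2, divided by the number of citable
    items published in Y-1 and Y-2."""
    return citations_received / citable_items

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with at least 4 citations
print(impact_factor(200, 80))     # 2.5
```

The point of showing this is how little the formulas contain: everything contested about these indicators lives in the input data, not in the arithmetic.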
"In this brave new world, trust no longer resides in the integrity of individual truth-tellers or the veracity of prestigious institutions, but is placed in highly formalized procedures enacted through disciplined self-restraint. Numbers cease to be supplements." So we see a change from an evaluation system relying on expert knowledge to a system of algorithmic science evaluation. In this change there is a belief in the depersonalization of the system, and a perception of algorithms as the rule of law. But when we look at the interaction between the algorithm and scientists, this relationship is not as simple as it seems. Algorithms are not in fact objective: they carry social meaning and human agency, and they are used to construct a reality. Algorithms do not come naturally; they do not grow on trees, to be picked by scientists and by those who evaluate the scientific system. So we have to be reflective and ask which social meanings an algorithm holds. The code an algorithm runs embeds subjective meaning and agency, and you cannot simply say it is a perfect reconstruction of the reality of the scientific system; the belief that it tells you something definitive about the quality of research is misplaced. Think of the example of citation counts: the algorithm reads the bibliographic information of a publication from the database. Scientists cite papers that relate to their studies, but we do not actually know which of these citations are more meaningful than others. They are not easily comparable, but the algorithm gives you the belief that they are. Relevance is not easily put into an algorithm, and there are different types of citations.
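The flattening described above is visible in the data model itself: a citation-counting algorithm sees only edges in a metadata graph, never why a paper was cited. A minimal sketch (the records and field names are made up for illustration, not taken from any real database schema):

```python
from collections import Counter

# Illustrative bibliographic records. Each citation has a reason in
# reality, but the database stores only the link itself.
records = [
    {"paper": "A", "cites": [("B", "builds on"), ("C", "refutes")]},
    {"paper": "D", "cites": [("B", "background"), ("C", "builds on")]},
]

counts = Counter()
for rec in records:
    for cited, _reason in rec["cites"]:
        counts[cited] += 1  # the reason is discarded: a refutation
                            # counts the same as an endorsement

print(counts)  # B and C end up looking identical
```

Paper C is cited once to be refuted and once as a foundation, yet its count equals paper B's; that equivalence is a modelling decision baked into the code, not a fact about research quality.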
Scientists also perceive the use of these algorithms as a powerful instrument, so the algorithm holds sway over scientists, because they rely so much on those indicators to further their careers, to get a promotion, or to get funding for their next research project. We thus have a reciprocal relationship between the algorithm and the scientists, and this creates a new construction of reality. We can conclude that governance by algorithms leads to behavioral adaptation in scientists. One example, involving the Science Citation Index, will be given by Francesca.

Thanks for the handover. I am focusing on reputation and authorship, as you can see on the slide, and let me start with a quote by Eugene Garfield: "Is it reasonable to assume that if I cite a paper, I would probably be interested in those papers which subsequently cite it, as well as my own paper? Indeed, I have observed on several occasions that people prefer to cite the articles I had cited rather than cite me. It would seem to me that this is the basis for the building up of the logical network for the citation index service." The Science Citation Index (SCI) described here was mainly developed to solve the problem of information retrieval. Eugene Garfield, founder of the SCI, began to note a huge interest in reciprocal publication behavior. He recognized the increasing interest in the index as a strategic instrument to exploit intellectual property, and indeed the interest in the SCI and its data successively became more relevant within the disciplines, and its usage extended. Later, Derek de Solla Price and other social scientists called for better research on the topic, as it also amounted to a crisis in science, and stated that if a paper was cited once, it would get cited again and again. The main problem was that the rich would get richer, which is also known as the Matthew effect.
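The "cited once, cited again" dynamic behind the Matthew effect can be reproduced with a toy cumulative-advantage simulation. This is an illustration of the mechanism only, not a model of any real citation database; all numbers are invented:

```python
import random

random.seed(42)

# Cumulative advantage: each new citation goes to paper i with
# probability proportional to (citations_i + 1), so already-cited
# papers are more likely to be cited again.
citations = [0] * 100          # 100 papers start uncited
for _ in range(1000):          # hand out 1000 citations one by one
    weights = [c + 1 for c in citations]
    winner = random.choices(range(len(citations)), weights=weights)[0]
    citations[winner] += 1

citations.sort(reverse=True)
top10_share = sum(citations[:10]) / sum(citations)
print(f"top 10% of papers hold {top10_share:.0%} of all citations")
```

Under uniform random citing, the top ten papers would hold roughly 10% of citations; with the feedback loop they hold far more, which is the "rich get richer" pattern in miniature.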
Finally, the SCI in its use turned into a system which was, and still is, used as a reciprocal citation system, and it became a central and global actor. Once a paper was cited, the probability that it would be cited again was higher, and it would even extend its own influence on a certain topic within the scientific field: it shaped which articles people would read and which topics they would do research on. This phenomenon gave rise to an instrument for disciplining science and created power structures. Let me show you one example closely connected to this phenomenon. Are there any astronomers or physicists in this room? A few, that's great. On the next slide we have a table with a time window from 2010 to 2016. Social scientists from Berlin found that co-authorship within the field of physics grew by 58 on a yearly basis in this window, which is already very high. But they also found a far more extreme case: one paper of roughly 7,000 words which listed more than 5,000 authors. On average, each researcher mentioned on that paper contributed little more than one word. That sounds strange, and of course you have to see it in a certain context; maybe we can talk about this later, because it has to do with the ATLAS particle detector, which requires high maintenance. But still: the number of authors per paper, regardless of which scientific field we are talking about, has generally increased in recent years. And it remains a problem, especially for reputation, that there is such high pressure on today's researchers. Still, of course, we have ethics, and research requires standards of responsibility.
There are several such standards; one is here on the slide, the Australian Code for the Responsible Conduct of Research, which says that the right to authorship is not tied to a position or profession and does not depend on whether the contribution was paid for or voluntary. It is not enough to have provided materials or routine technical support, or to have made the measurements on which the publication is based: substantial intellectual involvement is required. So this could be one rule to work by, to follow. Still, the problem of reputation remains, and I hand over to Judith again.

Thank you. We are going to speak about digitization now. If you put this point about reputation like that, you may say: the researcher finds something in his or her research and addresses the publication, describing it, to the community, and the scientific community rewards the researcher with reputation. Now the algorithm, which is perceived to be a new element, mediates the visibility of the researcher's results to the community and also mediates the rewards: the career opportunities, the funding decisions, and so on. What happens now, and what can happen, is that the researcher addresses his or her research also to the algorithm: by citing those, evaluated by the algorithm, whom he wants to support, and by strategic keywording, and so on. So the one new thing might be that the algorithm is addressed as a recipient of scientific publications. And it is rather far-fetched to try to discriminate between so-called invisible colleges and citation cartels. What do I mean by that? Invisible colleges is a term for people citing each other.
They may not work together in a co-working space, but they do research on the same topic, so it is only plausible that they cite each other. If we look at citation networks and find people citing each other, that does not necessarily have to be something bad. But we also have people who are concerned that there might be citation cartels: researchers citing each other not because their research topics are closely connected, but to support each other's career prospects. And people do try to discriminate those invisible colleges from citation cartels ex post, by looking at metadata networks of publications; they consider that a problem, and we have a discourse on it in the bibliometrics community. I will show you some short quotes on how people talk about these citation cartels. For example, Davis wrote in 2012 that George Franck warned us of the possibility of citation cartels: groups of editors and journals working together for mutual benefit. We have heard about the journal impact factor; it is believed that editors talk to each other: you cite my journal, I cite your journal, and we both boost our impact factors. So we have people trying to detect those cartels. And Martin and colleagues wrote that we have little knowledge about the phenomenon itself, and about where to draw the line between acceptable and unacceptable behavior. So we are having a moral discussion, a discussion about research ethics. We also find discussions about the fairness of the impact factor: Young and colleagues wrote that disingenuously manipulating the impact factor is a significant way to harm its fairness. That is a very interesting thing, I think, because why should an indicator be fair? To believe that we have a fair measurement of scientific quality, relevance and rigor in one single number, like the journal impact factor, is not a small thing to say. And we also find a call for detection and punishment.
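The ex post detection mentioned above usually means looking for unusually heavy reciprocal links in a journal-level citation graph. A minimal sketch of that idea (the graph, the numbers and the threshold are invented for illustration; real cartel-detection methods are considerably more involved, and heavy reciprocity alone does not prove a cartel, as the invisible-college argument shows):

```python
# Journal-level citation counts: (citing, cited) -> number of citations.
citations = {
    ("J1", "J2"): 40, ("J2", "J1"): 35,   # heavy mutual citing
    ("J1", "J3"): 3,  ("J3", "J1"): 2,
    ("J2", "J3"): 4,  ("J3", "J2"): 1,
}

def suspicious_pairs(citations, threshold=20):
    """Flag journal pairs with heavy citation traffic in BOTH directions.
    The hypothesis: topic-driven citing tends to be asymmetric, while a
    cartel shows strong reciprocity."""
    flagged = []
    for (a, b), n in citations.items():
        back = citations.get((b, a), 0)
        if a < b and n >= threshold and back >= threshold:
            flagged.append((a, b))
    return flagged

print(suspicious_pairs(citations))  # [('J1', 'J2')]
```

Note how much normative weight sits in the arbitrary `threshold`: moving that one number redraws the line between an invisible college and a cartel, which is exactly the line the bibliometrics community says it does not know where to draw.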
Davis also wrote: "If disciplinary norms and decorum cannot keep this kind of behavior at bay, the threat of being delisted from the JCR may be necessary." So we find moral concerns about right and wrong, we find the invocation of the fairness of indicators, and we find the call for detection and punishment. When I first heard about this phenomenon of citation cartels, which is believed to exist, something about it sounded familiar, because we have a similar information-retrieval discourse about ranking and power in a different area of society: search engine optimization. I found a quote by Page et al., who developed the PageRank algorithm, Google's ranking algorithm, in 1999 (it has changed a lot since then). They also wrote about the social implications of information retrieval through such indicators, claiming that "these types of personalized PageRanks are virtually immune to manipulation by commercial interests," and that, for example, fast updating of documents is a very desirable feature but is abused by people who want to manipulate the results of the search engine. That was important for me to read, because here too we have a narration of abuse and manipulation, and the perception that the indicator might be fair: we have a fair indicator, and people try to betray it. Then, in the early 2000s, I recall having a private website with a public guest book and getting links from people who wanted to boost their Google PageRank. Shortly afterwards Google decided to punish link spam in their ranking algorithm, and I got lots of emails from people saying: please delete my post from your guest book, because Google is going to punish me for it. We may say that this search engine optimization discussion is now somehow settled: it is accepted that Google's ranking is useful; they have a secret algorithm, but it works, and thus it is widely used.
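PageRank itself, like the bibliometric indicators earlier, is a small iterative computation over a link graph: roughly, a node's rank is the probability that a random surfer lands on it. A minimal power-iteration sketch on a toy graph (simplified: it assumes every node has at least one outgoing link, which real implementations must not):

```python
def pagerank(links, damping=0.85, iterations=50):
    """links: node -> list of nodes it links to. Plain power iteration
    on the classic PageRank recurrence."""
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # every node gets the teleportation share...
        new = {n: (1.0 - damping) / len(nodes) for n in nodes}
        # ...plus a damped share of the rank of each node linking to it
        for n, outs in links.items():
            share = damping * rank[n] / len(outs)
            for m in outs:
                new[m] += share
        rank = new
    return rank

# A links to B and C; B and C link only back to A, so A accumulates rank.
ranks = pagerank({"A": ["B", "C"], "B": ["A"], "C": ["A"]})
print(max(ranks, key=ranks.get))  # 'A'
```

The guest-book spam described above attacks exactly this recurrence: every incoming link is an endorsement by construction, just as every citation is an endorsement to a citation count, so manufacturing links manufactures rank.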
And although the journal impact factor seems to be transparent, it is basically the same thing: it is accepted as useful and thus widely used, and the same goes for the SCI and the like. We have another analogy: Google decides which SEO behavior is regarded as acceptable, punishes those who act against the rules, and thus holds an enormous amount of power, which, for example, led to the spread of content management systems with search engine optimization plugins. We have the same concentration of power in the hands of Clarivate, formerly Thomson Reuters, who host the database behind the journal impact factor: they decide who is indexed in the Journal Citation Reports and how the algorithm is implemented in detail in their databases. If we think this analogy through, we might arrive at interesting thoughts. But our time is running out, so here is the take-home message, too long; didn't read. We find that the scientific community reacts with codes of conduct to a problem which is believed to exist, namely strategic citation. We have database providers who react with sanctions: journals are delisted from the Journal Citation Reports to punish them for citation stacking. And we have researchers and publishers who adapt their publication strategies in reaction to this perceived algorithmic power. But if we want to understand this as a problem, we must not only react to the algorithm; we have to address the power structures and ask who holds these instruments in their hands, if we talk about bibliometrics as an instrument. So we should not only blame the algorithm: hashtag don't blame the algorithm. Thank you very much.

Thank you to Francesca, Theresa and Judith, or in the reverse order.
But thank you for shining a light on how science actually presents itself in its publications; as I said at the start, it is partly about scratching each other's backs. I have some questions here from the audience. This microphone, please.

Yes, thank you for this interesting talk. You may be familiar with the term measurement dysfunction: if you provide a worker with an incentive to do a good job based on some kind of metric, the worker will start optimizing for the metric instead of trying to do a good job, and this is kind of inevitable. So don't you think it could be treating the symptoms if we just react with codes of conduct, tweaked algorithms, or by addressing power structures, when instead we need to remove the incentives that lead to this measurement dysfunction?

I would refer to this phenomenon as perverse learning: learning for the grades you get, not from your intrinsic motivation to learn something. We observe that in the science system. But simply taking away the incentives would mean not wanting to evaluate research at all, which you probably do not want. And to whom would you address this call, this demand to have no indicators? So I give the question back to you.

Okay, questions from the audience out there on the internet? Please, your mic is not working. Okay, then I go to microphone number one, please sir.

I want to offer a provocative thesis: I think the fundamental problem is not how these things are gamed; the fundamental problem is that we think the impact factor is a useful measurement for the quality of science, because I think it is just not.

Yes, that was obvious. I would not say that the journal impact factor is a measurement of scientific quality, because no one has a definition of scientific quality.
What I can observe is only that people believe the journal impact factor to reflect some quality, and maybe they are chasing a ghost. Whether it is a valid measure is not so important to me; even if it were a valid measure, it would concern me how it affects science.

Okay, question from microphone number three, please.

Thanks for the interesting talk. I have a question about the 5,000-author paper. Was it the same paper published 5,000 times, or was it one paper with a ten-page title page?

No, it was one paper counting more than 7,000 words, and the authors and co-authors numbered more than 5,000.

Isn't it obvious that this is a fake?

Well, that is what I meant earlier when saying you have to see this within its context. Physicists work with ATLAS, this detector system, and as there were some physicists in the audience, they probably know how this works; I do not. But as they state, it is a great deal of work to operate, and, as I said, it requires high maintenance. So everybody who contributed was listed. Exactly, that's it. And whether this is ethically correct or not is something which needs to be discussed, right? This is why we gave this talk: we want to make this transparent and contribute to an open discussion.

Okay, I'm sorry, I have to cut off here because our mission out there in space is coming to an end. I suggest that you find each other somewhere, maybe the tea house. Sure, we're around. We're here. I would love a last round of applause for these ladies, for shining a light on how these algorithms are, or are not, working. Thank you very much.