Hi, everyone. Welcome to our session on bolstering accountability and self-scepticism in the meta science movement. We'll get started in just a minute or so as people trickle in. It'll be just the four of us this evening. Unfortunately, Hilda Bastian couldn't make it. And today's session will be mostly a panel discussion, so we'll start with just some brief remarks from each of the panelists. And then I've prepared some questions, but you should also all feel free. We have a long time and we want to spend a lot of time on your questions. So please add your questions in the Q&A section of Zoom, and we'll get to your questions after the panel remarks. Okay, I think we should get started. So I will go around to the different panelists. We didn't talk about order, so I'm just going to spring it on one of you to go first. And I think we agreed no slides, but if you have slides, go ahead and use them. Otherwise, I'll probably change the view so that we focus just on the speaker during their remarks, and then come back out to all of us. How about Bart, would you mind going first? Sure. No problem. Let me start by saying thank you for inviting me to be a part of this discussion, and I am quite looking forward to talking about accountability and skepticism for the next 90 minutes or so. I've prepared a few thoughts, especially on accountability in science and in public institutions. And I want to start by acknowledging the work of a PhD student that I supervised, who just finished last year, whose work has dealt with accountability, public accountability, and has stimulated me to think about the matter more and more deeply. So, Xiong Hui, thank you very much for that. If you're listening, I hope so at least. Otherwise, you'll find out that I talked about you.
But during this conference, and as part of many events that preceded it and papers that have been written, we've heard many definitions of meta science and various perspectives on the relationship between meta science communities and practices and other communities and practices, such as HPS, History and Philosophy of Science, and STS communities, as well as possible ways to strengthen existing ties and craft new ones. And accountability can be a great lens through which we can look at those relationships, what they are, what they might become. Accountability, and especially public accountability, is, however, also in itself an object of study, and scholarship on the topic can help guide us, asking questions that are worthwhile, that are valuable, that can help us in these conversations. And a prominent discussion of accountability, prominent because the Oxford Handbook of Public Accountability starts with it, describes it as the answerability of any actor to others who have a legitimate standing to demand various information about the performance of tasks, outcomes and procedures by that actor, and explanations and justifications of this performance. This describes accountability as a relationship between actors: between those who are held accountable and those who are holding them to account. And it's also relational because it links actors to others for whom they perform tasks, or who are directly or indirectly affected by their performance. And in the case of meta science, the relationship between meta science and science will inform what that accountability can look like. Is meta science a part of science? Or do we place it outside or above science? One of the original meta sciences, the philosophy of science, for the most part did not consider itself strictly to be a part of the sciences, at least the sciences in the Anglo-Saxon interpretation of the word.
And successors like the sociology of science at some point did adopt symmetry principles that prescribed that they themselves and their work were subject to the same restrictions and prescriptions as the practices that they were studying, including ideas about to whom they were accountable. So is meta science continuing this tradition? And what would that mean in terms of accountability? And when we talk about accountability, especially public accountability, we can also distinguish different ideas of what that word public actually means. In one meaning, public refers to a characteristic of the process: openness, transparency towards citizens, but also other potential publics, very much, of course, in line with what we expect practices like open science to be like. In politics, information is provided about the performance of institutional actors through hearings and debates in parliaments, and these are generally open to citizens; they're on TV or something like that. And assessments and judgments are shared with citizens. In science, characteristics of knowledge production and produced knowledge can also be open: open protocols, open data, open access publications, and the like. In another meaning of public, public refers to the recipients of account-giving: the public as those holding scientists, or possibly meta scientists, to account. In that way, accountability focuses on matters of public concern: the exercise of power, the conduct of public institutions and the quality of their labor. Science is one of these institutions and it wields power, so it should be held to account. And in a third meaning, public can refer to the perspectives and standards of the accounting process: how do you give and hold account? And that means that public accountability implies rendering an account in such a way that it is accessible to the public, but also with a view to the public's interests and what matters to them, what concerns them, their responsibilities.
And most people, when they think about the public domains in which science is active, tend to assume that it operates in a democratic context where diverse voices and perspectives play a role in discussing standards of accountability. But we know that not all our practices and institutions actually work that way; not even all of our democracies actually work that way, sadly. But in line with that, accountability has two faces, and we see both of them in the context of how scientists are held to account. One face is accountability as a virtue. Statements like "open science is just science done right" are a great example of a way that you could frame that. Accountable science contributes to democratic governance, and accountability in general is seen as quite desirable as a quality of institutions, officials and scientists: the willingness to act transparently and fairly, comply with rules, and do so as equitably as possible. But on the other hand, accountability can become a political or administrative mechanism to wield power, to assess how an institutional arrangement operates in terms of efficacy and what its effects are. And in that mode, the focus of accountability is not whether actors have behaved transparently or responsibly, but whether and how they can be held to account for incidents and potential misconduct, but also, in a positive sense, for potentially exceeding whatever target it is that they should have been striving for. Ultimately, it is a mechanism to discipline and possibly punish them. And if we talk about meta science as a community holding science to account, we can actually see mixtures of these images: appeals to virtue, but also appeals to disciplining and possibly exposing or punishing. So both ideas of what accountability can be. And it's quite important to study meaningful forms of accountability and, based on that study, to design meaningful forms of accountability.
Because, after all, the increasing focus on public accountability has not only produced openness and transparency in general, but in various institutions has also produced excessive costs, excessive bureaucracy and red tape, and can have negative effects on values that we actually try to promote in science, for instance trust and honesty. Especially when accountability is centered on discipline and punishment, the stick rather than the carrot in a sense, systems may embed assumptions of dishonesty and institutionalize distrust. Timesheets, monthly reports, output metrics, student evaluations, required responses to those: all of these are also forms that accountability can take. But so is open science. So it matters which trajectory we take when it comes to accountability. And overall, we tend to associate accountability with, let's say, civilization processes. Open science is presented as better, more civilized science. And meta science could be a community that fuels that civilization process, because it can help create distinction, demonstrate how and why one study or one group of studies meaningfully differs from another. And that process of creating distinction as an engine behind civilization was already described 80 years ago or so by the sociologist Norbert Elias. And I think it would be an interesting way to look at meta science and its relationship to science through that lens, especially the dynamics of making and maintaining distinction. Presenting oneself as more civilized requires maintenance of that distinction. In other words, when others follow and try to be as civilized as those elites who distinguish themselves by their civilized ways, more and perhaps more radical forms of distinction are required to uphold an ongoing civilization process.
And if that process is fueled by accountability, giving account, holding each other to account, there is this risk of excessive costs, bureaucracy, and all the things I just mentioned. Accountability exists on a spectrum between virtue on the one hand and some Kafkaesque monstrosity on the other. And we run the risk of taking a deficit of accountability for granted up to a certain point and proposing the continuous introduction of new forms that might be new bureaucracies. So I would ask that we promote reflection on how we hold each other to account and invite perspectives from HPS and STS to the table, but also organizational studies and the people I just mentioned, the ones who study accountability in public institutions and have been doing so for decades, in order to find out what sorts of data and what sorts of narratives we need. And where and when do we maybe even end a call for more accountability, in order to keep the Kafkaesque out of our labs? Okay, thanks. Great. Thank you so much. Great. I also wanted to mention I just received Hilda's comment, so she said she might send me some comments to read, so I will read some of her comments after the panelists who are here. So we'll save a little bit of time for that. So, Thomas, we'll go to you next. Thank you. Thanks for inviting me. It's nice to be here. Thanks also, Bart, for the introduction you did so far. I was wondering a little bit how much I wanted to say about accountability at a conceptual level, and I was a bit scared when you started, because there can be so many definitions that you might get into some sort of conceptual problem, basically discussing the terms. But so far, all I can say is that I very much enjoyed it and agreed with it. So I will try to perhaps add on to that a little bit. I'm a professor in public administration, specializing in accountability, so I feel happy to talk about accountability and a little less secure talking about meta science.
Actually, I learned about the concept after I was invited to be here, so I had to prepare myself a little bit for that. So what I have to say will relate both to meta science but also to science, and particularly, as I understood it, if meta science is about the science of science, I would say it is still science, and then we can talk about the accountability of science, basically. So I will say a few things. I have prepared a couple of discussion points, or simply observations, which we can relate to afterwards, perhaps if they spark any response or anything else. So my first comment would be that when we talk about accountability, it matters a lot whether you approach it as, say, a legal scholar or someone building institutions, or whether you approach it as a psychologist, someone looking at how people perceive it. There can be major differences between the systems we design and how people perceive them, and that is an important distinction, because you want those systems of accountability to have some effects on individuals, or at least have broader effects in the field, but they work through individuals. And individual perceptions of accountability are somehow, say, a pale mirroring of the official systems, but are also affected by other things. So if you want to understand and improve the accountability of science and meta science, it is important to look both at, say, the systems that we have, but also at how people, how researchers, perceive their accountability. The second comment would be that if we think about the accountability of scholars, I think it would be good to see research as a profession, just like, say, the medical profession. In the end, even when you're in a hospital, there is someone who is the boss of the hospital, but it's the medical doctor who decides what treatment to give at some point, right?
There's a medical professional standard with which you have to comply. And if you do a bad job, you will in principle be sanctioned by your peers, because it is the task of your peers, as fellow professionals, to say whether or not this was, say, responsible professional conduct, in a sense irrespective of what the manager of the hospital thinks. And I would say the same thing in principle also applies to scholars. We assess ourselves, we hold each other accountable, and whether or not something is, say, a responsible research practice is in the end something that we as a research community should decide. And we need, say, systems of accountability which are also felt as relevant, say, on a professional level. Well, third comment: if you then look at how accountability is enacted, and do so only a little bit superficially, one could be quite happy. Because before we publish something, there is a rigorous peer review. It is double blind, and people are really trying to assess whether or not research is of good quality. There are journals, and there are also people who run the journals, trying to keep that up to a high professional standard. If you apply for research funding, very often or mostly, somehow, peers are looking at proposals and saying whether these are good proposals. If you want to be promoted to a higher position, it's your peers who say whether or not you are of sufficient quality to be promoted. So to some extent, you would say, actually, we do have really good systems of professional accountability in our field. On paper, I would say. Okay. Then I think this is the fourth comment. I didn't really count, so this is a bit dangerous, but I think the fourth comment would be: even though we apparently have some systems which are, say, related to the idea of professional accountability, how they operate in practice is, let's say, suboptimal.
And at least one of the problems there would be that it's seriously understaffed, because the quality control depends on peer review. But I can't survey all sciences; I can only say this for what I can see. But reviewing is something you choose to do. You do not get time for it. There's no real expectation there. I mean, this may differ in different fields, I don't know, so I can only speak from what I see. But if this peer review is so important, for journals and review processes, but also for proposals and having the opportunity to do research, in my view we put very little resources into the quality of those review processes. If you go on Twitter and look at discussions about the quality of reviewing and the quality of journals, you know, there's quite some criticism sometimes about how we do that. Part of that is also because of something I take from Christopher Hood, who made a nice distinction between, say, competition and mutuality. So there's mutuality, there are group norms, so to some extent we're all scholars; but simultaneously there's competition between individuals, but also between institutions, and also between academic approaches. Certainly in the fields that I can see, but probably much more broadly, in many cases I believe the competition part is stronger, more influential, and tougher in its consequences than the mutuality dimension. In my view, where we see many different types of research approaches, the right question would not be whether or not someone performs a type of research you yourself like, or is in the same school that you are, but whether or not he or she is doing a good professional job in whatever school he or she is operating in. So right now I feel that there's a lot of, say, competition to some extent undermining those peer review processes of accountability, which may harm the quality of those systems.
And then finally, my last point: in addition to the official accountability systems, we have additional things which are important. There's university politics, there are ideas about the quality of research, there are also promotion logics, there are all sorts of other things. If you look at how researchers feel and experience their accountability, then maybe they feel other things are important than necessarily doing top-level research. Right, there's for instance, just an example from myself to end with: I publish and write several sorts of things, and I do my best to make them as good as I can. Quite recently I was asked to write something just from the top of my head in one month, and I did so. So in one month, very quickly, 1,500 words, not a very big thing. I think it probably took me one and a half days or something, but it came out in an outlet which has quite some standing. It's not an academic publication, but it's a publication with a sort of societal impact. And I've been congratulated so much, and people have been re-tweeting it and giving me all of this feedback, saying this was something fantastic. But it only literally took me half a day of thinking and a bit of typing, right? It means that in terms of the quality of my work, this is probably among the lowest things that I've ever done. I think it has some value because it's just some thoughts, and I have some knowledge, so I try to bring some things together. There's not so much wrong with it, but in terms of what I got credit for, in terms of how professional accountability works, people really liked it. For another example, I once published a book. I've published a couple of books, but once, when I published a book, I did a presentation. Hardly anyone came; it was a rather stale affair. Then I was interviewed by the best Dutch national newspaper. People saw the interview, and I got a lot of feedback: oh, fantastic interview, really interesting research.
So by being in the media, I got much more rewarded in terms of sanctioning, or how people were looking at my work; it came through the media. I do not have any media personality, so it's not that; I mean, I do not have that. But you can see that, say, additional mechanisms beyond the formal accountability mechanisms may have a strong impact on what we do, what we are rewarded for, and how we develop. So for instance, from the first example I gave, writing something from the top of my head without any further thought, I might have gotten the impression that that might be the way to go forward as a scholar, because it took just one and a half days and got so many positive comments. I should do that all the time; I could write 300 of those publications each year. I'm not certain that is a good idea for the quality of science. I'm quite certain it is not. So just to conclude: there are always tensions between how people perceive accountability and the systems we build. We need professional forms of accountability; they are suboptimal, I would say; and there are some additional types of rewards which are not formal forms of accountability, which may really affect how individual researchers feel they are accountable. Thank you. Great. Thank you so much. And we'll move on to the next panelist, Rachel Ankeny. Hi, thank you. I just want to start by acknowledging that I'm sitting on the Adelaide Plains, which is the traditional land of the Kaurna people. And by tradition, we always acknowledge that the lands have never been ceded, and I pay my respects to elders past, present and future. I want to thank Samini for inviting me. Much like the other panelists, I didn't know much about meta science as such before. I knew a bit about what some people within the meta science movement were trying to do. I want to position what I'm about to say a bit, because I think this will help contextualize it.
It will complement, hopefully, what some of the others have just spoken about, but it's quite different from it. So I'm a scholar who crosses a number of fields. I do history and philosophy of science, particularly of biology and medicine, and bioethics. I do quite a bit of empirical social science work. And increasingly, I do research on public engagement in science and technology, and I run a research cluster focused on that at my institution. The other point that's probably worth making is that my take on philosophy of science, which is going to be part of my focus and my comments, is very much what's called philosophy of science in practice. And that might sound really obvious, particularly to scientists: science, of course, is practice. But at the time when some colleagues, particularly from the Netherlands and the UK, and I founded a new organization focused on that slant, going back 12 or 13 years now, it was quite innovative, because it was trying to force philosophers to think harder about what goes on in science, and not just to look at science's products or theories, but to really look at the practices that make up science. And in this way it was trying to bridge some of the fields that have already been under discussion, in particular some of the work that goes on in things like sociology of science or science and technology studies and adjacent fields, and history of science, with philosophy of science. And so my comments are very much against that kind of backdrop of thinking that the best approach to thinking reflexively about what goes on in science is really to look at the practices of science. So with that kind of throat-clearing introduction, I'm going to talk about accountability, but in a much more indirect way. I want to think about the moment that we're in right now, and in particular the relationship between science and the public.
I guess the punchline, in some way, is that I think accountability importantly has this component of not just internal accountability, but looking to what the expectations and understandings are in the public. And so if we think about where we're at right now, I think it's not unprecedented, but we're certainly in an interesting historical moment where there's lots more discussion and lots more seeming awareness of science and its methods than ever before. This has obviously also been accompanied by more debate, more disagreement, and lots of misunderstandings or misperceptions. But I think this places scientists, including those who want to think about meta science, in a bit of a dilemma in relation to how science is and should be portrayed publicly. So given the pressing issues that we're facing in society, I think there is a tendency to underplay the fact that science is not a linear process that inevitably proceeds towards truth, perhaps with a capital T, or a singular correct answer. I'm sure that everyone who's part of all of this knows that science is a much more complex set of processes. It relies on a whole range of activities that generally produce the best available evidence at any point in time. But this evidence is often subsequently displaced, refined, or even abandoned. And so I think an important part of accountability, and of portraying the right understanding of science, has to do with recognizing these patterns in science, and in particular the deep heterogeneity across different fields and even within fields within science. I think this heterogeneity is absolutely critical to focus on and critical to make public-facing and transparent. HPS, History and Philosophy of Science, and Science and Technology Studies give us a whole range of accounts of the diverse practices that occur in different fields and the diverse norms that accompany these processes.
And I think it's absolutely critical for scientists to make themselves familiar with this kind of rich literature and not rely on kinds of oversimplified discovery accounts or founder accounts of how science works. It's tempting; they're exciting; they seem to have answers. But it doesn't do justice to the intricacies of science, which are going to be absolutely critical if you're going to try to do any of this meta scientific work. The other point I want to make, relating to this, echoes well with some of what Bart was saying a little while ago: scientists really need to think about the shared features of science that give it credibility, that make it authoritative. There's no doubt that science is in many ways unique and is a particularly robust form of knowledge production for certain kinds of goals. But I think there's a need for more intellectual honesty, if you will, when you're thinking about these things. It's absolutely critical not to deny that science can be co-opted, that it is sometimes wrong, that it's often inconclusive. And I think in this moment right now there's a real hesitance; there's a tendency to be scared of being open about that, because it might get people offside, it might actually interfere with people's abilities to take the best advice, and so on. What I think is important to think about here is that value judgments are critically at the core of the doing of science. And to deny that is at best disingenuous and at worst dangerous, particularly when science is being applied or used to make policy. There's an extensive literature, particularly in the philosophy of science, that emphasizes the important roles of values in scientific practice, even when they're often invisible to practitioners themselves or may become invisible over time.
So current debates, take the vaccine debates, but more generally over what constitutes a health risk, the causes of climate change, and much more, highlight that these are factual questions on the one hand, but they typically involve a range of different types of variables and decisions about trade-offs, and these involve values. Recognizing this sort of fact doesn't make science less scientific, and I'll come back to what it might mean to be scientific, so long as there's transparency about when and how values are entering the judgments. So then, based on this, what do I think is the best way forward for scientists, particularly those who are interested in engaging in meta-scientific considerations, and particularly those who are looking in some way to have this sort of credibility in their activities? I think it's really essential to look at the mechanisms that guide science in its practice, to make them more transparent, but in particular to look at the rationale that underlies them. And so this overlaps quite nicely with the previous two commentaries. I think it's important to look at the mechanisms that underlie science, take peer review, rather than the data that's produced. Those sorts of checks and balances are what the public's interested in. So just to use an example, I think discussions, and not necessarily in this group, I must say, but more broadly, discussions about problems with reproducibility too often deteriorate into debates about falsification of data, sloppy science, us-them blaming, and so on. But there are many other reasons, as many of you are well aware, why failures of reproducibility occur. One of the fields that I've been doing research on for a very long time is the use of experimental organisms in different contexts. For instance, the key organisms used in contemporary biological research tend to be rodents of various sorts, and they're highly standardized.
And this in turn would seem to eliminate most forms of irreproducibility, unless there's really something quite dramatic. But what I want to say is, even if you look at this kind of case, which I think is at one end of the spectrum, part of the issue here is the assumptions that become encoded, that these are the right models to use to answer all types of questions, and tendencies to say that there's only one way to do experimental work. And this is becoming encoded in funding bodies' guidelines, for example. This kind of case shows us clearly that experiments can fail to be reproducible for a range of reasons. The experimental animals are actually variable even when they're standardized, often in ways that are unforeseen or can be relevant to the questions. They're variable amongst themselves. They may be variable in relation to the assumed standards or the questions that are being asked. As with any experiment, even with highly standardized animals and conditions, there are still baselines and metrics, there are still assumptions that get encoded. These are not always made explicit and may not be obvious to those beyond the community doing the research, or even to contrasting communities that are at odds. Experiments are always done within a context, and so on. I think these types of issues are even messier in types of experiments or observational studies where things like organisms are non-standardized. And that goes across lots of forms of biology beyond a narrow sort of band. So I think often our visions of science are about a very narrow band of types of work. And part of what we need to do to be more accountable in our representations of science as a whole is to widen that band of considerations. To truly engage in meta-scientific considerations to improve science, all of this messiness needs to be made much more explicit.
There needs to be much more engagement around how to develop frameworks and processes that allow complexities to be discussed and critiqued in a rigorous manner that allows refinement and improvement, but never gets rid of the complexity. So finally, what might these processes involve? In some way, this is a back-to-basics kind of thing, but I think it's probably appropriate. There are long-standing discussions in philosophy and history of science about what gives science its credibility and how science can be termed objective. I think these discussions around objectivity are really helpful, particularly those that look at the decidedly social features of the scientific processes that lead to knowledge production. For example, the accounts around the social nature of science promulgated by the feminist philosopher Helen Longino. As Longino notes, data can never support one theory or hypothesis to the exclusion of all alternatives. There may be a best fit, but there's never going to be one fit or one perfect fit. You always have background assumptions in the mix. There'll always be assumptions for which there is no evidence once you dredge back through the chain of reasoning. The assumptions on which scientists rely, say the reliability of certain kinds of methods, are a historical product. Scientists and science itself are a historically, geographically and socially situated product, and anything that goes on within it naturally reflects all of this background. This doesn't mean anything goes, and that's one of the reasons I really like Longino's account. She argues that the social nature of science is what gives the knowledge that is produced using these methods a certain status, and that's in fact what we term objectivity, perhaps with a lowercase "o", rather than objective in any sort of grandiose sense. Longino really argues for the need for community norms, and I think these are an important part of what it means to be accountable.
Again, this overlaps with some of what my colleagues have said. There need to be recognized venues for critical interactions, conferences, journals, but even things within labs like brown bags and lab groups and so on; open critique needs to be happening. But along with this, it's not just arguments, it's willingness to change views over time based on critical discourse. Political philosophers might think about this as much more deliberative processes, looking at arguments and the reasons for the arguments, rather than just arguing. There needs to be public accessibility of the standards that regulate these types of discourses, and finally assumptions of equality of intellectual authority, and that's probably the toughest one. There are complications with all these requirements, but I think they help us get on the right track about the interaction between science and the public and the accountability part of the picture. So finally, I just want to note that a critical part of the vision for how science can be done in a manner that's rigorous and allows establishment of reliable knowledge is more involvement of the public. Again, I think in this moment, this kind of proposal is quite frightening. It seems to open up science to being hijacked in various ways. But accountability fundamentally needs to be grounded in making the standards that underlie scientific practices more transparent and open to criticism, both by those working in the relevant fields, by those across fields who may or may not be, you know, in communication but have contrasting or conflicting standards, and importantly by the public, to whom ultimately science is accountable. There's a variety of publics that can be involved in shaping science and the way that it's done. 
This can be done in a variety of ways, such as considering the benefits that might be generated and the risks and harms that might be associated, all the way through to methods from the social sciences that are gaining popularity in the more traditional sciences, such as co-creation of scientific projects. These sorts of considerations can't take us merely in the direction of increasing scientific literacy or giving publics more information through unidirectional communication. Here's where the public understanding of science is really an essential field that needs to be in the mix. Various publics need to be viewed as having their own forms of experience and even expertise. I do a lot of research on people's views on emerging technologies, particularly genetic technologies, and it's clear there are really fundamentally different understandings of what counts as acceptable levels of risk, likely benefits, and so on. Those kinds of considerations need to be actively sought out and used to shape how scientists think about their research programs, so in turn they can be accountable to these diverse publics. One thing I often say to my science colleagues who I work with quite a lot is it probably doesn't make sense to make something or create something that many or most people aren't interested in using or having at this time, or which they have grave ethical concerns about. Now I'm not suggesting the public should be the ultimate arbiter of what science gets pursued. There always needs to be some blue-skies research, knowledge production, and so on. 
We can never ultimately predict what's going to come from these types of research, but it's critical to think about what projects and goals are pursued against the context of what will bring public and social benefit, and I believe this is a critical part of what it means to think meta-scientifically and also what will be necessary to enhance the accountability both of various scientific fields but of the meta-scientific movement itself. Thanks. Thank you so much. There's already so much food for thought there. Before we get to questions, I'll briefly read the comments that Hilda Bastian sent through. She regrets that she can't be here, and I also want to say I regret I forgot to acknowledge that I'm also on land that has not been ceded, and I want to acknowledge the traditional owners. I'm in the place we currently call Sydney, but the traditional owners are the Gadigal people of the Eora Nation, and I encourage everyone to think about how they can pay the rent for the land that they are on. I choose to do so through paytherent.com.au and I encourage you to consider that as well. I'll turn to Hilda's comments next, but I want to encourage everyone, if you have questions, to please go ahead and put them in the Q&A. I have many, so we're all set for the first part of the Q&A, but I really would like to also hear from the audience and get to your questions as well. What I'm going to read now is from Hilda Bastian. She mentioned that this is an early draft of what will eventually be a blog post, so there will be a blog post version of her thoughts on this topic. So here is what she wrote. Meta science isn't simply a field of science. It's a movement too. What's more, it's a movement in part with a moral force, a drive to, quote, improve culture and conduct based on strong beliefs about what's right and why so much that nonconverts or the uninitiated do is wrong. Movements like that enter inherently dangerous territory. 
As playwright Arthur Miller wrote, nothing is as visionary or as blinding as moral indignation. The rise of charismatic visionaries is pretty much guaranteed. That always poses a threat. Scientists can be dazzled by charisma as much as anyone else. They can form echo chambers and develop us-and-them thinking and loyalties. Conflicts of interest arise too as soon as funding, projects and organizations enter the picture. That makes issues like accountability and skeptical inquiry critical, but it also makes them harder. For example, us-and-them thinking and movement camaraderie lead some people to see criticism as friendly fire that's out of line, leading them to keep criticisms to themselves or reflexively jump into defensive attack mode when others criticize, especially if the critics are not on the team. I don't think the meta science movement is doing well enough on this. Although many practice what they preach, and there are so many great exemplars, some quite conspicuously do not. There are several levels we have to consider: how we do our own research, how research is done on the issues we believe in and on our proposals for change, and how we go about advocacy. We focus more on the first of those, how we do our own research, and less on the others, but they all matter. Just because we believe something is good in theory and we're idealistic doesn't mean our idea will have the effects we anticipate. Success of even the best laid plans isn't guaranteed, and anything that is powerful enough to have a positive impact can also have unintended effects. What's more, once you change the environment or other things change, new problems will emerge. Each solution tends to cause new problems down the line. If we want to be sure we make things better, we have to test our ideas and be critical about our own and each other's practices. We are, unfortunately, better at talking about biases and cataloging them than we are at controlling them in ourselves. 
As critical as cognitive de-biasing is, there's far too little consideration of how to do it or research into methods. The meta-science movement tends to focus more on research and analytical skills, which are vital, of course, but so are values, integrity, and cognitive skills, and we don't consider those enough. All the technical skills in the world can be totally undone by unacknowledged conflicts of interest, too. Given that context, what do we need more of? We need a broad approach to risk of bias in meta-science that encompasses our high risk of confirmation bias and ideological thinking and our intellectual and financial conflicts of interest. We need to be self-critical and value independent evaluation of our beloved ideas and practices. We have to take our roles as critical peers seriously indeed. We need to take critics seriously and value them, even when they're antagonists, and we need to get very good at admitting when we're wrong. Okay, so that's the end of Hilda Bastian's comments. So we're going to turn now to the more panel-y part of the panel discussion, and so I'll start with the first question, at least, and we'll see if we get more from the audience. So one question I had for the panel. Actually, I'm going to start with one that I didn't send you all ahead of time, that just came up in several of the talks. And this is one I've grappled with a lot, which is that a lot of the discussion, both within the sciences and from what I know of HPS and STS and fields that study science, is grappling with this issue of how much messiness and error and diversity in approaches is normal. And then when can we tell the difference between that versus something's gone off the rails? This is not a normal amount of error, a normal pattern of error. 
And I think that's something that meta scientists grapple with, especially, but it's relevant to all sciences, about being able to tell the distinction between a good, healthy amount of messiness and diversity and failures and dead ends and so on, versus a sign that maybe our systems and processes aren't good and we're not producing enough successes, too many failures, too many errors and so on. I'm curious if you all have insights from your respective disciplines on that. I think that this is not a quantitative question in the sense of how much of that error or how much of that failure is going to raise red flags at a certain point, but the quality or the type of, well, errors, failures, problems that we see. Because if they all resemble one another, then there is an indication that there is something wrong in this system, because some unknown quality of that system is apparently pushing the practice into the same type of error or failure or problem. And that's regardless of the absolute numbers of what we see. So I think that what we're supposed to talk about here is not how much of it, but distributions, ratios, and the qualitative displays of what we see in terms of problems. And if it's all one thing like p-hacking, then obviously we need to take a good look at that, pay attention to that. But if it is all different and neatly distributed, then it doesn't provide concrete concerns or worries that mean we need to go sort of for systemic change. And we know that there are categories of these things that are overrepresented. And we know that because, at least in part, all of the traditions that have studied science are pointing at them, from the history of science and philosophy of science in the beginning all the way to meta science now pointing at certain issues that cannot be ignored. But none of those offer sort of a quantitative answer that can help us there. 
It's always about the quality, qualities, and characteristics of the errors, failures, potential misconduct or risks or conflicts or whatever, and never about the numbers. So I would agree with all that, but also flip it on its head a little bit. If there's too much consistency, that's almost always a sign there's something wrong. And I think too often we use, as our rule of thumb, what we see in published work, and forget that you're seeing the cleaned-up final version. We all know there's a long history of not publishing negative results and so on and so on. And so the other part of the answer I'd give, I mean, I agree with what Bart was just saying, but I'd go further and also say it depends on what question you're asking and how much you've controlled the system. If you like, I tend to look at the biological and biomedical sciences, and there's a lot of noise in these systems, because it's also a function of the kinds of questions that we're answering, and they're natural systems. And so we should expect noise, if you want to call it that, we should expect variation, because that's how natural systems work. If we're looking at other sorts of systems, perhaps we would expect noise but in different parts of the system than we would otherwise. So I think there's no one metric that's going to get very far. And the devil's really going to be in the details of what the community has agreed are sort of the baselines for the different parts of the experimental setup. And that's where you would start to look, where you're seeing differences, is at how much there is either a tacit or explicit agreement, or whether these variations can be explained through variations in things that have not been made explicit, or rely on some sort of variability that may be invisible because of the way the publications work or the protocols have been set up or whatever else. But certainly in biology, you're going to see a lot of that kind of variation. I'm sure there's kind of a spectrum probably that you could draw out. 
Great, thanks. Yeah, I have more questions in that vein, but I think we should get to actually the question in the Q&A by Brian Nosek, which was one of the questions we had discussed earlier too, which is: what are the special responsibilities, and maybe challenges or risks, of meta science in particular when it comes to accountability and self-skepticism? Do you think that there's anything especially problematic, or a special responsibility, that meta science as a field has compared to other sciences? Yeah, Thomas. Well, perhaps a few thoughts here. One thing would be, if I understand meta science as studying science, or studying fellow scientists, it means that all your colleagues become your respondents or your participants or however you call them. Well, at least in the social sciences, we have pretty clear understandings that you have to be quite careful with how you treat the people who participate in your research. We do all these things. If they participate voluntarily, we make some agreements, but even when they do not participate voluntarily, but you observe them, you do all sorts of things to bring out the knowledge without harming the people. I don't know. I mean, my knowledge of meta science doesn't suffice here, but I would say the same type of principle would also apply here somehow. You want to further our insights. You want to further our knowledge or the science of science, but in principle, we should be careful how we treat the fellow scientists, in a sense, just on an individual level. Additionally, and here I build on the work on accountability by Phil Tetlock, who is probably the psychologist who's done the most work on accountability, and it's quite brilliant work, but he has this theoretical paper on what he calls a social contingency model. And there he explains that if it is relational, we sort of naturally get into some sort of role. So if I'm being held accountable, I drift into what he calls the intuitive politician role. 
So as an intuitive politician, if you start asking critical questions, I will try to appease you, irrespective of whether or not I agree with your statements or questions. Beyond everything else, it's a very natural urge to stay on good terms as the person who is being scrutinized. The other part is the scrutinizer evolves into the role of, say, the intuitive prosecutor. That's the role you take on, and you can see that immediately if you have roles in your own work. Once you act as a head of department, if you have a role like that, or once you are the chair of this meeting, you will start asking different questions, behave differently, because it comes with the role. And that is, for accountability, a real risk here for metascience as a form of, in a sense, holding your fellow colleagues accountable for the quality of the research, because you're almost naturally evolving into this sort of intuitive prosecutor role. So for these two reasons, I said in principle, I mean, I don't know the practice enough, but in principle, there are two real risks. One of them is: do you protect your sources well enough in the attempt to get at the knowledge? Because you need to get at the knowledge, but you would want to protect your sources as much as possible. And the other one is that there is a sort of natural, intuitive human logic of evolving into a prosecutor, where you become fault-seeking, where you are looking, right? I mean, there's the crime fighter thing you see on television shows, right? You become something like that quite naturally, and it sort of creeps up on you. So that is something, I mean, as a community, you should be careful about that, and also perhaps try to find ways to control for that, because that might be harmful. That goes with something I was going to say. I think, I mean, one of the biggest things is that I think this could very quickly tumble into policing, you know, and policing each other. 
And I don't take it that that's what's at the core of wanting to improve science. Wanting to improve science needs to engage, I think, with some of these critical, reflective kinds of behaviors that I was talking about in my talk, which is really difficult. That's the hard part, right, is to actually develop them. And I would echo some of what I really liked about Hilda's comments, which was not just talking about it but doing it, and thinking about where in the everyday practices of science, and perhaps beyond the everyday practices, you know, we need new kinds of venues or new kinds of approaches, to be able to develop these kinds of critiques that then are going to result in improving the doing of science. It's much harder to do that than it is to police each other. And that's where I think, you know, the meta science community needs to really be a community and come up with shared values, and also come up with some of these shared mechanisms that I think Hilda's, you know, pointing to in her comments, in order to really start to advance beyond kind of pointing out problems, if you will. If I may add one thing to that, that has to do with the diversity of perspectives, because there's not one way to do science. There are many, and also many new and evolving ways of doing research across disciplines and schools of thought within disciplines. And that whole sort of cartography of ways of doing science is not fully represented in the meta science movement. Meta science is not sort of, well, I may have thought that all the way in the beginning, imposing one idea of what science is supposed to be; that's not the case. There is room for multiple perspectives, for plurality, and that is of course a good thing. But it's not the same display of diversity as science itself. 
So there is still some form of restriction being imposed, by the ideas of what science and scholarship and research is supposed to be, through the meta science community on a community that is more diverse than itself. And I would identify that as a potential risk: first, awareness of that, and then, how do you deal with that? How do you study, or how do you embody or at least attempt to embody that diversity in yourself as well, or at least do justice to the diversity that you try to map and display? Great, thanks so much for your thoughts on that. Another question I had that was kind of related to this. So I think, you know, this was echoed in all of your comments, and maybe the main thrust of Hilda's comments is this idea that we need to really hold ourselves to a high standard and be very self-skeptical and not just talk the talk, but actually walk the walk. And so one question I had is that meta science, like any other field that's struggling for, you know, recognition and a place at the table and so on, funding, hiring lines, etc., has to find this balance between self-skepticism and humility, while being competitive for funding and seats at the table and so on. And I think among many researchers, there's a fear that sometimes expressions of humility can be used against them in their field, or as an argument for not taking them seriously, especially for maybe a fledgling field. So what recommendations would you have for meta science as it's trying to do both, right, to remain self-skeptical and keep itself in check, but also vie for funding, vie for institutionalization in various ways? Well, I mean, a bit broader thought, I'm not certain that's fully the response to your question, but my question is, would meta science want to be like a specific field alongside all of the other disciplines, or wouldn't it be better perhaps to integrate into all fields, right? Because what you do is incredibly important. 
And it would be a little bit, I mean, a meta science community with meta scientists writing for other meta scientists, I don't think that is the idea of it at all. So I don't know, substantively, but also perhaps strategically, it might be much better to relate to and infiltrate, so to speak (I'm not certain that's the right word), all the other fields and be part of research projects there, as part of, say, quality control and quality improvement also. And that might also perhaps be the second thing. I was googling a little bit on it. Sometimes there is an impression of negativity bias in meta science. I don't know if that is correct; I have no opportunity to see that. But in principle, you contribute to the knowledge of what is good science, and that's a very positive thing, which we all should adhere to. So in that sense, I mean, that would be my intuition, but it is not a full answer to your question, but it is at least related to it. I think there can be humility, or I would call it more sort of realism, about science and its limits and the complexities of science, a lot of what I talked about, while still being robust about the usefulness of the critique and what's being done, if you see what I mean. And I think that's where, you know, to admit that there are problems in science is not a failure of humility, it's just to be realistic. It's to be accurate. And I think that in turn gives anybody making those kinds of critiques, call it meta science or whatever you like, more credibility, because it's actually more true to what's actually going on. Look, I think, you know, anybody Australian, I mean, so many might know this expression, we call it rent-seeking; anyone who's rent-seeking is never going to get a place at the table if they're seen to be rent-seeking. The value of whatever it is, whether or not it's a field, comes from showing that it actually has good outcomes. 
And that's where you're going to get traction. Yeah, I just wanted to say, as I did all the way in the beginning, that humility is not a weakness, and displaying it isn't either. And if your claim is that we should expect less of science, but you can show so very rigorously, then that is not just a display of humility, but indeed a contribution to how we all see that big institution and what type of power, credit, authority, status, legitimacy we can responsibly ascribe to it, instead of just following along. Great, thanks so much. We have another question from the audience, this one from Severola Costa, and I'm going to paraphrase a bit here, but her question has to do with the fact that meta science, at least as some define it, has this unique characteristic that distinguishes it from other science-of-science fields, that is, the component of activism or advocacy, in addition to doing research on research. And she asks, does this create unique challenges? And I'm curious about your perspectives. Surely, it's not the only field that has this component. And so what are some unique challenges that face fields where there's both a research component, but also an activism or advocacy component? So I mean, the clearest example that's right in this domain is traditional science and technology studies, which, you know, was heavily oriented to picking problems that were thought to be important for society, you know, to solve, and things that were in the space of being conflicted; you know, controversy studies was another way that a lot of STS operated. And so the activism tendency was a strong part of STS. And I would say, you know, the short history is some of the best work was done and some of the worst work was done in that vein. Right. Bart's giggling. But yeah, I think, I mean, some of the most classic work in some way is because you can take your point of view. 
And really, as an activist, or at least as someone who's trying to be an advocate for making change, that driving the study gives it a unique kind of perspective. So for example, that was super useful on things about lay expertise. I would say where it runs afoul is where the scholar as advocate, you know, isn't really weighing up the evidence, perhaps as well as he or she might, because of their advocacy position. And so some of the dangers and the challenges are, I think, in this particular zone, almost the opposite of STS. In STS, it was being too critical of science. And here it's being, what we would say, scientistic: being overly apologist for science, because of the desire to give it greater credibility. I actually think credibility and transparency are in some way perhaps in tension at times. But I also think, you know, being a cheerleader for science is often highly problematic and results in the research on research practices not being disengaged enough, not being critical enough, not being able to step outside and see that broader context that I think is really essential to really being able to position and understand science, you know, with all its bumps and warts. And so here I think, I mean, the challenge could be addressed by having a lot more attention to point of view and perspective, you know, using philosophy and other ways of thinking to look at how point of view shapes one's epistemology and way of knowing. And there's a lot of literature around that. And that would also help some of the things we were talking about earlier, about, say, increasing diversity and so on, to realize what voices aren't being heard, and to always, always caution against simply, yeah, being an apologist or cheerleader for science. Yeah. Oh, sorry, Thomas, you raised your finger first. You started talking first. Oh, yeah. Following up on that, what Rachel referred to, the origins of STS, in a way, there have always been multiple STSs. 
Even the abbreviation has meant multiple things in parallel simultaneously. And so there's science and technology studies, but there was also science, technology and society, which is the same abbreviation, but it actually refers to a different community with Marxist roots, really trying as an advocacy group, also like meta science as an advocacy group, but with a different agenda. And obviously, asking different questions, interrogating science and technology differently, with different ulterior motives and goals. And I think, on that sort of higher level, what sort of research agenda is set by the political movement that you are in heavily shapes, ultimately, where you might even end up. And so, for instance, science, technology and society movements started interrogating the roles of companies and for-profit science and zoomed in on those issues in depth, often with high-quality scholarship, as Rachel said, also sometimes not so much. But at least it sort of shaped an area in which they were active, but less so elsewhere. And that is a big risk also for meta science, informed by its agenda and also by the tools it has, that it focuses on, yeah, for lack of a better word, sort of policing individual studies, because it can do that really well, and neglects the bigger questions on the goals and relationships that scientific communities have with potential publics and audiences, because it is, both as a movement, but also informed by the tools that are currently dominant in the community, not so well equipped to do that. And the obvious answer to sort of remedy that is to reach out to those who can and who have been doing that for a long time already. It's good that you went first, because this was a much better way to follow up on Rachel's first points. I think mine would be a little bit or slightly different, in the sense that what I would see as a risk, I would focus on the second part, the activism, trying to attain better research overall. 
In my terms, I would say that makes the meta scientists to some extent the account holders of all other scholars, as the account givers who somehow have to relate to what meta science is trying to, well, police or enforce or show or whatever word you use, what would be good science. And I think there is a particular risk there in terms of, will you be able to actually, well, or put the other way around, would individual scholars actually feel accountable in, say, a positive way towards meta science as an activist group? From what I know from very different fields, it would be imperative that at least it is, say, in a sense predictable. It would be imperative that the legitimacy of meta science as an activist community would also be accepted by those scholars who are held to account, basically every one of us, even the meta scientists themselves. I would say that would be the meta-meta science, but that's more difficult even. But would it be seen as legitimate? Would, say, the accountability in that sense be expectable? And would it also be acceptable in terms of the standards being set? And there, with methodological diversity, it's very difficult to get to that point. So I think that, again, as in a sense a field of its own, holding in a sense every one of us accountable, that would be risky in terms of being seen as legitimate and authoritative by everyone else. You might get into sort of the thing you get when you ask a sports trainer afterwards, as a journalist, whether or why he or she made a specific decision. And the sports trainer will say, well, you don't know anything about that, because you're a journalist and I'm the coach here, I know these things, right? You get something like that. So being legitimate, if you are a field of your own, might be difficult. Long answer. Yeah, great. Those are all really interesting different angles on this question. That's great to hear all your views. 
I had a follow-up question to Rachel's point, which came up both in your answer here and in your opening comments, about the importance of acknowledging when and how values influence us. I think this is especially relevant for scientists that also have an advocacy agenda. And I'm curious what our expectations should be about how self-aware and self-disclosing scientists are about this. Do they know when their values are influencing them, or when their activist goals are perhaps introducing bias and so on? And I suspect that part of the answer is the ideas of social epistemology and Helen Longino's work about critical discourse, where we help identify blind spots in each other's work even though it might be hard to see our own. And I'm curious, yeah, just to hear you talk a little bit more for those of us, at least me, who aren't super familiar. I've read some of the work and I still grapple a lot with what lies beyond positionality statements, or things I would consider self-reports, in psychological jargon. So how do we get beyond the limits of how much people can actually self-report on the influence of their own values and potential biases and blind spots? So I think the other thing is that it's not just individual values. I think often scientists know about that. I mean, they're aware. You know, you pick certain topics to research because you often have a personal interest in them or some history or some, you know, story behind it. You probably could talk a bit about it if you were asked. But part of the point of what Hilda's saying is that unless you're asked, it's not front of mind. And so I'm not sort of suggesting, and I don't think anytime soon we're going to see, you know, every single article prefaced with a reflexive statement the way you would in an anthropology journal. But that thinking at least needs to be in the mix. What I actually think is even more important, though, are the communal values that go unexplored and unexposed. 
The things that you all agree on, where you don't even really realize that you've smuggled value judgments in. Now, we often do see this when we look back historically and say, how could they have ever thought that? The favorite example for people in the history of medicine is: how could they have ever thought that drapetomania was a disease? It's just completely crazy. Drapetomania was slaves running away; the idea was that they had this mania and really felt they had to run away, and that this was obviously something wrong in their psychology. Because when there were slaves, that made sense to people. And we think that's absurd now. But what are our blind spots now, in how we define things, how we understand things, what we study, what we don't study? I think the absences are often as important as the presences when it comes to unearthing values. And where I think the more interesting thing lies is in the communal values. So that's where it's super helpful to get people who aren't part of your field, or who come from an opposing point of view. Even if they are in your field, maybe they come from a lab that deals with the categories in a very different way, or whatever else. Or giving graduate students and postdocs the freedom to actually ask the stupid questions about why the field works the way it does. Or, even more extreme, the public or publics, and co-creation and so on, some of the things Hilda has done, bringing in the voices of patients, patient groups, and others who help you unearth some of those values that otherwise just slip by, because it's how you've always done things. You might have thought about them when you were first trained, but you certainly don't think about them anymore. For me, as a philosopher of science, the concepts, I often think, are a really good way in. 
And thinking about why you use the concepts, the categories, the organizational frameworks that you do is as important as thinking about the practices and the methods; they're all interrelated, obviously. Does that help, Simone? Yeah, no, that's really helpful. I think reflexivity and positionality statements have a role, but in the conversations I'm a part of, it often only goes that far. So it's really helpful to think about the deeper layers that are a lot harder to uncover than the values and biases we're aware of. In a project we initiated a couple of years ago, we work with our students as they are socialized into laboratories, when they first do their internships and are present in a research community for a longer stretch of time. They're essentially being socialized into that community: its norms and values are being introduced to them, and that's actually the moment those norms and values are visible to these students. Afterwards, they're no longer visible, because they've been internalized, and then you never talk about them again. So we ask them during this internship to document these encounters with norms and values that are new and unknown to them in a little diary. And at the end of their internship, we sit together and talk about all these norms and all these values. And by the very end, they already don't fully recognize what they wrote at the beginning; in a period of just six months, things can already become invisible to you. And those are all the communal types of norms and values that Rachel referred to. It can happen that quickly. And if you've been in a field for 20 or 30 years, or even more, you can rest assured that most of it is completely invisible to you. And you need help to find it. Not professional help, exactly; we're playing around with some techniques. 
We do that with students, Bart, but I'd be interested whether you've used techniques to try to get practicing scientists to do that kind of reflexive, reflective, whatever we want to call it, work. I think it's much more difficult. So we're starting to use diarizing techniques, and trying to use prompts that come from left field, if you will, to try to get back to that rethinking of not just values but, more generally, the assumptions in a field. Have you ever tried to do something like that with practicing scientists? Yeah, not in a similar style, because ours is also heavily institutionalized as part of a curriculum. So we have hundreds of people doing it, which is really interesting to see and observe, and also a scale that you could never reach in a research setting. But no, I would love to; it would be a great metascience project in some way or another. Yeah, I can report back in another year or two. Great. Yeah, that would be great to hear how that goes. So we have another question from the audience. I'll read that one out, from Rafael Rocha: Science is increasingly driven by the demands of the capital accumulation process and is transformed into an industry to meet those demands, but it is relatively less able to deal with the complex global problems that arise with the development of capitalism, which brings out the effects of the crisis of capital. How can this framework impact the quality of science? Can we fix it without fixing the problems that underlie it? I don't want to tackle that one. 
It's huge, but my intuition is to say that one of the ways to at least get into it is to look at diversity across different kinds of settings, at the ways people use science in diverse contexts, and to not always take as the model the quite canalized, quite predictable science that occurs in highly developed countries with this kind of pattern, but to look at the ways science is used on the ground in other, diverse settings, and at how there can be very high-quality science that isn't driven by industry or aligned with industry needs, and how the practice of that kind of science might look quite different, particularly on the ground. I know a lot of people look at research in countries with less scientific infrastructure and see how those practices can occur and still achieve high-quality outcomes, even if it doesn't look like the kind of thing we would consider the most cutting-edge science in our own country. As for how you can fix the system, I mean, that's just the perennial question about anything. You're working on this little corner, but the system's broken. I don't think there's anything specific to this domain that gets at that, but I do think that if you're truly going to be richly metascientific, you have to question whose science, and for whom, and in what way, and not always assume the norms and patterns that go with the most expensive equipment, for example. Labs that might not have that equipment might be doing incredibly interesting work but not getting published, because they're not using the most up-to-date instruments and so on, yet it's important work for agriculture in their area, or health in their area, or the real on-the-ground problems. So yeah, I guess it almost goes back to some of the earlier questions: to truly be engaging, you're going to need to put that into the mix too. Great. 
Yeah, another question I had, getting back to some of the earlier themes about the unique challenges of metascience. I think one of those challenges, maybe not specific to metascience but very salient to it, is that our work can easily be co-opted by people or groups with an anti-science agenda, or an agenda to undermine science or spread misinformation. And so I'm curious what responsibilities you think metascientists have, or what recommendations you have for what metascientists should do to avoid having their work used in the service of misinformation or anti-science agendas. Well, there's a whole... oh sorry, I'm doing it again, but this time, Thomas, you get to go first. All right, thanks. My comment is not very big here, but I think this is always a problem, right? It's a potential problem: you produce research, and people will use it. In a sense, that's actually what you want, but you can't control what happens to it. I would say, to some extent, this type of use might sometimes still be better than, for instance, what rocket scientists were doing in the 1940s, working on the bomb, right? So in that sense, this is just part of the deal. I used to work for an advisory body to the Dutch government. We produced advice and reports, and they were always asked for. And after a little while, I found out that we were not asked a question because they wanted to know something; we were asked because they wanted to do something. You are asked in order to legitimize something, rather than to answer an open-ended question. So over time, and I've been doing work for and with government since then, I try to be self-conscious about how my work, or our work, can be used, and try to at least cut off some of the uses that I find inadvisable. So sometimes there might be interpretations of your work that you believe are problematic, for whatever reason. 
In my case, it's often about government policy. People might take conclusions from the research that you don't think are supported. For instance, if you do not find something, that does not mean that the policy needs to be abolished, or the other way around. So I'm always pretty careful to make these disclaimers. People tend not to like that, but it tends to be pretty effective: simply saying, well, this does not mean whatever it is. So that's one thing. It's the tragedy of this life: we do research, and people will use it for purposes that we don't like. It's a pity, but as researchers we can try to at least cut off some of those inroads, and perhaps, finally, be a little bit critical about whom we cooperate with. That actually goes very much for governments too; it's not particular to these anti-knowledge coalitions or whatever. Well, sadly, those aren't necessarily mutually exclusive. Go ahead, Mark. Yeah, I just wanted to point to the experience of other fields studying science with this exact problem. Quite recently, well, not necessarily quite recently, over the last decades, we've seen lots of appeals to what has been grouped under postmodern ideas about science, used as claims to deny the value, legitimacy, worth, or explanatory power of science in service of certain political agendas. And that last bit is important: it's always in service of a very often quite overt political agenda. One of the minimum things to do is to tease apart whatever claims are drawn about the quality or legitimacy of research, as produced in academia, in HPS or in STS or in metascience, from the political agenda that is using them. It's not always easy to tease them apart, because they tend to be interwoven across multiple levels. But trying to tease them apart at least shows who is trying to co-opt claims that might not be particularly suited to being used that way. 
And often it also helps to demonstrate that the same academic claims, the same metascientific or sociology-of-science claims, can also be used to bolster the credibility of research. If that is the case, and you can show both angles, then that again helps in teasing apart the political agenda, which is possibly conflating paradoxical claims about the value of research, from whatever it is that we are trying to do together in our respective communities, and also as a much larger group: the group that concerns itself with studying, understanding, helping, and promoting the quality of science. Great, thank you. In the last minute or two we have left, I want to leave it open for any of you. If you have any final remarks, maybe 30 seconds or less, any take-home messages for the audience. Well, maybe this: we started out with accountability and then drifted slightly away from it through the session, though never fully, never completely. But I think an important key message is that accountability in itself, including all sorts of procedures that deal with and facilitate accountability, is not necessarily something to be afraid of, or to try to resist, or to shape to your own desires. Because whether you're a metascientist, a philosopher, a historian, a sociologist, or in public administration, there is a very legitimate reason to give account of what you do and also to be held accountable. Ideally, we would find a way to do that constructively, together, in a way that actually adds value to all the things we do and prevents it from eroding the content of what we're trying to achieve. I guess I want to say something maybe a bit provocative: I think you really need to think about whether you need a field. I mean, you think you already have a field, but there are so many fields that think they're doing these things. And what I think is really, really unique here is that, in effect, this is about doing good science. 
So why does it need to be metascience? You're going to have metascience, then meta-metascience; it just gets self-reaffirming and more esoteric. I mean, I think at the heart of this, the key is doing good science. And that should be embedded in the ways, like, I loved the example Bart gave about what they do with the students. These are the sorts of things that this group could really try to drive, not as an add-on but as an essential part of what it means to do science, and to train people to do science. So that's not quite what this panel was about, but I think we ended up getting there in some way through some of the suggestions we were giving. No, it's definitely in the spirit of the panel. Definitely great food for thought. And Thomas, very quickly. Very quickly. I would be interested to think, I mean, we've also talked a little about the role of citizens in metascience and any sort of scrutiny, and I'd be a little concerned about the negativity bias you could easily get. But there are great examples, like the national bird-counting day, and some of my colleagues are now organizing a panel of success stories about governments, because actually a lot of policies work quite well. If you look at the public discussion, it seems everything is only problematic, but they engage in a kind of public event around the fact that some things sometimes really work out quite well. That is interesting. I can imagine something like this, some public engagement, also understanding great practices, fantastic practices of research, might be relevant both for the legitimacy of the field, whether or not it is a field being another matter, and for legitimacy more broadly, period. Thank you, Mary. I don't want to take time away from the next session. So thank you so much to the panelists, and to Hilda Bastian, who couldn't be here. I really appreciate all of your thoughts and comments. 
This has been a really great session. Thank you so much. Thanks for sharing.