So good afternoon, everyone. Welcome to this webinar on the use of AI in scientific publishing, which is a very hot topic at the moment, so I'm sure we will have a very interesting session today. My name is Eduardo Carlos Alves; I'm EGU's editorial manager. Our speaker today is Dr. Sam Illingworth, who is an associate professor at Edinburgh Napier University, where his research involves using poetry, games and generative AI to help develop dialogues between scientists and non-scientists and to improve staff and student belonging in higher education. Sam is also executive editor of Geoscience Communication, which is one of the EGU journals. Before I hand over to Sam, I just want to remind you to follow our code of conduct; we will send the link in the chat. I also want to remind you that this session is being recorded and will be made available on EGU's YouTube channel in about a week's time. Please send your questions using the Q&A box and we will make sure to answer at least some of them after the presentation. Welcome, Sam, and I would like to invite you to give your presentation.

Thank you very much, Eduardo, for that invitation, and thanks everyone for coming along today. As Eduardo pointed out, please do feel free to put your questions into the question and answer function and we'll pick them up at the end. I'm probably going to speak for about 30 minutes, something like that, and I want to make sure there's lots of time for questions and answers, as this is a hot topic, but it's also rapidly evolving as well. Just to say as well, if anyone wants to follow up with me, my email address is on the slide. So just to start with: Eduardo gave me a very nice introduction there, but a little bit about my own positionality, in terms of where I sit when commenting on this. I'm an associate professor at Edinburgh Napier University.
My work and research involve using poetry and games, and also generative AI, as a way of developing dialogues between different communities (scientists and non-scientists), and then also, at a more meta level, in terms of staff and student belonging within higher education institutes. I'm also the chief executive editor of Geoscience Communication, which is one of the EGU journals, and I am also the founder and editor-in-chief of Consilience, which is the world's first science and poetry journal. So I've got a lot of experience in terms of editorial work and also scientific and creative thought, and I'm obviously involved with EGU as well. This webinar really is an introduction to some of the challenges, opportunities and developments of AI, both in scientific publishing and in communications more generally. It's quite text-heavy, but hopefully I'll be able to present it in an engaging way. And, like I say, there's a lot of space at the end for questions, because it is a rapidly evolving field. Hopefully by the end of this webinar you'll have a better idea as to the challenges and opportunities that AI presents to science, scientific publishing and science communication, and a little bit of a better understanding of EGU's official stance on this. I've got some links in there that we'll also share in the chat. So I just want to start off by talking about impacts on scientific process and governance. One of the biggest things we need to worry about is plagiarism. As I've written there, there's a heightened risk of plagiarism, with AI's ability to synthesize and rephrase existing content necessitating advanced plagiarism detection and attribution verification in scientific communications. What do I mean by this? Well, we all know that whenever we submit our work to a journal, or even when we're doing science in the first place, we shouldn't plagiarize.
We should cite and we should reference and we should build on the work of others, but we should do so in a voice that is original and that is ours. Now, from a scientific publishing point of view, when we come to look at work that is submitted to a journal, we tend to get a similarity report that tells us the likelihood that something has been plagiarized. Obviously there's a degree of interpretation in that. And one of the issues that we have with generative AI, such as ChatGPT, such as Claude, etc., is that we can't always tell. Now, with written words it's sometimes a little bit easier to tell when something's been plagiarized or not, but with created content, especially images or figures, it's very difficult, if not impossible, to tell where generative AI has used somebody's work without giving it appropriate credit. So basically we need to be very, very careful in understanding (and I'll come on to this in a little bit) how these models are using work, and the extent to which we can map, account for, or allow plagiarism to take place. Also to consider, and this is again in terms of broad scientific process and governance, is implicit bias: AI systems may harbor implicit biases from training data, which can skew the representation of research findings and influence citation practices. And what do I mean by this? Well, a lot of generative AI is written and developed by a not entirely diverse group of people, predominantly in the West, and a lot of generative AI is trained on the internet, which again is predominantly Western and tends to be very biased towards certain communities, which aren't diverse and representative of global narratives. So if we're interested in diversifying science and scientific research, which is what we're trying to do in the geosciences, one of the ways that we can do this is to make sure that we're looking to authors that come from all over the world, not just from specific countries and regions, so that they are truly diverse.
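As a toy illustration of what the similarity reports mentioned above are doing under the hood (the actual tools journals use are proprietary and far more sophisticated; everything below is an assumption-laden sketch), many text-matching approaches boil down to comparing overlapping word fragments between a submission and a source:

```python
def ngrams(text, n=3):
    """Return the set of lowercase word n-grams of a text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(submission, source, n=3):
    """Jaccard overlap of word trigrams: 0.0 (no shared phrasing) to 1.0 (identical)."""
    a, b = ngrams(submission, n), ngrams(source, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

original = "the rapid warming of the arctic is amplified by sea ice loss"
copied = "the rapid warming of the arctic is amplified by sea ice loss"
rephrased = "arctic temperatures rise quickly because reflective sea ice disappears"

print(similarity(copied, original))     # identical text scores 1.0
print(similarity(rephrased, original))  # fully rephrased text shares no trigrams, scoring 0.0
```

Note how the fully rephrased version scores zero: this is exactly why content that generative AI has synthesized and rephrased can sail past fragment-matching similarity checks even when the underlying ideas are taken without credit.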
However, one of the problems with many of the large language models that make up generative AI is that they have biases in their training data, many of which are somewhat opaque. That means that some of these biases come through. So we need to be very, very careful, when we're using these models, that they don't continue to propagate the lack of diversity that has stymied and stifled scientific development in recent years. There also needs to be governance in data sharing. As I've written here, the role of AI in scientific publishing calls for a robust governance framework to ensure the responsible sharing and use of data. These frameworks just don't exist at the moment. So we need to better understand how we can share data, how data is being used by these models, but also how data is being used by us as scientists. To what extent should or shouldn't we be uploading other people's research into large language models, for example? Many of us use things like ChatGPT or ChatPDF as ways of getting summaries of research, but to what extent is that actually using and sharing data for which there is no governance? This is again something that we need to really think about. And, really importantly, there is a demand for transparent AI models. At the moment, most if not all of the large language models and generative AI that people use are opaque; they're black boxes. We don't really know what goes into them, and we don't really know how they're used. So what we need to do, in my opinion and that of many others, is to work with a lot of these tech companies to better understand exactly how they're created, so that there's clear transparency and accountability as well. And then, building on that, ethical standards for AI use. There's been a little bit of work and research on this, but there need to be much clearer frameworks of ethical guidelines for AI in scientific communication, scientific publishing and science more generally.
We're not talking about legislation that is, you know, overtly punishing or tells people they have to do certain things, but rather thinking about flexible frameworks that really consider how we can use AI in a way that is ethical, and maybe also effective as well. So those are general impacts on scientific processes and governance that generative AI and large language models are having, and that we need to think about. But what about the impacts that they're having on practice? In terms of the positive impacts, it's really important that we don't get caught up in the hyperbole that AI is going to remove all jobs and destroy the world order; it's just not going to work like that. And there are many, many positive impacts that it can have. One of them is enhanced language editing. As I've written here, AI can assist in refining the language of scientific manuscripts, making them more comprehensible and helping non-native English speakers to communicate their research more effectively. So not only can we, as native English speakers, use AI to make our manuscripts and work more accessible to non-native English speakers; if we are non-native English speakers, we can also use generative AI to help us in that process. This is something that's really beneficial not just for the authors and the readers, but also for the editors and the reviewers and the publishers as well. Again, for many reasons which we might not agree with, English tends to be the lingua franca of the scientific world, and for people that don't have access to language training, or to people that are able to provide that copyediting facility, which can sometimes be quite expensive, AI is potentially a very, very inclusive way of accessing that.
Improved accessibility: AI technologies can make scientific content more accessible to a broader audience, including those with disabilities. One of the ways it can do this is by providing automated summarization and easy-to-understand explanations, and it can also really help to frame research in a way that's maybe better understood by different audiences and different publics. So one thing you might want to do that's really interesting is working with, let's say, ChatGPT: giving it the abstract for a research paper and asking it to summarize this work for a specific audience. For example: please could you rewrite this to communicate it to a group of policymakers in Thailand, or please could you rewrite it to communicate it to a group of five-year-olds in Lagos. It's a really great way of being able to do that, and it's cost-effective and also accessible. For early career scientists, I think it's really, really helpful here. Thinking about my own experiences, sadly a long time ago now, as an early career scientist it can be overwhelming when you first publish a manuscript: what do I need to do, what are the processes, how do I write a letter to an editor, all of these things. It can actually really help us with the logistics of that process. Now, I'm not saying we should use it to write our papers; in fact, I'm very, very categorically saying that we should not be using it to write our research papers. But what we can do is use it to help us with some of those additional tasks: for example, spell checking, grammar checking, drafting a letter to an editor, or even just asking it questions on a personal level. I use ChatGPT a lot to ask questions and to interrogate it in terms of what the publishing landscape looks like and what kind of steps I might expect in this journey. So it's a really good way to help to provide support for early career scientists. It can also help to streamline publishing logistics.
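The audience-tailored summarization described above can be scripted rather than done by hand in a chat window. This is a minimal sketch only: the `openai` package, the `gpt-4o-mini` model name, and the example abstract are all assumptions, not anything prescribed in the talk. The important part is building a prompt that names the audience explicitly:

```python
def audience_prompt(abstract: str, audience: str) -> str:
    """Build a prompt asking a chat model to rewrite an abstract for a given audience."""
    return (
        "Rewrite the following research abstract so that it can be understood by "
        f"{audience}. Keep the science accurate and do not invent results.\n\n"
        f"Abstract: {abstract}"
    )

abstract = "We quantify Arctic amplification using 40 years of reanalysis data."
prompt = audience_prompt(abstract, "a group of policymakers in Thailand")

# Sending the prompt would then be a single API call (requires an API key;
# model name is an assumption for this sketch):
# from openai import OpenAI
# client = OpenAI()
# reply = client.chat.completions.create(
#     model="gpt-4o-mini",
#     messages=[{"role": "user", "content": prompt}],
# )
# print(reply.choices[0].message.content)
```

As the talk stresses throughout, whatever comes back is a draft for the author to check, not a finished summary to be pasted in unread.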
So, again, we would never want to do this just automatically with no checks, but sometimes it can help with tracking and formatting compliance checks as well, potentially leading to a more efficient publication process. Some negative impacts, though. Proliferation of misinformation: AI could inadvertently facilitate the spread of misinformation if the algorithms generating or curating content are not checked for accuracy. We've seen this already quite substantially; we only really need to look at what's happening with some of the misuse of AI with regard to the elections happening in the United States this year. There's a real danger that AI is being used on certain platforms to misrepresent science: we can see it being used by some climate change deniers, we can see it being used by some anti-vaxxers, and it's really important that we think about how AI is being used. Again, for us as scientists and as researchers: yes, use AI, but don't just take it as read. It's a tool, and whatever it creates and whatever it presents, we should be using our critical thinking skills and our skills as scientists to interrogate that and not just automatically take it at face value. Copyright and plagiarism issues: I touched on this earlier. It's really difficult, from not just an ethical standpoint but from a legal standpoint as well, especially with the creation of figures and more creative elements, to understand exactly what has been plagiarized and what hasn't. In terms of paraphrasing and in terms of use, it's really important that we think about the limitations of AI and, again, that we don't just automatically go to it, but rather understand and try to break down, as I'll come to in a bit more detail, what those models look like and how they're using the data. Skill erosion among scientists: I think this is really important.
I touched earlier on one of the benefits being that it can help early career scientists, but if we're only using AI to do everything, then we're actually unlearning, or maybe never learning, how to do some of those skills ourselves. So, for example, we could use AI to do a systematic literature review for us, but actually doing a systematic literature review uses an incredible number of useful skills and helps us to develop those skills in the process. So again, it's about using AI as a tool, not as a way of replacing everything that we've got. We need to think about that very carefully: in terms of skill erosion among scientists, use it, but don't use it as a replacement for everything else. Lack of transparency in AI decision making: again, I've talked about this, but the black-box nature of some, if not most, AI systems can obscure the rationale behind decisions, eroding trust in scientific findings. As well as the fact that we don't really know what's going on, and the fact that it can proliferate those misconceptions and also those biases, there's the danger that if we overly use AI for our research, then people will ask: well, can we trust that? What is that bot or that large language model actually doing? So again, there's a framing issue, an optics issue, there that we need to be aware of. Bias in scientific narratives: I've put it quite plainly there that if AI is trained on data reflecting white, male, Western perspectives (and I say this as a white Western man), could it actually reinforce these biases in scientific publishing, marginalizing other groups and perspectives? We do spend a lot of time trying to help diversify the geosciences and to platform the voices of marginalized communities, but actually, is AI compatible with that, and if not, then what can we do to make it so? So where are the gaps and potential risks for AI in scientific publishing and communications?
So, there really isn't a comprehensive ethics framework for AI applications, and it's something that we need to develop, both in terms of scientific publishing and communication, but also for science more generally. As well as being a potential risk, this is a potential opportunity: I think creating such a framework is a large piece of work, but it would really help us to better understand what those processes are and to make links between different communities as well. AI's role in generating and disseminating content raises complex issues, less so with text and much more so, I think, in terms of images and figures, and building on people's work without giving it fair attribution. It may be that AI is paraphrasing people's work without giving it credit, which is again something we need to be very careful of. Widening the digital divide: this is something that I don't think is talked about enough. The digital divide is this idea that some people have access to high-speed internet, lots of software, lots of hardware, and other people don't, and those people are, to some extent or to a large extent, at a huge disadvantage. We saw this exacerbated during the COVID pandemic, where we saw, for example, people working from home and people being schooled at home; in the UK at least there was a huge issue with many school children from more impoverished areas sharing something like one device between five or six members of the family, which made it very, very difficult to get any work done. And we know that with a lot of these large language models and generative AI tools there's a paid version and then there's a free version. But more than that, there's having high-speed internet and there's not having high-speed internet; there's having access to the latest software and there's not having access to the latest software.
So we need to make sure that we don't exacerbate this digital divide, that we enable those people who don't have access to the premium paid models or to high-speed internet not to be left behind, and that rather than widening it, we choose to close that digital divide. And then there's what I call the Skynet effect. There's so much hyperbole in the mass media around, you know, Skynet from the Terminator series: that generative AI is going to create this singularity, this AI that's going to take control over humanity. It's not, or certainly not in the foreseeable future. But what it genuinely might do is, as I've said, stifle efforts to promote diversity in scientific communication, scientific publishing and science more generally by reinforcing existing biases. Whatever you read about generative AI in the mass media and on social media, I think those last two points that I've talked about, the digital divide and the diversification of science, are things that don't really get enough talk time. And these are the issues that we need to think about, and they are going to proliferate beyond scientific publishing if we're not careful about it. So where do we need to engage, as scientists, as researchers, and as authors, reviewers, editors and publishers of scientific manuscripts? Well, embedding ethics in publication: we need to think about integrating ethical considerations into the core of scientific publishing, ensuring that AI applications align with the principles of responsible research and communication. So, you know, as I'll talk about in a second, EGU has got its own framework, but we also need to think about what this looks like; there are some interesting studies going on at the moment, but I think we need more work here. Again, we need to think beyond our own positionality, beyond our own privileges, thinking about the whole of the wider global scientific picture. Co-creation of transparent platforms:
I think we could embrace the transparent publishing movement's spirit to co-create robust, transparent platforms with AI, facilitating equitable knowledge sharing. This isn't about making our own from scratch, but rather working with technology providers: actively engaging with tech companies like OpenAI, like Google, like Anthropic and like Microsoft to co-develop AI tools tailored for scientific communication, ensuring that they are fit for purpose. And I know that there are some great examples out there already of open, free and ethical AI platforms, but they should all be like that, and a lot of them should be far more transparent. And in making those connections and in engaging with various tech companies, we also need to hold them to account. Sadly, we need only look at social media to see what can happen when we don't hold these tech companies to account. There are many opportunities here, but there are also huge opportunities for profit that, unfortunately, a lot of people see. So what we need to do as scientists is hold these companies to account. We need to ensure that they contribute positively to the scientific community: that they're not responsible for spreading misinformation, that they're not responsible for widening the digital divide, that they're not responsible for reducing diversity, and that they're not responsible for reducing the quality and impact of our work. So yes, we need to engage these technology providers, but we shouldn't be doing it with a begging bowl. We should be doing it in a way that uses our position as researchers and as world leaders in science to put pressure on these organizations to make sure that they are doing things in an ethical, diverse, inclusive, and above all transparent and scientifically rigorous way. And innovate continually: we need to move beyond this business-as-usual point of view. Look, generative AI is here now; it's a reality. It's really cool.
It can make our lives much easier; it can make our science much more interesting. We need to be careful with it, and we need to think about what the limitations are, but why would we continue pretending that it's not there? We need to foster a culture of innovation in scientific publishing that embraces the transformative potential of AI, rather than pretending it doesn't exist. It's like pretending the internet doesn't exist: it's there now, it has been there for quite a while, and a large proportion of the world engages with it, so why are we not doing more with it? We've put together, I think, three initial recommendations for publishers, for editors, and above all for authors and scientists, and this is very much in keeping with EGU's publishing framework and guidance that I'll show in a second. One: do not allow AI tools as co-authors. I remember, about a year ago, a couple of papers in quite prestigious outlets were listing AI tools as co-authors. It just doesn't make sense; it's just a bit silly. Two: we should ask all authors to outline clearly how they've used AI in their research and write-up. Now, I don't think we should necessarily just ban the use of AI; that doesn't really make sense. We can use AI to improve our language; we can use AI to help paraphrase what we've written ourselves; we can use AI to do some of the mundane tasks, for example drafting letters to editors, or, you know, when you're reading some of your work and you think, do you know what, that's not quite as tight or not quite as engaging or not quite as effective as it could be: please can you paraphrase it or rewrite it in this way. I think that's fine. But we need to make sure that, as publishers, we're asking our authors how they're using AI, and that we give them that opportunity so that we're not punishing them, but having that open dialogue and treating scientists as the adults they are. And then three: I think, have a link to evolving policy.
And for me, I think what we try to do with EGU Publications, and which I think is a good rule of thumb, is that there's a difference between using AI to improve content, which is generally acceptable, and using AI to generate content, which is generally not acceptable. I think that, as a broad rule of thumb, is quite good. And again, in addition to that: not just accepting as fact whatever AI produces, but rather questioning it, interrogating it, using it as another data source, as we would with anything else. Look, as scientists we're very good at asking questions; we're very good at being critical, in a positive way, and at not taking things at face value. In doing that, we encourage others to do the same, and we can actually treat AI as this great opportunity to develop people's critical thinking skills and questioning of the world as well. So, broad recommendations: don't allow AI as a co-author; provide a space for authors to be honest about how they've used AI; and then maybe develop policy that enables AI to be used to improve content but not to create it. So here are the EGU and Copernicus guidelines, which Simon has very kindly shared in the chat; you can scan that QR code as well. Basically, the obligations for authors are that no fictitious names or AI tools are allowed to be listed as authors or co-authors, which is very sensible. Obligations for referees: referee comments and reports should always be written by a person, since they are accountable and responsible for the content they submit; it is not allowed to use AI tools to generate referee comments or reports. I think this is fair enough, right? Because as a referee you have quite a large privilege in that you're reading someone else's work, which could have taken months if not years to create, and using AI to make judgments on that is just not on. It's not cool and it's just not polite, so please don't do it.
And then, for manuscript submission with EGU and Copernicus journals, we have, I think, a really open way of doing this. Authors have to declare: "I am aware that if I used AI tools to generate part of my manuscript, I should describe the usage in either the methods section or the acknowledgements." So, recently I used ChatGPT to create a figure for a paper: I asked it to do a stylized version of the electromagnetic spectrum for a paper I was submitting to Geoscience Communication. And so I noted that in the acknowledgements and in the figure caption, and I had to tick that box during the manuscript submission as well. So those are the policies that we've got in place at EGU and Copernicus at the moment. I would say this, of course, but I think that they're very balanced and very fair, and I'm happy to have a discussion with people in the Q&A about what they might think. And then, as I wrap up towards the end of my speaking bit, I do want to share this, because I think it's hilarious, and it's what we want to avoid. People might have seen on Twitter, about 10 days ago, quite a large discussion around this paper, called "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway". And if you Google that, you will see another image come up, created by generative AI, that is maybe not suitable for a lunchtime webinar, which involves a certain part of a rodent's anatomy, shall we say. But look at this image on the screen. This is an image that was submitted to and accepted in a paper as acceptable. It's junk. Anyone who has spent any time using generative AI to generate images will know it often presents stuff like this, and it really struggles with rendering text. What does this mean? It doesn't mean anything. And I think that this reflects incredibly badly on the authors, on the reviewers, on the editor, but also on the journal.
I mean, I think allowing something like this through is pretty bad. They have since retracted it as well. But again, it's just that thing of: don't take everything at face value. Don't assume that it's going to be able to generate something better than you, and again, be very careful, especially with images, as we're not entirely sure where it is or isn't plagiarizing. Just a reminder that, for those people who will be joining us for the EGU General Assembly, either in person or virtually, we have an EGU Great Debate around this, entitled "Artificial Intelligence and Scientific Publishing: Blessing or Bane?". This is convened by the head of the Publications Committee, Barbara Ervens, and also by Eduardo. You can scan there to see the session, and I would really strongly encourage people to go along, because it is a hot topic and an evolving one as well. There are a few references that I've used to put together this presentation, because, you know, it is an evolving field. And there are my contact details as well. I'm going to stop sharing my screen now so that you can see my face, and Eduardo's back as well, and I'm happy to take any questions. I'll pick them up in the question and answer session. I don't know if Eduardo wanted to say anything, and then I can start answering those.

I just want to thank you very much for your very interesting presentation, Sam. I think it's good to hear about both the positive and the negative aspects of using AI in scientific publishing. And yeah, we have a few questions in the Q&A box, so if you want to have a look.

Yeah, I'll start answering those. Of course, thank you, Eduardo. And please feel free to continue putting your questions into the question and answer session, and I'll answer them as we get to them. So the question written here is: can or should AI be used for creative products like poems, songs, images and code, and if yes, how should it be acknowledged?
So I think this is a really, really difficult question. With my other hat on, as a poet and as editor of a poetry journal, we tend to have, again, quite an open policy of "let us know how you've used AI in the process", but it's very, very difficult, because I think, especially with images, as I touched on in the talk, it's very hard to tell how generative AI might have used images. It might have used an element of somebody's work that didn't have a Creative Commons license; it might have used an element of somebody's work that was explicitly not to be used in that way. Artists obviously rely on a lot of those permissions for their livelihoods. So I don't think there's a straight yes-or-no answer. Transparency in models would get around a lot of this, but however you're doing it, if you're running your own publishing house or if you're running a poetry journal, have a conversation with your authors and with the editorial team to see what you feel most comfortable with; have a conversation with the people that are likely to be publishing with you. If you are somebody that's publishing and using AI creatively, then have a word with the people at the journal that you're wanting to publish with, just to see what their policy is. I don't think there is a right or wrong answer. But I always just think about how you would feel the other way around: how would you feel if somebody used your work without crediting it, or if there was a potential that it was used somewhere without being credited? Would that be fair with your particular practices? And if you have used it, just acknowledge it in a way that is honest: for example, "I have used generative AI to find a simile for this particular poem." So yes, I do use generative AI in part of my poetic creative process.
I use it, for example, to create kennings, which I'm not the greatest at doing, or to come up with a rhyming word, or a way of paraphrasing something. Often I'll ask ChatGPT to critique one of my poems, or to say, what do you think this poem is about, which is quite a good way of getting to the bottom of it as well. So I'm not saying yes, and I'm not saying absolutely no; just have a word with the people that are involved in that process and try to make it more open as well. So I believe I've answered that question. The next one is: how can one engage with technology providers, and can you suggest some platforms for that? This is such a difficult question. Obviously you can cold call or cold email technology providers, which I tend to do; they'll rarely get back to you. The other thing is to publish in this space and to say: look, this is some work that we think needs to be done, and I'd like to engage with people in this way, so that you're presenting opportunities and frameworks to work with them. You could work with your university's or higher education institute's policy team; you could work with colleagues who have links to policy; you could get in contact with local or national policymakers as a way in; or just contact somebody in the research team at those technology providers, saying: look, I'm working in this particular space, I'm really interested in finding out more about how your models work, and I believe I have something to contribute. So not just reaching out to them and saying "look, you're wrong" (because they're not), but saying: these are where some of your limitations might be, and I think that I potentially have some answers for this; is this something you've considered? So offering them a potential solution and an offer of help, as well as pointing out a problem. The next question is: "thank you for your nice presentation". I'll acknowledge that. Thank you very much. Thank you for your nice question.
Fuavos, apologies if I'm mispronouncing people's names; this is the first time I've seen them written down. If we use ChatGPT for correcting the English in text that we've already produced, should we declare this? I think so. In the acknowledgements we can just say that ChatGPT or generative AI was used in this manuscript for typesetting or for corrections. I would just put that in, and certainly, thinking about the Copernicus publication system, when you're submitting your work you have that little letter to the editor or note to the reviewer; you can put something in there as well, just saying "I have used this particular large language model or generative AI for this particular task". In that instance it's absolutely fine and wouldn't be a problem in the slightest. Thank you for that question. So Antereep says: being new to the field of research and having some experience with AI tools, I've encountered instances where misinformation is produced by AI, echoing your observations. What strategies would you suggest for ensuring the accuracy of our results, or what steps can we take to validate the accuracy of our analysis? So I think the key thing is that I would never use a large language model or generative AI to analyse data for me, especially quantitative data. I might use it to compare what I've already got, or I might use it as a starting point, mainly because I just don't know how it's doing that analysis. For people who are more talented at coding than myself, you might want to develop your own generative AI algorithm; you can do that through some of the existing large language models and train it using your own data set, so you know what's going into it. But I think there's always that element of a black box. There are always elements of black boxes in a lot of scientific research.
So I think it's whatever you feel most comfortable with, but also bear in mind that you're going to have to report it in your work. As well as standing up to the reviewers and the editors, you want it to stand up to the wider publics who are reading your work, so that they can have confidence that the work you've done has reliability and validity, can be repeated, and has accountability as well. So my suggestion would be to make sure that if you are using AI, you can calibrate it and validate it, and again, don't just take everything at face value; interrogate what those results are as well. Next question here, which is from Joe Rowan: thank you for sharing your view on AI; how can we increase the chance that research papers are picked up by AI, being on the right side of bias? That's a really good question. In terms of general strategic communication, it's probably about search engine optimization. So when you're submitting your work, think about the abstract, think about the keywords, think about the plain language summary, and think about how you can share that work more widely. This is just good practice anyway. Are you posting about it on X or Twitter? Are you posting about it on LinkedIn? Have you shared it on Reddit? Have you shared it with the EGU blog network, or with your own blog networks? Are you talking about it on podcasts? Have you done a press release about it? You should be doing this for all of your scientific publications anyway. Have you put it in your email signature, so that when people, even away from AI, are searching, they find it?
And depending on what model you're using and how you've trained it, if you're using ChatGPT version 4 and asking it to search the web as part of that process, then it'll be beholden to the same search engine optimization as anybody else would be. So thinking about how to make your research more visible generally, by doing the things I've just talked about, is a way to make it more visible through AI as well; and then, I guess, increasing your platform and your visibility in that digital space. And then the last question from Barbara. Thanks, Barbara: how do you see the future of publishing, since it might become easier and faster to write papers? Will we be flooded by submissions, and what can we as editors and reviewers, but also publishers, do to ensure scientific quality and credibility in publications? So I think even before AI we could see that the growth in submissions and publications has been exponential in the past century, decade, even the past five years. I think that's only going to increase. I think there's a dangerous vicious cycle here. We're in a system, sadly, in research, where we have "publish or perish": to some extent, our personal careers are tied up with the amount of research that we publish, the amount it's cited, etc. So you could see a vicious cycle in which unscrupulous researchers decide to use AI to create a paper, which is then picked up by a not very good or ethically unsound publishing house — not EGU or Copernicus, but those that we know are blacklisted, for example — which publishes it, and then they've got an output. So I think that one of our roles as editors and reviewers and ethically sound publishers is to have these conversations, to be open, to be transparent, but above all to be scientifically rigorous.
So I don't think we should necessarily automate this. Turnitin and others say that they've got automated models for finding the use of AI, but when I've used them, they're not particularly great. I think, having used AI so much now and having been presented with so much student work and other people's work where it's been used, it's quite obvious to me. There are certain tells when AI is used, certain words in the English language, for example "delve", "dive", "realm". They are all words that crop up quite a lot in the output of generative AI. Certain phrases too; "endeavour" is another one that comes up quite a lot. If suddenly the text moves from English spelling to American spelling, these are tells. But I think that the future of publishing is how it's always been. There are going to be a lot of constraints, there are going to be a lot of people wanting to publish, and this is going to make it harder, but we just need to have open and honest conversations with our authors: an acknowledgement that in many instances there's an element of AI use that's okay, like improving the clarity of our work, but that in every single instance everything should be rigorous and valid and shouldn't be taken at face value. And actually what we'll find is that people will be driven to want to publish with journals like ours, journals that have that rigour but also that transparency and that openness as well. So it's not about shutting the gate, and it's not about opening the gate completely. It's instead about having a dialogue around what that gate might look like. And then another question now: is there an underlying implicit criticism when using AI? Could openly admitting to using AI lead to a perceived reduction in the overall scientific capabilities and skill sets of authors? That is a really good question. It's a hard one.
And so I think it's about having a conversation with your colleagues as well: look, we're co-writing this paper together, I've actually used AI on this little bit here; or there's just an assumption that we're going to use AI. I don't think people should be penalized for using AI, provided they've done it in a way that is ethical, in a way that is transparent, and in a way that gives credit to other people as well. But I don't think at all there should be an implicit criticism, because there's a danger there too, in terms of those white Western narratives again. It's very easy for me as an English-speaking white male to say "don't ever use AI, even for spell-checking purposes", because I might not need to use it as much as somebody else. So think about that broader positionality, that broader space, and how we can use generative AI and large language models to actually diversify the publication process, rather than widening that digital divide and making it exclusive and exclusionary. So there shouldn't be an implicit criticism, but again it's just about having those open dialogues, those conversations, and hopefully creating a space where we can do that, and I think the EGU journals and Copernicus are a great place to do that. At a meta level as well, I think many journals, my own Geoscience Communication included, would welcome studies that look at the use of generative AI. For example, a very interesting topic I would love to see a paper on is the extent to which generative AI can propagate misinformation in the geosciences, or the role of generative AI in combating the climate crisis, or the role of generative AI in the future of academic publishing. These are empirical studies that I would love to see published in our journals, and I'm sure Barbara and many others would as well. So, I think that's all of the questions answered.
I'm just going to check whether there are any other questions that have come up or that need answering. That's a great question, because I think we have a lot of early career scientists in our audience today. So I was wondering what resources can be provided to educate the new generations of researchers in the use of AI in publications. That's a great question, Eduardo, and I guess a challenge as well that I can offer to everyone, because I know that Barbara and others are on the call too. I would like to run a couple of short courses on this at future General Assemblies, or we could have a wider science communication summer school / training course on it, because there are so many opportunities here. I really want people to use generative AI. I use it on an hourly basis and find it incredibly helpful, mainly for drafting emails; it means that I can use my brain for something else. So I think that what we should be doing as a publishing house and as an organization is offering workshops and training sessions for early career scientists, and listening to them as well. I don't want to create work for Eduardo, but early career scientists that are on this call, please feel free to reach out to Eduardo or Simon or myself, and let us know what would be useful for you. We can try to put something together in terms of what that might look like, but Eduardo, I think we could definitely put together a short course on the kinds of tools that people can use and how they might want to use them in manuscript preparation and development. We could get a couple of editors involved from the Publications Committee as well. And again, that's all about having those dialogues and moving away from that implicit negative. I have a question here from Dany. Hi Dany. "Thanks for the presentation." Thank you, Dany. Great question: don't you think that generative AI will just enhance current scientific misbehaviour?
That's a very, very polite way of putting it. I think it could, Dany. I think there are always going to be rogues, people in the scientific community and outside of it, who don't abide by the same standards and ethical principles that many of us have built our careers around. But I think it's really important that we don't bury our heads in the sand, because otherwise what will happen is that generative AI will be used by other people and not by us. It can be a great tool for good, and it can be a tool to really help us. Eventually, once we better understand what these models are, and we're involved in their co-creation and development, we probably can use them for analysis with a greater degree of understanding and trust. So I think that it could enhance current scientific misbehaviours if left unchecked, but part of that uncheckedness comes when we don't talk about it. So we need to have these dialogues and these conversations, and if anybody's listening who is from these tech companies, we welcome the opportunity to talk to you and to co-create those spaces. But certainly an action for Eduardo, Simon and myself going forward is to think about this. Hopefully people have found this webinar useful as a slightly one-way exchange of information, but we can certainly look at putting together some opportunities for early career scientists to use generative AI to help develop their research skills going forward as well. And actually, this makes me think: EGU offers peer review training, as you know, and definitely this is a topic that we should also discuss in our next peer review training, so the use of AI in peer review, probably. Exactly.
So one thing that people might want to think about, and I'm saying this as a personal opinion rather than in any way an endorsed opinion, is this: you don't want to give the paper to AI and say "peer review this"; that's not good. What you might want to do is peer review it yourself and make a list of bullet points, but you might not necessarily be good at feeding that back in a constructive and critical way; that can be really difficult when you're doing peer review for the first time. So you might instead want to give a prompt to generative AI (and again, effective prompt writing is a skill set we can develop) to say: "Imagine that you are a very collaborative, helpful and constructive reviewer. Can you take these bullet points and turn them into a review that is coherent, encouraging and engaging?" I think that would be a really effective way of using it, and it would especially help those people who are new to peer review, or where English might not be their first language and sometimes that nuance can be lost. So I think that would be something we could definitely do for peer reviewers. And I think that's really important, the thing you mentioned in your presentation: the difference between using AI to generate content and using it to structure your ideas in an engaging way. Exactly. And responses to reviewers as well, you know: thinking about the points that I want to make, but sometimes getting the tone right, especially when you're not a native speaker, can be very difficult, somewhere between rolling over and being aggressively defensive, in a way that comes more easily to a native speaker. And I think that generative AI is something that can just help with that. But again, what we should not be doing is just giving it the paper and saying "please write the reviewer comments" or "please do the review", as I don't think that's respectful or helpful at all.
I have another question, because you mentioned the erosion of the skills of scientists when they use AI in a not so responsible way. So I was going to ask you: how can we define acceptable limits for the use of AI in publications without compromising originality and independent thought? Yeah, I think that's a broader thing with science more generally. Think about doing your PhD: when you do your PhD, you do your literature review, there's a lot of failure, there's a lot of trial and error, and there's eventual success as well. We don't want to remove that process. I'm not saying that we have to make things hard for the sake of it, but you can't learn how to do something by just putting it into a model; you have to understand how it works. So I don't think we necessarily need to say there's a limit, but rather we need to explain it. If it helps, a rule of thumb that I have when I'm using generative AI is that I will never use it to do something that I couldn't do myself. So I will use it to modify something, or even to give me a paragraph on a topic, or to turn bullet points into prose, but I know I could do that myself, so I'm then able to sense-check it, look into it and interrogate it as well. That's a rule of thumb that I have, and one we might want to offer to other people going forward; I think it's a good rule of thumb to have. Okay, thank you very much. So I think we're running out of time, but perhaps one last question. Several publishers have issued their own guidelines on the use of AI in publications, and they do have some similarities, but they also differ in some aspects. Do you think it would be better to have a single set of guidelines? No, and it's a really good question. Every publishing house is different, and they have different...
They have different epistemologies, I guess, different ideologies, and they also have different logistical requirements. I think as long as there are clear and transparent guidelines, that's fine, and what's good about that is that you as a researcher can then look at those publishers and choose. Just as I might think "I only want to publish in open access journals", so I'm only going to publish in open access journals, and some journals aren't open access, that's fine; I might also say as a researcher, "I only want to publish in a journal that has this particular attitude towards generative AI". So again, offering that choice but being upfront about it is great. And I just think that as long as there's a clear policy, the policies are going to be different, because different publishing houses are different in that respect. That's a great question, Eduardo. Okay, thank you, Sam. Are there any further questions? I think that's all of them, and I think we're almost on the hour, so good timing as well. Okay, so yeah, time's up now, so thank you very much, Sam, for presenting to us today, and thank you everyone online for attending and participating. Yeah, thank you. Goodbye. Thanks everyone. Bye now.