Thank you for joining us all today. I'm Korakotas, a community manager here at eLife, and I will be facilitating this call. It's a year since we started these calls, where we invite people working on software projects for the benefit of the research community to share updates on their progress and invite feedback and support from others. And because of this anniversary, I would like to invite all of those who have connected with us already to respond to a very quick poll, just to say whether you have joined any of our previous calls. Okay, we're collecting votes. Almost everybody has voted by now, so just the last three seconds and we'll be closing the poll. It's great to see that we have some new faces and some people who are rejoining us, having attended these calls before. Coming back to the call now, I hope that, as much as an overview of current initiatives, we'll be able to provide people with an opportunity to start new conversations and hopefully some new collaborations as well. Thanks again to everybody who's put themselves forward to talk. I hope you can see the link to the open agenda; if you haven't opened that yet, please follow it. I have shared it in the chat box. I also wanted to mention that I'm joined here today by Emmy Tsang, who has just joined eLife as our new Innovation Community Manager, and I'll let her introduce herself.

Hi, everyone. I'm Emmy, and I'm responsible for the day-to-day running of eLife's Innovation Initiative. I'm also going to be organizing the next eLife Innovation Sprint, which is a two-day, hackathon-style event where developers, designers and researchers sit together and develop prototypes and solutions with the aim of opening up research communication. More details about the Sprint will be available very soon, so if you're interested, please keep an eye on the eLife Labs page and/or our Twitter, @eLifeInnovation. Thank you.

Great, thank you, Emmy.
And so now let me just quickly explain how the call will work. Please refer to the open agenda throughout the call; I've mentioned that already, and you can follow the link in the chat box, or otherwise from the event post that you might have seen before you registered. Please introduce yourself on the agenda, and note that under line 25 there are some people who have already signed in to let us know that they are on the call. When you put your name in the box in the top right corner, it will give you a font color that will allow all of us to recognize your comments and questions throughout the document. So if you want, you can pick your color or keep the automated one; this will allow others to attribute your comments to you and potentially help them interact with you following this call. You can introduce yourself, say hello and share your thoughts during the call in the chat field on the right-hand side of the agenda, as displayed here on the slide, if you're not familiar with the Etherpad format. And yes, during the presentations you can type your comments and questions as we go under each agenda item; as the initiatives are presented today, the space is provided throughout the document. We're also looking for volunteers to take notes about the initiatives being discussed. If you would be able to do something like that, I would be very grateful; please let us know in the chat on the agenda if you're inclined to do that. If you want to share a project, please add yourself to the bottom of the document for future calls. We might even have time to get to it today, but that's unfortunately unlikely, I'm sorry. Alternatively, you can share a quick nonverbal update with us at the bottom of the agenda. And if any of this is unclear, please use the chat box on your right-hand side to let us know.
And again, after each presentation we'll have a couple of minutes for discussion. Please use the raise-your-hand button on the GoToWebinar panel to indicate if you'd like to speak; we'll be able to unmute you during the time for questions after each presentation. We're unlikely to have time for all questions, so don't hesitate to write them in the agenda for the presenters to answer that way. Now, on to the speakers: I will ask every speaker, in their brief five minutes, to introduce themselves and present their work. Just to let the speakers know, I will be unmuting you when your turn comes in the agenda, and Emmy will alert you to that fact in the chat on the agenda as well. And please forgive me if we're a bit strict with the timekeeping for the presentations; it's just to allow all of the speakers to present. So without further ado, I would like to invite Vincent, who is the first speaker for today, to present Plaudit to us.

Yes, thank you, Clara. Am I audible?

Yes, yes, I can hear you. I just noticed that I'm unable to give you the option to screen share, which is a little bit unusual. Are you joining from a mobile device by any chance?

No, I actually switched to Chrome because Firefox didn't work just now.

I'm very sorry about that. Okay, that makes it a little bit more difficult.

Yeah, we'll see how far we get, I guess. I'll just share some links in chat then for what I was hoping to demo. Anyway, I'd like to present Plaudit. So Emmy mentioned the Innovation Sprint just now, and Plaudit actually originated at the Innovation Sprint of last year, where the team I was in set out to tackle the problem of wanting to give researchers recognition for their work independent of the venue they publish in, because we felt the lack of that held back a lot of innovations, like rapid dissemination of research or the transition to open access. So to achieve this, we came up with the concept of public endorsements for academic research.
And we prototyped a small widget that could be embedded by publishers and preprint servers, which I would have loved to demo right now, but obviously can't. It allows people to sign in with their ORCID iD and to endorse research identified by its DOI; by integrating the widget next to the research they host, publishers and preprint servers can highlight those endorsements and allow researchers to add their own. So, yeah, we prototyped that during the Sprint, and then after the Sprint I teamed up with eLife and the Center for Open Science to develop that prototype further into a production-ready system. We are now a few months later, and the system is actually ready to be integrated by preprint servers and by publishers; I'll share the link in the notes here. Integration has been made pretty easy: it's basically contact us to get an integration code, and there's a small snippet of code that should be inserted where you'd want to display the widget. Other than that, we also created a browser extension, which works in Chrome and in Firefox, that integrates the widget into platforms that haven't done so themselves. Unfortunately I can't show it, but if you install the extension, then on certain supported sites the widget will be added to the content there. For other sites for which we didn't explicitly add support, but that do advertise their DOI in the page's metadata, the Plaudit icon is embedded in your browser toolbar, which still allows you to endorse that research anyway. I see the Plaudit icon is shown on screen here; that's added by the extension. And when you make an endorsement, all your endorsements will be visible on your endorsement page, and people will be able to subscribe to those endorsements.
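As a rough illustration of the DOI-discovery step Vincent describes, where the extension reads a DOI advertised in a page's metadata, something like the following could work. The meta-tag names here follow common publisher conventions such as citation_doi and are assumptions for illustration, not necessarily what Plaudit itself looks for.

```python
# Sketch: find a DOI advertised in an HTML page's <meta> tags.
# Tag names below are common conventions (Highwire / Dublin Core),
# assumed for illustration only.
from html.parser import HTMLParser

DOI_META_NAMES = {"citation_doi", "dc.identifier", "dc.identifier.doi"}

class DOIMetaFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.doi = None

    def handle_starttag(self, tag, attrs):
        if tag != "meta" or self.doi:
            return
        attr_map = dict(attrs)
        if attr_map.get("name", "").lower() in DOI_META_NAMES:
            content = attr_map.get("content", "").strip()
            # Strip a "doi:" prefix if the publisher includes one.
            if content.lower().startswith("doi:"):
                content = content[4:]
            self.doi = content or None

def find_doi(html: str):
    """Return the first DOI found in the page's metadata, or None."""
    finder = DOIMetaFinder()
    finder.feed(html)
    return finder.doi
```

For example, `find_doi('<meta name="citation_doi" content="10.7554/eLife.12345">')` returns the bare DOI string, which is then enough to render an endorsement widget for that work.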
So basically the call here is for researchers to install the extension if you're interested, and if you are a publisher, or if you publish research or preprints or anything, to contact us so we can discuss whether you'd be interested in integrating the extension or the widget. Yeah, that's it, I think. Hopefully I'm now a little bit more audible.

Thank you, Vincent, for sharing your update with us. This has never happened to us before, so I'll need to find out how to tackle it in the future, but we weren't able to show the full demo of your widget. I hope that at least the tiny display I was able to show of the app in the toolbar in Chrome has helped others to understand how it works and what it is. Now the next speaker will be Maël... oh sorry, I've missed the time for questions. So, as I said, the next speaker will be Maël, but we'll first go to questions. I can see that Adam Thomas raised his hand before, but he's actually lowered his hand again. His question: what does an endorsement mean? Is it at the level of peer review? Vincent, would you be able to respond to that?

Yeah, sure. So an endorsement is basically a binary signal that just makes the data available saying: hey, this researcher has endorsed this research. We've run some UX research, and people usually interpret it as a personal recommendation of that research. On some platforms you see people actually interacting with the research and giving feedback to authors already, and our hope is that that process will end with someone pressing the button and saying they endorse this. We are working on adding a slight level of extra detail that allows people to also mark an endorsement as robust, clear or exciting, to indicate whether the methodology is sound, whether the use of language is clear, or whether the research is particularly novel.
So that gives a little bit more information. Basically, the goal is to make more data available, because people often lean on the impact factor now, and I think it would be good to produce data on top of the informal process of quote-unquote peer review that is often already happening for preprints.

It has been explained to me before as a more sophisticated version of a Facebook like, so I'm not sure if that's making it too trivial for you, Vincent?

One important difference is that we're trying not to focus on the number of endorsements, but to lean on the reputation of the person giving the endorsement. So we're really highlighting the name of that person, and verifying through ORCID that it's actually that person, rather than just highlighting how many people have liked it and making it a sort of popularity contest.

Thank you. I wonder if there are any more questions. Okay, we won't have time for much more, so we'll now move on to Maël.

Hi everybody. Today I'm going to talk to you about the Reproducible Document Stack project. That's a project we started in collaboration with Stencila. What I'm going to do during these five minutes is a demo of the article that we published last week, which is a reproducible article, and I'm also going to talk to you about future possibilities for reproducible figures. First I wanted to explain what a reproducible document is. A reproducible document is a document that contains code and data that can be viewed, but also edited: both the code and the data can be edited, and the code can be executed too.
So a reproducible document has different levels of output. Usually a research article would have flat images, but a reproducible article can also have interactive figures; the code behind these figures can be viewed and edited, and finally the data and the code can be exported. So what does it look like on eLife? I'm going to show you one example: the article that we published last week. That's the original article view. On the original article we see a new banner in orange, and this banner contains a link that takes you to the reproducible article. At the moment it takes a bit of time to load the article, because we have to load a library and some code, but in the future it's going to be faster. You can see the usual narrative of the research article, and if we scroll down here, we can see, for example, a different type of figure. We have a link that opens the code of the figure, and this code is editable. So, for example, for this figure we display the total RNA per 1,000 cells; let's say we want to change that to 1,500 cells, and then re-run the code just by pressing Shift and Enter. Now the status is OK, and you can see the figure is now different. This article contains another figure with some code; you can imagine it's possible to edit that code too. But now I'm going to show you what will be possible in the future. Different types of figures will be available to you: we will have a protein viewer and 3D charts, but also an interactive map. So let's have a look at some examples on the Stencila website. Stencila is the open-source software that is used to produce reproducible documents. Here, for example, we've got this chart, which uses Plotly, so it's interactive; you can see the code here, and you have access as well to the data, which is just a tab that looks like an Excel spreadsheet.
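The edit-and-re-run step in the demo, changing a figure from "per 1,000 cells" to "per 1,500 cells" and re-executing, boils down to re-running a small code cell over the same underlying data. A minimal stand-in for that kind of cell (the numbers and function here are invented for illustration, not the published article's actual code):

```python
# Illustrative stand-in for an editable code cell in a reproducible article.
# Sample data is invented; the real article's data and code differ.
def total_rna_per_n_cells(rna_counts, cell_counts, n=1000):
    """Scale raw RNA totals to 'per n cells' for each sample."""
    return [rna / cells * n for rna, cells in zip(rna_counts, cell_counts)]

rna = [5.0e6, 2.4e6]    # total RNA measured per sample (arbitrary units)
cells = [2.0e5, 1.2e5]  # number of cells measured per sample

per_1000 = total_rna_per_n_cells(rna, cells, n=1000)
per_1500 = total_rna_per_n_cells(rna, cells, n=1500)  # the edit made in the demo
```

Re-running the cell with `n=1500` is all the demo's edit amounts to; the figure downstream simply redraws from the new values.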
So if we scroll down, we have here a 3D chart, so you can zoom in and zoom out, and again you have access to the code and the data. The last one I wanted to show you is the world map, which is also interactive and can be edited. So at eLife we believe this is the future of research articles, and we really want to help researchers share their manuscripts in a reproducible format, because we think that allows them to tell the full story. That's it for me. Do you have any questions?

Yes, thanks, Maël. I can see a question coming from Heather Staines. So, Heather, I will try and unmute you; hopefully that will go without too much trouble, and you can ask your question directly.

Hi, just a quick question. What if there's already a banner at the top of the page, like there's been an update to the article or something? Will the reproducibility banner replace that? Will it be in addition to that? Will it rotate?

We would stack this banner with the existing banner. We've done that in the past, actually: when an article has a correction article, we have a banner, so we would imagine an additional banner on top of this one.

Great, thank you.

I don't see any more hands raised and I don't have any questions on the agenda, so we can probably proceed. Hannah Drury will tell us a little bit more about Libero Reviewer.

So I'm Hannah Drury, I'm a product manager at eLife, and today I just want to briefly talk to you about, and hopefully demo, Libero Reviewer, which is our new submission-to-review application in development. It's the latest product in eLife's suite of tools and applications under the Libero umbrella. If you're familiar with these calls and have been involved in them before, you may have heard Paul Shannon, who leads eLife's technology team, talking about the Libero publishing platform.
And the product that I'm going to speak to you about today supports the processes that happen much earlier in the life cycle of a manuscript. Like eLife's other technology products, Libero Reviewer is built entirely in the open under an MIT license, and I've dropped a link to our GitHub repository in the links section of the Etherpad for anybody that's interested. So what can I show you today? In the last couple of weeks, we've publicly launched the first installment of the application to a percentage of users, and what this encompasses is all the steps an author needs to go through in order to make an initial submission to eLife. So hopefully this is going to go smoothly, as live demos always do. If you can imagine, this is my dashboard; I've logged into the application via ORCID. My dashboard is empty at the moment, so I'm going to begin a submission. You can see that the setup here is a familiar wizard setup that is going to guide me through the process of submission. We've tried as far as possible to make this a form-checking exercise as opposed to a form-filling exercise. You can see there that it pre-filled my details using my ORCID account, so it's pulled in all the details that I have publicly available on my ORCID record. Again, any of these fields are editable if I need to change anything. Right, my cover letter goes in this text field; in the interest of speed, I'm just going to paste the text in there and upload my manuscript. You can see now I've uploaded my manuscript to my submission, and I've got the option to add any supporting files that I think the editors may want to look at, which may help them in their assessment of my manuscript. What's happened here is that technology from ScienceBeam, running in the background, has extracted the title from the Word document or PDF that I uploaded, so this field has been pre-filled.
Again, it's completely editable, so if I need to make any adjustments for whatever reason, I can. Then my article type, my subject area; this is keyboard-navigable, and I can add any additional information that may be pertinent. I now come to the stage where I want to suggest editors that I think may be appropriate. This has plumbed in all of eLife's senior editors directly from our API, and all of the information that's available comes directly from the journal website, so I can use that information to review their expertise and their research focuses and choose the appropriate expert. I can exclude any experts that I think may have a conflict of interest; in this case, there is nobody. If you noticed, that took slightly longer to load, because there are actually over 350 reviewing editors at eLife, so I might want to use this search box to narrow down the number of people that get returned. Then there's the disclosure statement, where I agree for my information to be shared appropriately. I can cancel that, go back, and navigate my way back through the wizard to check everything, but I'm fairly happy with the information that I've plugged in, so the submission has been sent. Basically, what's happened now is that the submission has been zipped up into a MECA package; that stands for Manuscript Exchange Common Approach, and if you want to learn more, I've dropped a link into the Etherpad about it. That's how we integrate with our current system, because, as you noticed, the application so far only supports initial submission; the rest of the editorial process is still supported by our current system, so we need a way for these two systems to interact with each other. So that is the end of my demo; I don't know how I did on time.

We can probably allow one question. I can see that Shyam Saladi has raised his hand, so let me just unmute you, Shyam, and you will be able to ask your question directly. Here you go.

Thanks, that was very interesting.
So one part of my question is about the reviewer selection. Is there any integration with the PeerScout platform that Daniel Ecer and others are working on at eLife? I imagine going through the text or the abstract and using that to help suggest reviewers.

Yeah, absolutely. At the moment there isn't any integration with PeerScout, but there are definitely plans in the pipeline for us to integrate with more of the tools that are available to us. The application is already utilizing the power of ScienceBeam, as I spoke briefly about, but PeerScout is certainly another tool that we can use to improve the experience.

Thank you.

Great. And it's just lucky that Shyam is our next speaker as well. So, Shyam, I'm just going to make you a presenter so you can show your screen and give a presentation or do a demo as you wish.

Can you hear me and see my screen?

We can see it and we can hear it.

Cool. So I'm going to take a couple of minutes to talk about some ideas we've been playing with that deal with scientific visualization in the literature. This is the first thing I'd like to show. For scientists, visualization is really half the battle. Here are two color maps; certainly one of them you're very familiar with, which is the jet color map. Unfortunately, jet and other rainbow-style color maps introduce artifacts into data, and the reason is that our eyes perceive brightness in different ways across the visual spectrum. So there's been an effort, starting a number of years ago, to create new color maps that are so-called perceptually uniform: a change of a given delta in one area of the spectrum is perceived the same way as a change in other areas of the spectrum. The other thing to think about is that a rather notable portion of the population is red-green colorblind, or has other forms of colorblindness.
And so it's useful for scientists to think about whether their graphics are accessible to the broader population. A sort of image that you might be familiar with, that some people get when they go to the ophthalmologist, is a retinal scan by OCT. You'll see there are a couple of features here, a couple of ridges in this retinal thickness scan, that are actually just artifacts of the color map, of the way you map from numbers to colors. Re-rendering that initial data using viridis, a perceptually uniform color map, shows that these ridges are completely artifactual. And the image also looks similar under models of red-green color blindness, which is really useful to have. There's quite a bit of literature discussing this; if you're interested, this 2007 paper by Borland and Taylor is quite good. This effort started with the idea that one often wants to show data that's already in the literature during a presentation, and so me and a Caltech undergrad put together some code that takes any image and can convert it from a rainbow color map to viridis. It works on the client side in the browser, and you can test it out at fixthejet.caltech.edu. But it really came out of a simple conversation with a friend, where he sort of had a bet on whether we could send an email to every author who's ever published a paper using a rainbow color map, to help them improve future visualizations. I modified this a little, but we achieved it: we send an email to every author who's published a preprint on bioRxiv with a figure using a rainbow color map, and that's the application we call Jetfighter.
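The client-side converter just described maps each pixel back to its position along the jet colormap and re-renders that position in viridis. A toy sketch of the idea, using coarse five-point approximations of both colormaps (real colormap tables are much finer, and this is not the tool's actual code):

```python
# Toy rainbow-to-viridis conversion. Five control points per colormap is a
# deliberately coarse approximation, assumed here for illustration only.
JET = [(0, 0, 143), (0, 255, 255), (127, 255, 127), (255, 255, 0), (128, 0, 0)]
VIRIDIS = [(68, 1, 84), (59, 82, 139), (33, 145, 140), (94, 201, 98), (253, 231, 37)]

def _lerp(c1, c2, t):
    """Linear interpolation between two RGB colors."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c1, c2))

def sample(cmap, pos):
    """Color at position pos in [0, 1] along a control-point colormap."""
    scaled = pos * (len(cmap) - 1)
    i = min(int(scaled), len(cmap) - 2)
    return _lerp(cmap[i], cmap[i + 1], scaled - i)

def jet_position(pixel, steps=256):
    """Invert jet: the colormap position whose color is nearest to pixel."""
    best = min(range(steps), key=lambda s: sum(
        (a - b) ** 2 for a, b in zip(pixel, sample(JET, s / (steps - 1)))))
    return best / (steps - 1)

def jet_to_viridis(pixel):
    """Re-render one jet-colored pixel in viridis."""
    return sample(VIRIDIS, jet_position(pixel))
```

Applied pixel by pixel across an image, this recovers the underlying scalar field from the jet rendering and redraws it perceptually uniformly, which is essentially what the in-browser tool does.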
So I'll show you a little bit of how it works, and then give you a demo from the website; I just think it's sort of cool. The way this works is you take images, deconstruct them into pixels, turn them into their component colors, use that to map to portions of the color map, and then you come up with just a very simple metric, which is percent coverage. If you think about this, it's really actually not that hard: here I have it, it's like 13 lines of code, and then the whole application is wrapped in Flask to make the front end that you'll see online. So with that, let me just give you a demo of the application. Recently we've actually moved the domain to jetfighter.ecrlife.org, ecrLife being the blog for the eLife Ambassadors program. What you'll see here is the way it'll look publicly: you have each of the manuscripts that are posted to bioRxiv, those with a rainbow color map detected in purple, those without a detection here in green, and then whether the author has been notified or not. We can press this option here, manuscripts with rainbow detection, and just look through these. You'll notice here there's an image that has a rainbow color map; I'll expand this a little bit, and you can press on the image and it goes to the page of the paper, the PDF on bioRxiv, and of course these are clearly rainbow color maps. The application has sent out many hundreds of emails; emails are sent through SendGrid, and the images here are displayed using an IIIF server, which decouples the process of detecting images from dealing with all the image formats.
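The detection pipeline Shyam outlines (deconstruct into pixels, match against the colormap, compute percent coverage) might be sketched as follows. The palette resolution and distance tolerance are invented parameters for illustration, not Jetfighter's actual values, and the jet approximation is deliberately coarse:

```python
# Illustrative Jetfighter-style check: what fraction of an image's pixels
# lie close to the jet colormap? Tolerance and palette size are assumptions.
JET = [(0, 0, 143), (0, 255, 255), (127, 255, 127), (255, 255, 0), (128, 0, 0)]

def _interp(c1, c2, t):
    return tuple(a + (b - a) * t for a, b in zip(c1, c2))

def jet_palette(steps=64):
    """Densely sampled colors along the coarse jet control points."""
    out = []
    for s in range(steps):
        scaled = s / (steps - 1) * (len(JET) - 1)
        i = min(int(scaled), len(JET) - 2)
        out.append(_interp(JET[i], JET[i + 1], scaled - i))
    return out

def rainbow_coverage(pixels, tol=40):
    """Percent of pixels within tol (Euclidean RGB distance) of jet."""
    palette = jet_palette()
    hits = sum(
        1 for p in pixels
        if min(sum((a - b) ** 2 for a, b in zip(p, q)) for q in palette) <= tol ** 2
    )
    return 100.0 * hits / len(pixels)
```

A figure whose coverage exceeds some threshold gets flagged; everything else in the application (fetching preprint figures, emailing authors) is plumbing around this one number.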
If you look at the image's address, you'll see that it's IIIF, just a simple domain, and the nice thing about IIIF is that you totally outsource the process of rendering the image to a separate application; here we can increase the size. It's something that I think Giovanni at eLife showed me, so we set up our own server instance. So the application works fairly well. The main sort of false positives, if you want to call them that, are images where red and green are used as fluorescence colors; I'll open up a couple here. This is a good example; the others are also here. For example, here you have red and green, then you have a merge, and of course it's an image inaccessible to someone with red-green color blindness. The best alternative, which many papers use and which is very common in certain subfields, is using cyan and magenta. So we're hoping to expand Jetfighter in a number of different ways, really just a conceptual idea at this point, but with the eLife Ambassadors, to ask whether we could send emails out for fluorescence images, for example, and personalize that. I'll give you a little example: the whole project is MIT-licensed and on GitHub. The email template is on here, and one can make suggestions on how to improve our template. It's very simple: it just includes the title, the pages that could be improved, a little snippet, and then resources that might be useful. We're thinking about expanding in a number of directions: you can imagine posting a comment to the Disqus page on bioRxiv, or maybe responding to the tweet; many other things could be fun. The idea is really to take this idea of preprints and help authors improve the literature before things get set in stone. Thanks.

I think that was very thorough, Shyam, so thank you very much. I'm afraid we won't have any time for questions now.
I just wanted to move on: our next speaker will be Heather Staines. Just to let everybody know, you can of course leave any questions you have for Shyam on the agenda, and he will be able to respond to those as well. Heather, I'm going to make you a presenter now, and that will ultimately unmute you. So, let's go ahead.

Can you guys hear me? I hope so. Hello? Great. Hi everyone. I'm Heather Staines, the Director of Partnerships for Hypothesis, and I wanted to tell you a little bit about the evolution of our publisher group functionality, designed specifically for publishers. Most of you probably already know that eLife was our development partner in creating the publisher-specific features and functionality, but I want to show you how that's grown and spread; as it happens, we have recently celebrated our one-year anniversary with eLife, so we're very excited about that. I included a lot of links in the document, but I want to focus on some other publishers who have used an open group similar to what eLife did, which means anyone can see it and anyone can participate. This is an example from the American Psychological Association, which has Hypothesis deployed across their entire PsycNET platform. You'll see that, like eLife, the annotations launch with a button here at the top; you can see the lovely APA branding, as well as the moderation flag in the lower right corner. We've learned that just enabling annotation on a site is not really enough: people are kind of fearful of the blank page; they might not know the tool or what it can do. So we're working with our publishers on engagement strategies and developing best practices. One of the kickstarter strategies that APA used was to take their top 100 downloaded articles from 2018, reach out to some of those authors at the top, and say: hey, do you want to annotate your article?
So you can see here Russell Warren has tooted his own horn a little bit in the top card, his being the 8th most downloaded article. He's included a little interview he participated in (the video plays: "Hi everybody, I'm here with Dr. Russell Warren, he is the associate professor..."). He's also got a link to a plain-language summary that he did using the Kudos tool, so there's good synergy there. If we jump (I've got these things pre-loaded just for the sake of time) to the APA publishing page, you can see all of the annotations that they've done populated across their site. This can be searched; there's a nice use of tags, and you can filter on a user if you want to learn more about what's happening there. So this is one recent example that you can play with. The other example I really want to show you today is a more recent group, which went live just around the first of the year. This is the American Society of Plant Biologists. Theirs is an open group anyone can participate in, but they had some specific things they really wanted to accomplish with it. They had been publishing peer review reports for the journal The Plant Cell as a kind of step towards more transparency in peer review, but these reports were published as supplemental material, and a lot of readers didn't even know they were there. So one of the use cases they're starting with is actually using an annotation card to draw attention to the fact that there is a peer review report, adding a little tag. Again, let me jump to it here. These are PDFs; it doesn't change their workflow at all, but we're hoping that having this visible in the annotation card will get some traffic. Their group page will show all of the articles that they have added peer review reports to. They've also got a lot of information that lives on a member blog, which is a separate site.
Lots of publishers have multiple sites, and annotation groups can actually span multiple sites. Another use case they have: they've been publishing first-author profiles, which, again, live on the Plantae site. So here's an example of how they might want to mock up moving some information from their blog and connecting it here, to kind of bring things together. So these are just a couple of examples of open groups; some publishers will have an open group as well as a restricted group designed for a more specific use case. I mentioned that we work with every publisher to develop an engagement strategy. We find that fitting in with existing workflows is important, as well as making some decisions in advance about what success looks like and how to measure it. It's not just a numbers game: we find that only about 20% of the annotations that are made are publicly visible; 20% are private and 60% happen in collaboration groups. So what we see when we go to a publisher site is really the tip of the iceberg, but we can provide metrics back that give insight into which articles, and at which times, activity is occurring, which is a great metric as well. That's about it for today, but do follow the links on the doc and play around a little, and if you have ideas for other use cases, don't hesitate to get in touch with me, and I'll add them into the engagement strategy process to discuss with other publishers. Thank you.

Thank you, Heather. I wonder if there are any questions, because we still have time for a couple, if anyone has questions about this very quick run through quite a lot of different features. But if people aren't able to think of anything immediately, then we can move on to Aravind Venkatesan (I apologize if I have mispronounced your name, Aravind) from Europe PMC, to tell us a little more about what they're up to. So here we go; hopefully we'll hear from Aravind very quickly.

Hello, can you hear me? Yes. Yeah.
Hi everybody, I'm Aravind Venkatesan, a senior data scientist in the Literature Services Group, and today I'll be talking about work we have done on indexing data availability statements. As you may know, we run the Europe PMC resource. It's a database for life science literature, and we host a variety of content: preprints, abstracts, full-text articles and patents. What we have done is work on indexing data availability statements. So what are data availability statements? Over the recent past, to boost data discovery, journals have introduced a section called the data availability section. It's a self-explanatory section where authors describe the data or software they've used, and it varies in the way it's written. For example, here's a straightforward data availability statement, where the statement is clear and points to a specific data repository. Sometimes it's more complex, with more information packed in — for instance database accession numbers and project numbers — and a lot of information needs to be unpacked. Sometimes there are also self-referential statements: for instance, here the last line of the data availability statement refers to an appendix, which in turn discusses the availability of the various datasets used. So it is important to index this information so that users can go straight to the underlying data. For some time now we've been indexing various sections of the article — as you can see here, users can search for a specific concept in the discussion section, the introduction, or the figure legends — and what we have done is extend this to the data availability section as well.
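Once a section is indexed like this, it becomes queryable through Europe PMC's REST search endpoint as well as the website. A minimal sketch of what such a query might look like — the field label DATA-AVAILABILITY here is my reading of the talk and should be checked against Europe PMC's own search syntax documentation:

```python
# Sketch: restricting a Europe PMC search to data availability statements.
# Endpoint is Europe PMC's public RESTful search service; the section
# field name is an assumption based on the indexed label described above.
import urllib.parse

BASE = "https://www.ebi.ac.uk/europepmc/webservices/rest/search"

def epmc_section_query(term, fmt="json", page_size=25):
    """Build a search URL matching `term` inside the indexed
    data availability section."""
    query = f'DATA-AVAILABILITY:"{term}"'
    params = {"query": query, "format": fmt, "pageSize": page_size}
    return BASE + "?" + urllib.parse.urlencode(params)

# e.g. urllib.request.urlopen(epmc_section_query("figshare")) returns a
# JSON result list of articles whose statements mention Figshare.
print(epmc_section_query("figshare"))
```

The same field syntax works in the website's search box, which is the behaviour Aravind describes for end users.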
So here you could check for database accessions or Figshare, for instance, and what we find is that the presence of this section has been increasing, which is encouraging. What was really challenging, however, was the section definitions themselves, because they can be found in various formats, with various labels referring to the same thing — it's not just "data availability statements" — and the section's position in the article varies a lot too: sometimes it's a standalone section, sometimes it's nested. So it was a bit challenging for us to take all the variations into account. For instance, here you can see the top 18 labels used to refer to data availability. We have combined these and indexed them under one label, so now users can directly use the data availability search syntax to get to the underlying data. You can read a little more about it in a blog post written by our outreach officer Maria; the link is provided in the agenda. That's what I had to report today, thank you.
Right, thank you very much for that update. I wonder whether anybody has any questions? We are towards the very end of the call, so if I don't see any questions immediately, I think we'll move on to our next speaker: Matias Piipari will tell us what they're currently up to with the Manuscripts app. Matias, I'm going to unmute you now, so we'll be able to hear you when you unmute yourself as well.
Hello, can you hear me? Yeah, we can hear you now.
Sorry if it's noisy — this is somewhat improvised; I'm outside because I messed up the schedules, hence also not sharing my screen. But basically, Manuscripts.io — if you go to Manuscripts.io, that's the next generation of the project we're working on. (The agenda link points to Manuscriptsapp.com — yeah, that's my bad.) Right, so basically Manuscripts and Authorea are joining into a new effort called Manuscripts.io, and we've been working on this within Atypon for well over a year — a year and a half, really. The goal from the beginning has been to open source the work we're doing, and as of two days ago we started putting out open source resources from this authoring platform, which has well over 200,000 users combined across the two platforms. What we're working towards is really a reproducible article with code execution support enabled, but also a lot of integration at the level of cross-referencing and handling multiple manuscripts in a project, so you can, for example, write books nicely with it — really a professional, academic-author-oriented writing environment, based on the lessons I have from creating Manuscripts and, from the Authorea side of the team, Alberto Pepe, who is also working with Atypon these days. I was hoping to demo this, but unfortunately I'm not at a computer right now. There is a whole pack of open source repositories up at GitLab, which you can find if you go to Manuscripts.io and click on "view source", ranging from the document editor components we've built to document conversion services — we do, for example, JATS, HTML, PDF and WordML conversions out of the HTML- and JSON-based file format that the application itself uses. I think that's maybe a good couple of minutes' summary of what we're working on.
Great, thank you, Matias. I wonder whether there are any questions. I do apologize for the outdated link — for the part you were referring to, we'll make it work, so hopefully next time we'll be able to get the demo from you, if the timing works out.
Understood, great. So, just to let everybody know: you can obviously still continue asking questions on the agenda, and as Emmy has mentioned already, there is some exciting news coming from eLife. If you have a look at the bottom of the agenda, some other people have also dropped information about upcoming events, so do have a look. Also, if you'd like to leave feedback for the call — and I do apologize for the number of technical difficulties we experienced today; that has not happened before — please do leave your feedback as well. And if you can, or especially if you don't put any feedback on the agenda, please complete the very brief exit survey when you disconnect from the webinar. Again, thank you for joining us, thank you to the presenters for sharing their updates, and as always we will publish a brief summary together with the recording of this call in the coming days, within a week or so. So again, thank you everybody for joining, and have a lovely rest of your day or evening, depending on where you are. Thank you.