Okay, 155 people. I think we can start. So, welcome everyone. We are very glad to have you today for this session on COVID-19. Why did we decide to make this session happen? Mainly because COVID-19 had quite a big impact on the organization of this conference: we very quickly had to move to an online format. And we felt we had to discuss and debrief this major health crisis. But we didn't want to repeat what many others have already said. So today we will take an original focus and dive into aspects of the pandemic that have perhaps been little discussed, but are deeply connected to our interests as bioinformaticians, as scientists and as SIB members: namely, how did the COVID-19 crisis affect our work as bioinformaticians, and what went on behind the scenes over the past few months? We promise this will not be a talk about the latest chloroquine paper in The Lancet. I'm Julien, from the bioinformatics core facility at the Department of Biomedicine at the University of Basel, and I will be co-chairing this session together with Maya Berman, who is communications manager in the SIB communications and scientific teams. Hi everyone. Together with us, we are pleased to welcome four speakers from all across Switzerland, who are among the many SIB members actively involved in developing and providing tools and resources to the community in relation to SARS-CoV-2 and COVID-19. First, Emma Hodcroft from Basel, who will tell us the Nextstrain story. Philippe Le Mercier from Geneva, on what happened with ViralZone. Patricia Palagi from Lausanne, on switching training to virtual mode. And Fabio Rinaldi from Lugano, on automated literature discovery tools. Obviously, we could not invite all the SIB members who contributed a tool or a resource during the COVID-19 crisis, but you can discover many other contributions. Maya, a small interruption: we are seeing the presenter view and not the actual slides. Aha. 
I did exactly the same as when we tested. So yes, we could not invite all the SIB members who were involved in developing a resource for COVID-19, of course, but you can discover all their contributions on the web page we put in place, and Julien is putting the link in the chat right now. I invite you to check the chat regularly, because as we speak we will post links to resources, so it's a bit more interactive for you as well. Today we will discuss several topics, as you know: how the crisis impacted communication, especially with the media; training in times of confinement; the data curation and annotation aspects; several aspects of open science, from collaboration to data sharing and integration; and finally we'll touch on funding for bioinformatics resources. Regarding this last point, there is a very timely initiative that we would like to share with you as well, so stick around to the end to learn more about it. Regarding the unfolding of the session: we will first discuss with the speakers for about 30 minutes; Julien and I have questions for them. During that time you can already use the Q&A to ask your questions; we will collect them and address them at the end, and we'll make sure to keep about 15 minutes for your questions. Don't forget to indicate to whom each question is addressed. And, as you're now used to, please use the voting functionality: it avoids the same question being asked twice, and it brings a question to the top for us. With this, Julien, over to you. Thanks, Maya. So our first big theme today is communication. With the COVID-19 crisis, resources developed and maintained by SIB groups were suddenly in the spotlight, for other scientists but also for the general public. We'll start by focusing on one of these resources that made, and still makes, the headlines: Nextstrain. Hello, Emma, do we hear you? Hello, Julien. Happy to be here this morning. 
So Emma is a postdoc at the University of Basel and a co-developer of Nextstrain. Emma, could you quickly present Nextstrain and how you got involved in its development? Yes, so Nextstrain is a completely open-source project that was started at the end of 2014 by Richard Neher here at the University of Basel and Trevor Bedford at the Fred Hutchinson Cancer Research Center in Seattle. It was originally developed to track the diversity of flu as it spreads around the world year on year. We've since expanded to many different pathogens like Zika and Ebola, and before I worked on coronavirus I was actually working on another virus called Enterovirus D68. So we've been able to use this to track many different pathogens. So what happened with the rise of the COVID-19 epidemic? This has really changed how we interact both with science and with the public. We normally get around 2,000 visitors a month, and at the peak of the epidemic we were getting 2 million visitors a month. So we've really had to be on top of not only the science but also the technology needed to support that number of visitors. And compared to previous epidemics, the data has been quite different. For example, Enterovirus D68, which I study, has been known since the 60s, but only a few thousand samples are available. In contrast, we've known about this coronavirus for six months and we already have almost 40,000 samples. We've also seen a big difference in how quickly we get those samples: some are turned around in as little as 48 hours, and the majority we get within a month, which is not usually what we see when we deal with viruses. So it's been really exciting to be able to do this truly in real time. Communication was an important aspect; we've seen you basically all over the media. Was that something you were used to, and if not, how did you deal with this new aspect of your work? Yeah, so no, that's not normal for me. 
At the end of January I had about 800 followers on Twitter, and I now have over 20,000. So there's been a big increase in public attention and public requests. I haven't been keeping track closely, but I know I've been part of at least 59 major articles, TV interviews and podcasts, and every time I Google my name in the news I find out I've been quoted in more. At the peak of the epidemic I was probably receiving 10 to 20 media requests a day, and clearly I couldn't handle them all. So one thing I've done is set up an auto-reply email, which is really helpful: it helps people help me triage emails and media requests, as well as teaching me how to handle the media. I've also learned very quickly to be picky about who I do interviews with, to support in particular media outlets dedicated to communicating the science clearly, because we have had a problem with misinformation in this pandemic, and it makes a big difference. The interview itself is different with people who are prepared and ask good, interesting questions I haven't answered 100 times already, versus those who just want to repeat the same article that was published last week. I definitely feel the public gets more out of it when I can do an interview with people asking good questions about the latest science, rather than rehashing the story from last month. So this was a unique opportunity for popularization; for example, you could teach the public about phylogenetics. I've seen articles in major newspapers across Europe. Which concepts or results were hard to convey? For example, did you manage to convey that scientific work is a slow process, and that we don't have all the answers? So this has definitely been a big part of the communication with the media, particularly when it comes to phylogenetics. As you can see on the slide being shown, it's kind of beautifully dangerous. 
We have these really wonderful, interactive, colorful trees that encourage people to explore them and click on them. But of course phylogenetics is a science, and there's a lot of uncertainty and a lot of limitations based on the data we have. We certainly do not have samples from everyone, and we know our data is somewhat biased. It's very tempting for journalists to click on these lines, see connections between one country and another, and tell a story with this. But the problem is that very often we don't have the data to say whether that's actually what happened, or whether there's sampling bias or other scientific uncertainty in there, and this has been hard for us to convey. We've had to be really careful about what we say, particularly online and to the media, because things we wouldn't hesitate to say to another scientist, who we know understands the uncertainty around what we're saying, can be really misinterpreted by the public and by the media, and it's really hard to claw that back once it's out there. So we've had to be much more cautious up front, being very clear in what we say and very clear about our limitations and our confidence. We've also had to make some changes, well, we chose to make some changes, to our website to try and help people understand things a little better. For example, we've changed the mutation rate display: instead of showing a very small number reflecting the substitutions per site per year, which is very hard to interpret if you don't have a genetics background, it now shows the substitutions per year for the whole genome, which is a larger, rounder number that is much easier for people to conceptualize. 
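The display change Emma describes is a simple unit conversion: a per-site rate multiplied by the genome length gives whole-genome substitutions per year. A minimal sketch, where the rate and genome length are illustrative round numbers for SARS-CoV-2, not Nextstrain's exact published estimates:

```python
# Converting a per-site substitution rate into whole-genome substitutions
# per year, as described above. The numbers below are illustrative
# approximations, not Nextstrain's exact estimates.

def subs_per_year(rate_per_site_per_year: float, genome_length: int) -> float:
    """Whole-genome substitutions per year from a per-site rate."""
    return rate_per_site_per_year * genome_length

rate = 8e-4       # substitutions per site per year (assumed, order of magnitude)
genome = 29_900   # approximate SARS-CoV-2 genome length in nucleotides

print(round(subs_per_year(rate, genome)))  # about 24 substitutions per year
```

A small per-site number like 0.0008 is hard to grasp, while "roughly two mutations a month across the genome" is immediately meaningful, which is the point of the change.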
We've also developed tools to help people understand what they're seeing when they visit our website. In particular, something that's been very popular: for a few months we did what we call situation reports, where we released a narrative, meaning that we changed the picture on Nextstrain and the text explaining it at the same time, to walk people through what we found this week, how we know that, and what it actually means. We found this was really well received and helped us reach out to the public very clearly. Thanks, Emma, we'll get back to you. But continuing from communication to training, we'll take this opportunity to switch to Patricia. Patricia, can we hear you? Yes, hello everybody, good morning. So Patricia, this COVID-19 crisis was obviously a challenge for the SIB training team. Oh yes, it was. We had to change the way we were doing training from one day to the next; it was a real challenge. At the moment we were locked down, we had 15 courses planned for the next three months, and we had to find a solution immediately. We didn't want to cancel the courses, so we had to act quickly. We did have some experience with e-learning in the past, things like modules, pre-recorded virtual seminars or in silico talks, but it was nothing like what we were facing, because you see, the SIB courses are very practical: we teach people how to program, how to analyze data, how to use the tools and resources of the SIB and other bioinformatics tools, and we wanted to remain very practical. So we really had to find a solution quickly on how to move to teaching online, in streamed versions. It was interesting, too, because we had to discuss with the people teaching and convince them that it would be possible and feasible, and 
we were there; the training group was there to help them, so it was really a good collaboration. People were very positive about it: the trainers, the group leaders and other SIB members teaching all jumped into the adventure with us, which was great. It seems this turned out quite well? Yes, it turned out well, because of the courses we had planned, we only cancelled three, where the timeframe was too short for the teachers to move online. In the end, by the end of summer we will have run 14 courses, all streamed live, and it went really well. We have received many messages from people saying they could attend from anywhere, which was really good for them; they were grateful, and it was a good experience for everyone. So far so good. So these courses are now available after streaming? Yes. We have done two kinds of courses. For the courses on SIB resources, for example, we changed the way they were run, separating the lectures from the practicals, because recording the practicals is not very interesting for everyone. The lectures on SIB resources are now on the YouTube channel; they are not one hundred percent beautiful lectures, but at least they are online. This is also something we had to do quickly: transform the lectures into videos and put them online. We didn't have many people in the group doing this, so we had to jump into it, but now, if you go to the SIB YouTube channel, Maya has put the link, you can already see some of the videos, and more are still coming, so we are continuously updating the videos on the YouTube channel. Cool. What are the plans for the future, after the crisis? Well, we have been discussing and thinking about how we 
are going to do this. The reality is that many people still prefer face-to-face courses. It is a different way of teaching, of course: when you are online you have other challenges; you don't get the feedback from the people, you don't see the people you're talking to, so it's challenging. But there are other things that are very interesting about streaming a course: you can reach people anywhere, and there is less traveling, because people can attend from anywhere. So there are very positive sides to this. We are currently discussing which courses can continue to be done online and which courses we prefer to do face-to-face, live, in person. The plan is still evolving, but the idea is to have a mix of face-to-face and streamed courses, and also more e-learning modules, which we are going to start working on more deeply now. So we're looking forward to that. And maybe to finish, let's mention the series of webinars you did on the scientific aspects of COVID-19, quite complementary to this session, I would say. Exactly, yes, it was very complementary. With the COVID situation we saw there was also a need for some small webinars about specific topics, for example SIB resources that hold COVID data: talk about them, and give people time to ask questions. So we started a small series that's going to last maybe a little longer. We had one webinar on ViralZone by Philippe, who is going to talk later; we had one about glycobioinformatics by Frédérique Lisacek; there's one coming up on V-pipe by Ivan Topolsky; and I can announce today one about statistics that is going to be given by Frédéric Schütz. And those, too, are available as videos on the YouTube channel. Thank you, Patricia. So you mentioned 
Philippe Le Mercier; he gave this SIB webinar on COVID-19 and he's one of our speakers today. This brings us nicely to our third topic, data curation and annotation. We'll first take the perspective of a knowledge base, ViralZone, to discuss some aspects of expert curation in times of health crisis. Philippe, can we hear you? Yes, hello, glad to be here. Hello. Can you quickly present ViralZone and tell us how you came to develop it? Yes, okay. So I am a virologist; I worked for about 10 years in laboratories before moving to Swiss-Prot. My role there was to annotate viral proteins, and in doing so I realized I had to gather a lot of knowledge about each virus to be able to do it: what kind of host it has, in which cell compartment it replicates, whether in the cytoplasm, because viruses are so broad, so different. Gathering all this knowledge, I could not put it in the Swiss-Prot database, so something was lacking. There were books at the time, but they were pretty much always outdated, because it takes more than a year to publish a book, so by the time one is published it's already out of date. So with Swiss-Prot we decided to put all that on a website, and we created ViralZone, which is a kind of dream resource for the virologist, with all the main knowledge for all viruses included in fact sheets. We launched it about 12 years ago, and it was a success pretty much instantly; people really liked it. And as a virus-focused resource, it seems logical that the COVID-19 crisis hit you with full force. What were the immediate effects on ViralZone? Yes, so in ViralZone we have of course had a coronavirus fact sheet for more than 12 years, which we have to keep updated. In January, when we saw the virus coming, we started to update it, and we built and added a lot of material, so as to have a special coronavirus resource in ViralZone now, which gives a lot of information. It's mainly 
information for scientists and people from medicine; we have no journalists coming there at all. But it was quite popular: ViralZone normally has about a thousand visitors a day on average, and here we hit a peak of 8,000 during those months, so people used this resource a lot. And in relation to your curation work, which difficulties did you encounter in the past few weeks? Yes, it was kind of difficult to access reliable information. For example, some information I still keep in books, which were in the office I could not access. Also, communication with colleagues was harder, because all the biologists, and especially the coronavirus biologists, were completely swamped with emails and phone calls, so it was difficult to communicate with each other; people had no time. There was also a lot of bad information circulating. I actually used Twitter a lot: by following the right people you could access very nice data. Also some podcasts by famous American virologists, who were very good and accurate and helped me a lot to focus on what should be done. On your kind of annotation, you were saying that very little is actually known about SARS-CoV-2? Yes, SARS-CoV-2. Actually, right away people were jumping to conclusions about it, but very little is known. We have some information on SARS-CoV, which emerged in 2003, 17 years ago. When it emerged, a lot of money was put into research on the SARS coronavirus, so between 2003 and 2005 a lot of people were working on it. But in 2005 the attention shifted to H5N1; the thinking was that the next pandemic would be an influenza, and most of the money switched accordingly. Some research on coronaviruses continued, of course, but it slowed down a lot. So yes, there has been a gap, and it's a pity, because in 17 years we could have learned a bit more 
about these viruses, and there are still things to explore. You said that some of the information you found was not so reliable? Yes, there were a lot of preprints, and I'm not sure bioRxiv and medRxiv are really a good way to do things. I mean, we made a publication in February on COVID and it was accelerated: within one week it was out and reviewed. All these non-reviewed papers were not really high-quality science. As you said previously, science normally goes slowly and steadily, and this was not the case; it was completely crazy. For example, in January, when we had the sequence, people made a 3D structure prediction from it and claimed the virus could not bind the human receptor; so there was a publication saying the virus could not spread human to human. And two days later it was clear in China that it was spreading human to human, and you know the rest. So there was a kind of publication frenzy, chasing the buzz. I can understand it in such a time, when people really want to push out information quickly, but that's the reason for the decreasing quality. I'm also thinking about the debate on whether non-expert scientists should pivot their research to COVID-19; sometimes the cost to the community really outweighs the benefit, because experts' time is wasted debunking these errors. Yes, yes. Actually a lot of people switched without having knowledge of the virus and started publishing on it, and I've been reviewing some papers by people lacking any knowledge of the virus, often making false conclusions, of course. So in a time like this you see the importance of quickly having a knowledge resource where everybody can access the data; I think ViralZone helped in that way. Thanks, Philippe. Maybe another track to follow is automated tools, because first, they are probably faster and helpful for dealing with the really huge quantities of information in such times; 
maybe they're also a bit more objective. So let's transition here to our last speaker, Fabio, whose group focuses on the extraction of information from textual sources. Do we hear you, Fabio? Sorry, my mic. Okay. So your group does not specialize in virology or epidemiology, but you chose to help with COVID; can you tell us how? Yes, thank you for inviting me. I'm really at the intersection of two communities: the bioinformatics community and the computational linguistics community, from which I originally come. The reason I joined the institute of bioinformatics a few years ago is that for many years I've been working on information extraction from the biomedical literature, and an activity on COVID-19 was a choice for us; as I said, it was a bit outside our original scope, but it was actually a natural choice, because we could use several of the tools that we developed in previous projects and that we regularly use in our work. In particular, we have efficient, accurate tools for information extraction from the literature and for analysis of social media trends. And coming from the computational linguistics community, it's interesting to see that there were strong reactions to this epidemic in many scientific communities. In the computational linguistics community in particular, many groups attempted literature analysis and many types of text or media analysis related to COVID-19. The National Library of Medicine in the U.S. 
prepared an automatic selection of literature related to COVID-19. Another well-known institute, the Allen Institute for AI, organized a challenge where different developers could compete on discovering information in the literature. They proposed several questions to the participants, questions like: what do we know about diagnostics and surveillance, what do we know about COVID-19 risk factors, and so on. The goal was to find this information automatically in the literature, or rather to develop tools that could do so: to evaluate the tools, not the answers to these questions. And the ACL, which is the topmost conference in computational linguistics, organized an emergency workshop dedicated to tools related to COVID-19. So all this provided a good intersection with the themes we develop in our work within the institute of bioinformatics. Can you present your projects related to COVID-19, and what are the main findings? We basically focus on two activities. One is literature-based discovery, where we analyze the scientific literature related to COVID-19 and try to extract interesting, relevant facts from it. The other project is about analyzing social media trends on Twitter, following how the COVID-19 epidemic influences the trends there. For example, we see which topics are mentioned by people and in the literature, and we apply sentiment analysis, where each tweet is associated with a positive or negative tendency. For instance, a curious fact is that the hashtag related to President Trump had, strangely, a quite positive sentiment for most of the crisis, until he said that thing about taking disinfectants, and suddenly the sentiment on social media dropped. You can see many interesting facts, and we have tools that allow basically anyone to inspect these things. You were stressing that overall the idea was not really to generate results, but really to 
provide the tools for others to use. Exactly: our purpose is not to analyze the results ourselves, or what other scientists find; our purpose is to provide tools that allow other groups to find what they need, so mainly to get quickly to the relevant information. In particular, the analyses we did related to COVID-19 are provided to other groups. For example, we send them to Europe PMC, the literature database maintained by the EBI in Cambridge; all our analyses are integrated into their text searches. We also distribute these analyses to other groups; for example, a group in Japan maintains a service that shows literature related to COVID-19. The slide you see now shows an interesting trend in publications about COVID-19: these are all the publications in PubMed related to COVID-19, by day, and you see there is a peak around early May of about 400 papers per day published on COVID-19. The troughs you see in the graph are the weekends; basically, on weekends fewer papers are published. But the average is impressive, and with that number of papers, obviously quality becomes an issue. So, related to what we were saying before, that the quality of the scientific production seems to decrease as the quantity increases: you said you did not try to evaluate how this affected your tools? Well, it is not our aim to judge the quality of specific papers; we only aim at finding relevant information. But in any case, the trends illustrated by these slides are revealing. We all know that papers were fast-tracked for publication during this emergency, because they were thought to be relevant and could not be held up by the regular process of peer review, and this fast-tracking probably affected the quality. Though we cannot judge individual papers, it is at least conceivable that some of them are of, let's say, modest quality, and this has been confirmed by recent events: the two papers retracted 
recently, related to data provided by Surgisphere. So the two papers were retracted, and they made big news; they were retracted because of poor data quality, but of course the fact that they were published fast and without checking the sources is an effect of the trends we see. Thank you, Fabio; thank you very much. We'll have to skip now to the next topic and try to fit everything in. From what was said, obviously, whether it's an automated resource or an expert-curated one, the purpose is the same for all these resources: to be useful, accessible, fast, and to support the global research effort on COVID. So this takes us to our fourth big theme today, which is collaboration: how the crisis affected the way people access data, collaborate, and share data. Having these learnings in mind could really help for the next major health crisis as well. So let's go back to Emma, who also raised this aspect of open science during the preparation. Emma, what can you tell us from the Nextstrain perspective here? Yes, so previously there was some hesitancy in sharing sequences. People often preferred to keep their sequences private until they had a publication, because, as we all know, those are the currency of science. So there was often a lack of sequences during epidemics, as people kept them to themselves, and of course that is exactly when the world in general needs those sequences the most. For COVID-19 we've seen a great shift in this, and I do think some credit goes to the first scientists in China who sequenced the genome and put it up online, openly available, immediately, setting the precedent for the epidemic. We've since seen a huge increase in the number of sequences being shared and in how quickly. As I said before, we have almost 40,000 genomes right now, though on Nextstrain, just for computational reasons, we show about 4,000 per 
build. There's only a delay of about a month between when sequences are collected and when they're put on the online sharing platform called GISAID, and then we get them on Nextstrain within 24 hours of that. But 25 percent of our samples we've received in less than 20 days, and that's amazing considering the history of how long it would normally take for sequences to go online. So it's really been quite inspiring to see how scientists have come together in the coronavirus era to really up the game on how quickly and how openly sequences can be shared, and I think it has shown what we can do when we work together for a common goal, rather than prioritizing what's best for us academically or career-wise. I do think the world will benefit from the open sharing we've seen, and I hope that as time goes on we'll continue to see scientists being open to sharing sequences for all kinds of viruses and pathogens as quickly and as openly as they possibly can. Thank you very much. And Philippe, you also had some nice experiences of collaboration, and also challenges; tell us about this, very briefly if you can. You have to unmute yourself. Yes, okay. So first, I was completely amazed by the speed of the research I've seen in the last months. I've been through several epidemics: SARS, H1N1, Zika, Ebola, MERS, and it was never like this, because people provided a lot of data and collaborated very quickly. For example, the first sequence was released on the 12th of January, so I started working on it right away to annotate it, and we found, for example, a potential role for integrin binding, so maybe the virus binds more than one receptor. We were able to publish it because there were a lot of sequences coming: after about 20 days there were a hundred sequences, and three models made very good predictions of a structure, which allowed us to actually publish the paper. So it was amazing, 
all this work, and I think the speed of the research on this virus is mostly thanks to bioinformatics and people sharing data. Then in UniProt we also had a problem, because we have a two-month release cycle, which means that what we annotated in January would only be out in April. So UniProt had to change its way of doing things; there was a reflection within the consortium, and all the people at the EBI and Swiss-Prot worked a lot to provide pre-release data, which has been very popular, and a lot of users have given us very good feedback as we update and modify these proteins. Great. And just from an end-user perspective, very briefly: right now, with the crisis, a lot of resources have sprouted everywhere, and it must be very difficult to get to the data itself. So what was done on the data integration aspect? Maybe you can just mention the COVID-19 integrated knowledge base that was developed. Yes, that's mainly the work of Jerven and many people at Swiss-Prot. It took about two weeks to develop; they gathered a lot of data from databases, and you can run SPARQL queries on it. It was released very recently, I believe one week ago, and it's really a good effort to put all the data in one place so that you can query everything at once. Cool, thank you. Well, this takes us to our last topic, which is not the least important one, of course: sustainability. What the COVID crisis revealed was really the importance of long-term sustainability for bioinformatics resources, which had to be there to address the needs. So let's switch to Emma again, to hear what she saw in Nextstrain. Yes, so we're incredibly lucky at Nextstrain, because we're supported by core funding from the University of Basel and from Fred Hutch, and this is really essential, because it means we've been 
well funded for the past few years, and this means we have been able to hire developers to work on our code, and we have lots of postdocs covering both the scientific and the programming side. Just in the past year we actually refactored all of our code, which made it much more modular and much easier to adapt to new pathogens. This was critical: we wouldn't have had time to do it during the epidemic, but it is exactly what let us pivot so easily from working on flu, Ebola and these other viruses to working on the new coronavirus, in essentially no time at all. Lots of other platforms were not able to move in such an agile fashion. So this shows that it's an investment: we now update the site at least twice a day, and that is something we've been able to adapt to. Of course it's been a lot of work, but it's been in the realm of possibility, and that makes a big difference; if we hadn't had that investment over the last few years, we could be in a very different position. So long-term funding is really instrumental to developing the tools and the science before you need them for a pandemic, because in a pandemic you don't have the time to do that development. It's also really important for getting more funding: we very recently got an emergency COVID grant from the Swiss SNF, and I think a big part of our being able to get it is that we could already show we have a lot of the infrastructure in place. Of course we can expand and do more things with it, but we don't have to say that we're developing this from the ground up, or that we need a huge code overhaul from the beginning. I do think that makes us more attractive to fund, because we're a bit less risky and we can show that we already have some results. So the take-home message is the importance of long-term funding, so that we have it to rely on, and we have the tools and
science we need when a pandemic hits.

Thank you very much. And yes, this issue is actually getting traction internationally right now, so that's the initiative we wanted to mention here. If you haven't heard about it yet, it's the Global Biodata Coalition, which is basically coordinating approaches among funders to enable the sustainability of infrastructures and resources for life-science data. Its setup is also coordinated by SIB, so if you want to know more, Julien has put two links in the chat, and there is also an upcoming virtual meeting that is open to any interested stakeholder, so have a look if you want to join. We'll switch now to the questions, ending on this positive note for resource funding. Before we do, I think we have just seen how many topics close to the heart of SIB members were affected by the crisis, which also fostered the creation of new resources that are here to stay, including for teaching. So, Julien?

Yeah, thanks a lot to all the speakers today. Now we are excited to hear what the participants think, so keep asking your questions through the Q&A. Questions can be to the speakers, but you can also comment on your personal experience; it will be possible to raise your hand once we have been through all the questions. Maybe now we'll switch to the mosaic view; I'm not sure you can activate your camera, but if you raise your hand we can allow you to speak. Maybe let's start with the first question.

Yes, there is one question from Ivan Topolsky from V-pipe, a question for Emma: do you consider using more FAIR sequence-sharing platforms, such as the EMBL-EBI European Nucleotide Archive (ENA)? GISAID, for example, has received criticism from various projects.

Yeah, so this is definitely a topic that we get a lot of discussion around, and it's something that we're
very aware of at Nextstrain. The main thing here is that most people have chosen to put their sequences into GISAID, so GISAID has by far (and I mean by tens of thousands) more sequences in it than any of the more open databases. There are a lot of reasons for this. There is criticism of how GISAID allows these sequences to be accessed, but it is also founded on a model where a lot of this is to protect smaller countries that have a real fear of their sequences being scooped and published before they have the chance. GISAID makes this dance a bit more equitable, in that you have to agree that you're not going to scoop someone else's sequences, and particularly for smaller and less well-funded scientific communities this is really important. But we also recognize that there are some problems as far as people getting data access and this not being open access; it's something that we're actively discussing and that we're very aware of. At the moment, though, the majority of sequences are in GISAID, and this is the main reason why we pull sequences from GISAID, but we are looking at, for example, integrating these with more open sequences from other platforms.

Thanks. Maybe now let's take the questions with more votes. A question to Philippe: could you distinguish the quality of preprints versus the really fast-tracked peer-reviewed papers?

Yeah, there are different levels of quality in these papers, of course. When a paper has been peer reviewed, even on a fast track, you have better confidence in it and in the conclusions written in it, whereas for things on medRxiv and bioRxiv you are not really sure you can rely on them. So I could annotate findings from fast-tracked peer-reviewed papers without any problem, but when it was on bioRxiv, you had to check for yourself whether things were really
reliable, and whether you could use them. Usually I would trust the groups I know do a nice job, and for the others I waited for the paper to be peer reviewed before annotating.

Thanks, Julien. We have another question, coming from Julien Racle in Lausanne, I think for Patricia: what about doing courses on site, but where the lecture part would be recorded and broadcast live, or afterwards on the SIB channel?

It's a very good idea, and we have been thinking about this too. Of course there are the technical issues that can arise when recording on site, and then the editing we may have to do afterwards. Another option we have been considering is to pre-record the lectures before a course and keep the live sessions for the practical side, the hands-on part really. So yes, recording on site could also be an option; we're thinking about this too.

There's actually another question for you, Patricia: did you notice that after these online streamed courses the number of people registering was higher, and what are your current thoughts about handling higher demand in case you keep offering online courses?

Well, the number of people registering has increased a lot, that's for sure. Many of our courses already had waiting lists, because we cannot accept everybody, simply because of the space in the room. Now, with online courses, it's no longer a question of the space in the room, but of how many people we can handle inside a breakout room or an online room. So we have many more people registering, and we have to figure out how to handle this in the future. There are many options for how to do it, but we have to consider that there isn't a huge number of people working in our group, so organizing all this takes time. For the teaching itself, we also count on the groups of the SIB
to teach as well, so we have to strike a good balance between all of that: the people available, the number of days in the calendar, and also the people helping out. But it's true that for the online courses, just to finish on that, the lectures that could be streamed live and opened to anyone drew a huge number of registrations; it was incredible.

Thank you. For the keynote that follows, people will connect to the same link, so while people are still connecting we can maybe ask one or two questions. Maya, do you want to ask a question? There was one for Fabio.

Yes. There is a general comment, I guess, from Emmanuel Boutet, who mentions that collaborative efforts should also focus on crops, energy and biochemicals, and not just on health and diseases; I guess it's a general observation. And for Fabio Rinaldi, Sven Bergmann asks why the curve of the number of papers per day resembles the curve of infections, with some delay.

So maybe I can just go back to that slide. Well, it's an interesting observation. It probably reflects the urgency of the situation: the pressure that people feel, not only in their daily lives but also in the scientific world, the pressure to publish, for the authors of articles but also for the journals and for organizations like the National Library of Medicine. And the delay is obviously due to the time it takes to write papers, submit them and get them accepted, which was much shorter during this time than usual: normally it takes months, if not more, to get a paper accepted, and during this epidemic papers were accepted in a matter of weeks or less.

Thank you. I think we have no more time for further questions. I just want to mention that there's a Slack channel where people can follow up on the discussions; if I'm correct, it's called
follow-up-discussion, so you can ask your questions to the speakers there if you have more. I would like to really thank everyone; there's much more that could be said, but time is limited. So thanks to the participants for their questions, and thanks a lot to the speakers for being here today.