All right then, let's get started. Thank you everyone for joining us for the wrap-up of the nf-core hackathon. It's the end of day three, and it's been a fantastically productive few days. We were just joking that everyone looks a bit less fresh-faced than on Monday morning. It's been a frantic few days of intense work, which is exactly how a hackathon should be. So a great effort everybody, and thank you for putting your hearts and souls into it; it's much appreciated. We're just going to have a quick round-up today, like we have done the last couple of days: we'll go over the different projects, discuss what everyone's been up to, what the big achievements are and how much everyone's done. Then Matthias is going to try and give us a few stats about the kind of outputs we've had over the past few days, and then Maxime will give us a quick overview of the social, which we'll go straight on to after this in the same Zoom meeting. In case I forget at the end, I'd like to say a huge thank you to everybody for joining, for spending time on this, and for coming together as a community. I've certainly had a lot of fun the last few days. It's been really good to catch up with everybody, with lots of discussions, and really fun just focusing on nothing but this for a few days. So thank you for putting your time towards this, it's very much appreciated. Right, so do you want to take over and talk about your project? Yes, sure. So I'll be introducing what we worked on these days in the pipelines group; I'll share my screen shortly. A lot of people joined the pipelines group this time. We focused on contributing to existing pipelines, creating new pipelines, porting pipelines to DSL2 and going through template updates. And you can see in this long list of group members that a lot of people contributed. It was a very fruitful hackathon.
So really happy to see people collaborating on the same pipeline there. For Sarek 3.0, today on the last day, Maxime finally merged PR #358 and started working on integrating the modules that were created during the hackathon into the pipeline. For the single-cell RNA-seq pipeline, Sangram was mostly taking care of updating the template, with some review help from Alex and myself, and the 1.1.0 release is really close. We're just waiting for the final tools release and hope we can still get there today. For hlatyping, Alex was working on adding the Yara module for the DSL2 conversion and also started converting the pipeline to DSL2. For the methylseq pipeline, Patrick was working quite intensively on it, updating the template and also porting the pipeline to DSL2. It's in good shape in general, though he's still working on it. The pangenome pipeline: Simon was working on it this week together with Andrea Guarracino, and they added several subworkflows and also did the template updates for the pipeline. For ampliseq, several people were contributing this time; off the top of my head, Daniel Straub, Emily and Daniel Lundin. There were some template updates, and the DSL2 version is quite advanced now, I think. They've also been working on updating and unifying the GTDB reformatting, parallelizing the CI test profiles, and moving the various Cutadapt steps into a subworkflow as part of the pipeline. For the bactmap pipeline, there was an existing pipeline in another repository, but this one was created new during this hackathon, and many different people contributed to it, Anthony Underwood and Alexandra mostly. They've added several steps today, FastTree among them, made sure that all versions were recorded, added genome size estimation, and several other developments today. So this pipeline has really progressed during the hackathon.
For VISL magic, there were not a lot of developments today; the last developments were mostly yesterday. For eager, James merged the template updates today together with Maxime and Thiseas, and fixed a MultiVCFAnalyzer issue. For the mag pipeline, Sabrina was mostly fixing some bugs and continuing on the DSL2 transition and the GTDB-Tk classification. For the diaproteomics and mhcquant pipelines, Leon was doing some further work there today and also contributing to the metaboigniter pipeline, and I think he was working on a couple of PRs that hopefully can be merged tonight as well. For CRISPR-Quant, some people were working on fixing some issues with the help of Maxime and on getting the pipeline to a first successful run; it's going to be a new one. And for circrna, Barry was working on the pipeline quite intensively these days, and today he was fixing some issues with some of the input parameters. Yeah, and I have to say it's been real fun to be part of this team this week, and thanks to everybody that contributed. Brilliant. That's such a lot of material. I feel like we're not quite doing it justice by trying to get you to go through all of that in such a short amount of time, because it's a huge amount of work. It's really impressive: quite a few pipeline releases and lots of template updates and such, which is really good for the long-term maintenance and health of the nf-core community. All right, who's next? Alex, you're up. Okay, look at that. I've been fortunate to work with the documentation team during these three days of the hackathon. We were also quite a bunch of people, actually. At the beginning it didn't look like we were going to be many, but it definitely turned out that we acquired quite some people who then joined the team.
So first off, one of the major points that we had on the list was to convert parameter documentation to the new JSON schema, to have that available for more pipelines, for those that did not get the update yet. I did that myself for hlatyping, and Sangram did it for the single-cell RNA-seq pipeline, which will hopefully help us later on to also get to the DSL2 version. Tour was working on methylseq; there was already one JSON schema update in the dev branch, but nevertheless they got that sorted out, I think, together with Patrick and Phil. Hanke was extending two sections in the introduction of the nf-core config docs, which will hopefully make that a bit clearer, especially to beginners. Some people were working on tutorials. James added a new tutorial section to the nf-core website under the usage and developer documentation, and then people were following up on that, contributing and working on adding actual tutorials to this section so that it's filled with some real content. James had already started on Tuesday on a step-by-step tutorial on how to write institutional profiles. Hanke worked on a tutorial for adding institutions to nf-core, which was merged just a couple of hours ago, so that's already there now. And Francesco was working on a DSL2 module tutorial with some fancy screenshots showing how to do this in a step-by-step approach. He's still working on it, but what I've seen at least looks really promising and helpful for others, who will then potentially have it easier to actually do that. And then a large fraction of the work that has been done in the documentation team was on the bytesize talk transcriptions. Not all of the bytesize talks that we have at the moment have been worked on, but a fair amount of them have, and people were starting to transcribe these to text.
There is some automation available to do that from YouTube, but it has some funny quirks; for some names or special words it really doesn't work that well, so it has to be manually curated. That turned out to be fairly time-consuming, as far as I heard from Renuka, Jim Ho and Johannes at least. But nevertheless, we have achieved quite a number of them now. At least the first four are already merged, I think people were working on transcribing the 6a and 6d transcripts as well, and the fifth bytesize talk has already been worked on too. So we're getting close to having transcriptions for all of them very soon, and all of them have been added to the website as PRs, or will be soon. We also had additional achievements during the documentation team work. Some Singularity documentation was added by Hanke, plus some additional paragraphs in quite tiny PRs, but clarifying certain parts. There was some work by Ramon on adding the profile info to the pipeline configuration page on the preferential usage of containers, closing some issues that had been out there for a long time as well. And James updated the template sync docs on the website following the new branching procedure that was recently introduced, so there's new stuff coming in from there as well. It's quite amazing how much has actually been done. To be honest, I didn't expect that much, but the team was really putting a lot of effort in. Thank you everyone involved. Yes, thank you for that. Apologies for the small children having a complete meltdown outside my door, so you can hear some background screaming going on. Right, I will quickly share my screen and we'll go to the framework group. So it's been a bit of a spread-out group; I think it's been the smallest group, probably, by far.
And most of us working in the framework group have also been heavily involved in several other projects as well; it's kind of the nature of the framework tools, I guess, as we're involved in everything that's going on at once. So as you all know, I've talked about it already: for nf-core/tools we had a release last week, then we collected lots of bugs on Monday and did a release yesterday. And today we've collected some more bugs; in fact, we found some bugs that I introduced yesterday, and some new ones. So that's a bit disappointing, but we've got another patch release now, 1.13.3, which we were about to do just before the talks, but the GitHub tests have slowed right down again because you guys in modules are pushing so many CI tests. So that will probably go out tonight. It fixes a bug which was there all along since the 1.13 release: if you run the linting with --release, which is done when you try to merge to master or push your release, it does some extra tests, and the automated CI wasn't running those, and there was a bug in there. So that's been wrapped up, and for those of you wanting to release pipelines it means you'll be able to get green ticks, plus a couple of other little things as well. So a bit more bug testing, a bit more fixing. I never really got to build much new stuff, which was a bit of a pipe dream, but that will happen at some point. The template sync, we'll see, is hopefully fixed now; we've fixed it three times in a week, I think. So hopefully the next template sync will actually be synced, but it's not even going to run this time because we haven't changed anything in the pipeline template. So you won't get automated pull requests opened on this release, because it's purely the linting code and such.
Matthias has been working hard on the website and databases, scraping stuff from APIs and writing Slack bots and all kinds of things, which I'm quite excited to hear about, but I guess you're going to go into more detail on that in a second, so I'm not going to talk about it too much now. Nathan, I don't know if he's still with us, has been helping out a lot talking about PEP, and yesterday wrote a new tool for us, basically, to take in PEP format and spit out a flat YAML format which we would be able to ingest into pipelines. That's quite exciting. We also talked a lot about iGenomes yesterday, and he acted on some of my suggestions, basically, so now we're hopefully going to be able to have a system where we can write nf-core config files in pipelines and use a refgenie URL which points to a static asset, and that will resolve to an S3 path. That S3 path should be relatively stable, but might change over time. This way, by using the nice short refgenie asset identifiers, it becomes much, much easier to maintain. So that's super cool: basically, all the backend stuff is now coming together for refgenie integration, so we can hopefully start the transition from AWS iGenomes to refgenie in the near future. Yeah, and I had a tab open about adding refgenie support to nf-core/tools for the entire three days. I never actually got to work on it, but it will happen, I promise, soon. Otherwise, personally, I spent most of the day reviewing other people's pull requests, to be honest, and I know several others in the group were doing the same. So lots of template sync pull requests and things like that. And I need to take my hat off to Patrick for getting the methylseq pipeline ready for release. I've been trying to do that since November, I think, and he's finally figured out the problem with the software installations, which I had thought was a completely intractable problem.
So serious kudos are appropriate there. Yeah, great. That's the framework group, I think, unless I missed anyone. Harshil, over to you. Yeah, I would just like to say that everyone has been awesome these past three days. It's been nonstop. We've learned a lot about DSL2 modules, about what we're doing right and what we're doing wrong, and hopefully you guys have too, in terms of the outlook for how we're going to be attacking DSL2 and trying to really make this work. And it's just been amazing watching all of you, as Gisela summarized in the pipelines group, stitching these pipelines together now, end to end. Anthony started on Monday saying we've got these modules, let's get a pipeline together, and to see the whole reusability aspect happen in the space of three days, when it took me, I think, two or three months to get the first RNA-seq pipeline out by myself, is just immense; now you can really see the benefits of getting something like that done in three days. Seeing the successful Nextflow terminal messages is just really, really cool. So yeah, well done everyone, and a huge effort that everyone has put in over the past three days. To put it into perspective, something like this would have probably taken weeks, if not months, to do. And yeah, the DSL2 group was probably one of the largest. We had some people dropping in and out of discussions and other stuff. But I think over the course of a few days, and I'm hoping Matthias has got some numbers on this because I don't, we've probably added about 30 to 40 modules. I mean, looking at the projects page: we added 15 on Monday, 15 on Tuesday. We've only had two today, but a lot of that is because we just physically can't review them, as the tests aren't running. So eventually we'll get to those. There are 15 waiting there and another 21 in progress.
So it's just been crazy the past few days in terms of adding modules. We've also done quite a lot of other stuff as well. I won't go through each individual module because there are just too many of them. But yeah, thank you everyone that has stepped up, got involved and decided to help out with this; hats off to you. Other stuff that we've done: we've had a lot of conversation about test data, because we would like to standardize it as much as possible, but anyone that does bioinformatics knows that standardizing test data and file formats is probably one of the most difficult things to do. Still, we're doing fairly well. To put it into perspective, we've got a very small set of test data, on the order of 100 files, that is now being used across about a hundred different modules. So it's being reused and it can be reused; we just need to be careful how we maintain it, and make sure we don't start duplicating data all over the place just to factor in edge cases and such. It's probably better to just amend existing data and rerun the tests, so that we keep a nice, finite test data set. There's also been conversation about writing a specific workflow to generate the test data. We've had issues where BAM files don't have read groups, for example; Carlos brought that up, and he's added those manually, but it would be nice to be able to regenerate all of the downstream data and other files as well. Maybe that's one for the next hackathon, or maybe earlier, let's see. So yeah, a lot of discussion about test data. Rika has been adding some test data for human data now and making progress with that. And I restructured all of the existing test data this morning to be generic enough to also work with data from other platforms.
So we now have an Illumina folder, we have a Nanopore folder, and we have a separate genome folder with just the genome files, to keep things separated. And hopefully we can maintain a similar structure when we add the human data as well, so it standardizes a lot of the paths and the file naming and all of that. Again, it's just something that has really come out of this hackathon, which wouldn't have happened anytime soon otherwise. One other thing that happened today in terms of test data: let me just show you the structure while I'm here. So this is what the structure looks like now. There's genome data, everything is named in a standard way, there's Illumina data with your Illumina-specific files like FastQs and BAMs, and we also have a specific folder for Nanopore as well. But one of the really cool things that happened today, and there's been a massive sprint which I'll come back to in a second, is that we now have a single config file with all of the test data listed in one place. You only have to write this file once; obviously, whenever you add a file to the test data repository, you include the file here. This is how we've been dealing with our iGenomes data, and also how I ended up writing the modules.config that we're using with DSL2. It's literally just a map that you can access. So we have it all listed here now, and as a result of me having to go through all of these modules manually this morning and changing these paths, we got this working literally this afternoon. So now, in the actual test files, you can just access these paths as a map, which means we only have to change a file's location in one place, which is amazing. So unless we change the keys, moving a file just means changing the path in one place. So that's been huge.
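To make the idea concrete, here is a rough sketch of what such a central test-data config might look like, in the style of a Nextflow config map. All keys, file names and the base URL here are illustrative assumptions, not the actual contents of the repository:

```groovy
// Hypothetical sketch of a central test-data config (e.g. conf/test_data.config).
// Every test file is registered once, under platform-based keys.
params {
    // Illustrative base URL for the raw test-data files
    test_data_base = 'https://raw.githubusercontent.com/nf-core/test-datasets/modules/data'

    test_data {
        'illumina' {
            test_paired_end_1_fastq_gz = "${params.test_data_base}/illumina/fastq/test_1.fastq.gz"
            test_paired_end_2_fastq_gz = "${params.test_data_base}/illumina/fastq/test_2.fastq.gz"
            test_single_end_bam        = "${params.test_data_base}/illumina/bam/test.single_end.bam"
        }
        'nanopore' {
            test_fastq_gz              = "${params.test_data_base}/nanopore/fastq/test.fastq.gz"
        }
        'genome' {
            genome_fasta               = "${params.test_data_base}/genome/genome.fasta"
            genome_gtf                 = "${params.test_data_base}/genome/genome.gtf"
        }
    }
}
```

A module test can then look a file up through the map, for example `file(params.test_data['illumina']['test_paired_end_1_fastq_gz'])`, so if a file moves, only this one config entry has to change rather than every module test that uses it.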
And literally, I think at one o'clock this afternoon I had this ready, we had a quick video call, and a bunch of people put their hands up to update the modules. We've updated almost a hundred modules in the space of an hour or two, adding and fixing all of this test data, which again would have taken me or someone else at least a day to do properly by themselves. So thank you everyone for putting your hands up and helping out with this. We're also going to be working on the CI tests for the modules; Edmund is looking into that at the moment. We've updated the docs as a result of all of you battering the modules, so we've learned a lot from that and updated all the documentation. And yeah, I think that's it, but I can't thank everyone enough for helping out and contributing. Just watch this space and get involved; we'll still be on the modules channel. It's fast-moving in terms of how this is developing, but the more people that contribute and the more eyes are over it, the better it will be, and the quicker it will get better. So yeah, thank you. Yeah, I'd just like to say as well: one of our hopes for this hackathon was really to put a spotlight on DSL2 and on the module system. It's something we've been planning and discussing for a couple of years now, and I've been quite cautious about it, saying we can't go until we're ready and that kind of thing. But at some point you just have to take the plunge, and you need that critical mass and that community momentum to actually push us over from DSL1 to DSL2. I think this hackathon has been instrumental in providing that momentum. That list of available modules, I'm stunned by it; I've been watching it grow over the last few days by running nf-core modules list, and I need to rewrite that function now because you can't fit it all on your screen. So that's a really good sign. I'm really impressed, and I think this is going to be huge for nf-core, so yeah.
Right, we're running out of time a bit, so Matthias, I'm going to hand over to you. Yes, thank you. I'll quickly share a few statistics about what went on. As Phil mentioned, sooner or later this data will actually be live on the website for everything, but for now this is just for the hackathon, and handmade. So let me just find the screen. It's not coming up like this, sorry for that. So, what went on these past few days? On Slack, we had 1,800 messages, with the leading loudmouth being Phil, followed by Michelle, then a big drop to James, and then, yes, it's not just us nf-core people; there are also some other people involved there, which is very nice to see. And for the emoji usage, this is just the reactions, not the emojis in the text yet. The number one is thankfully the thumbs up, so it looks like a very positive atmosphere. These are just the messages inside the four hackathon channels, so this is just what happened concerning the hackathon. And yes, it's actually more plus-ones, because with all the different skin tones counted together, the plus one would be even larger. And where did all this happen? You can see here that modules, of course, was always there, always the biggest. Also, Slack got angry with me about my data scraping and limited my access to the API this morning, so I only have data for this until this morning, just Monday and Tuesday. One interesting thing I find here is that it looks like people at the beginning were jumping towards modules, but also to documentation, kind of like the entry points to this hackathon, while the pipelines channel picked up more on the second day. And something on the website channel happened too; I don't remember what that was. I think it was discussions about how we actually want to have this on the website. I just put the website channel here as a reference for a normal, low-to-medium-traffic channel.
Yes, and then one more stat about what happened here. Here are the word clouds of the most used words in the four channels, and I'll give you a bit of time so you can guess which channel is which. It's nice to see that there are some clear keywords by which you can quite easily identify the channel. So now I'll resolve it; I should maybe have kept it for the quiz, but that's fine, next time. The one in the middle, where the most common word, or rather character, is a pipe character, and I know that is because of posted logs and messages, that was the DSL2 modules channel. Then to the right, we have the pipeline channel. On the lower left, it is the documentation channel. And on the right, it's the framework channel, where there was the PEP and a bit of database talk happening. Okay, and what happened on the GitHub side? Here I don't have the overall figures for all the different pipelines, unfortunately, but we can capture some activity based on some of the bigger repositories. And as Harshil mentioned, in the DSL2 modules there was a lot going on. Actually, I made this screenshot 15 minutes ago, and in the meantime there are four more PRs, so people are still pushing stuff there. So yeah, 38 PRs closed, 23 open, 7,000 additions and 144,000 deletions, with the top committers being Alex and Harshil, but also, of course, other people, which is nice to see. This is also just for the last three days. And in a similar way, at a bit smaller scale, for the documentation, I took the nf-core website repository, because a lot of the documentation and the transcriptions all end up in the website. There were 22 merged pull requests, one still open, and 701 additions and 86 deletions.
And it's very nice to see that, apart from James, of course, the other three are all kind of newcomers, all with the work on the transcriptions and other documentation parts. So thank you very much for your contributions there. And for the framework, I just looked at the tools repository, where we have 22 merged PRs, with, yeah, Harshil leading by a lot. And there were also a lot of releases there. You know why Harshil's leading there? It's because he kept opening the pull requests to master for all the releases, even though he didn't do any of the work in those pull requests. Yeah, I was guessing so. Smart. You can always trick the metrics if you know how. Yes, and then for pipelines, as already mentioned, there was work on a number of pipelines: 16 issues were closed and 13 PRs were merged. Thank you very much for all your work here. It's very nice to see that we have some concentrated efforts, but also that all the smaller stuff is still getting its share of work during such a hackathon. So yeah, I look forward to seeing this kind of data on the website in the future. And with that, I think it's over to James. Is it? No, Maxime, sorry. Sorry, Maxime. Go on, Maxime, tell us about the social. We can't hear you, no. He's just doing it for the bingo. No, while you're trying to sort that out, maybe I'll just wrap up quickly, because the social is going to go straight on anyway, so Maxime's intro to the social will fit nicely. So again, thank you everyone for joining. The hackathon ends now, but hopefully your contributions don't. It's only a three-day hackathon this time, but maybe let's use the next couple of days to try and wrap things up. Keep an eye on your inboxes; there'll be more reviews and more tidying up going on. So try not to abandon the work you've done over these three days straight away.
And of course, we hope that you'll keep coming back as much as possible and join us not just for the next hackathon, but also to continue the contributions, because I love seeing these stats; it's a sign of a healthy community. Maxime, can we hear you? No. Can you say what you were going to say? I'm back now, so I can maybe take over. Perfect. Yeah, so we have the social next. We have a quickfire quiz and also an escape room, plus just general chatting. So if you're interested, stay on. Yeah, that's basically it. Who won the bingo today? No one. There was no one on the leaderboard, as far as I can tell. Am I back? Yes, he's back. Yes. So do you want to lead with the...