All right, let's kick off. Thank you, everyone. We'll just go into the final wrap-up now of the nf-core hackathon. First off, I just wanted to say again a massive, massive thank you to everyone who has joined us these three days. I have had a really, really fun time, despite being quite busy. I've had a brilliant time, and I hope everyone else also goes away feeling like they've had fun here, and hopefully made some new friends and connected with some old ones. [Section unintelligible in the recording; from context, the speaker is thanking the organisers and helpers.]
I'm sure I've forgotten someone as well, so thank you to everyone I've forgotten. Finally, thank you to AWS for sponsoring our dinner last night; that was a lot of fun. I want to quickly remind everyone about the nf-core mentorships again, just in case you're tuning in now and missed this earlier. The nf-core mentorships are a scheme we're running with our Chan Zuckerberg Initiative grant funding, where we're looking for pairs of mentors and mentees. We're really trying to push the inclusivity of the community; it's through our diversity and inclusivity grant, through the EOSS programme with the Chan Zuckerberg Initiative. If that sounds at all interesting: if you want to be a mentor, it's a paid position, so you're compensated for your time; it's free for mentees; and it's pretty relaxed. You can do it however you want, whatever works best for your pair, but it's about two hours a week for a few months. The things you can work on as a mentee are totally up to you, so it can be anything from the very beginner stage of setting up a config on your cluster, right through to developing nf-core pipelines. We're really interested in bringing in people, especially those who would otherwise find it difficult to attend events like this. If that sounds interesting, please do have a look and read the blog post about what everyone did in the first round. We have double the number of people this time: 10 pairs, so 20 people. Please do drop your name onto the application form at the bottom. I should also thank the Chan Zuckerberg Initiative for funding the hackathons as well. They fund lots of stuff, and they also funded things like all of the nice mugs and the [unintelligible], which you guys are taking home. So don't forget them. Right, that's enough from me. I'm going to have a talk on Friday as part of the summit, where I'll talk a little bit about the big-picture metrics of some of the things we've done during the hackathon.
But for now, we're going to dig into the details of your groups and see what it is that you've been working on today. I think we're going to kick off with the modules team. I'm going to go and refresh the page. Yeah, thanks a lot. Exciting developments in the modules team. First of all, for those who weren't following us until now: in the modules team we were developing new modules, fixing module tests, and improving current modules in the nf-core/modules repository. We had about 30 people contributing to the modules team so far, so many that they don't fit on one slide but four different slides. So thanks to everybody who contributed. First of all, some quick stats. In the nf-core/modules repository during the last three days we had 27 merged pull requests, 24 newly opened pull requests, 24 closed issues, and 35 new issues. So lots of work done. We should add that this is partly because sub-workflows are now also part of nf-core/modules, so it's a bit of a mixture between the two teams, but still worth mentioning this effort. General updates for today: Alison was reviewing some PRs and started a discussion of the task.ext.args that are used in modules. That was a wonderful example of cross-team collaboration, because Jennifer then updated the documentation to add this part there. So if you're now wondering what best practice is for passing parameters, with value channels or using ext.args, you can find this in the updated documentation. As for new modules, we've had a bunch of them: AMPcombi as a new module contribution, the ABRA2 module, the RepeatScout module and several sub-modules of these two, pRESTO FilterSeq, Fulcrum Genomics, the fqgrep module, and several module fixes as well. Today: a FastQC fix to drop the endedness check, several updates in the FastQC module and Falco, and fixing the smoove container so that we can also use it in a module.
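For readers following along, the task.ext.args pattern discussed here is set in a pipeline's module configuration; a minimal sketch (the module name and option are chosen for illustration, not taken from the talk) looks something like this:

```groovy
// conf/modules.config — sketch of the ext.args pattern for passing
// extra command-line options to a module without a dedicated parameter.
process {
    withName: 'FASTQC' {
        ext.args = '--quiet'
    }
}
```

Inside the module, the script block then picks this up with something like `def args = task.ext.args ?: ''`, so pipeline users can tweak tool options from configuration alone without editing the module itself.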
Apart from that, Christian also ran some tests on how to run pipelines and modules with Docker containers for people who have the new Apple M1 chip. Here are the results of some of the tests that he did with the BLAST container: using the `--platform linux/amd64` flag is a provisional fix for this, but it comes with some performance degradation, so it takes a bit longer to run, but it runs. Maybe it could be an intermediate solution for how to get this to run. Thanks everybody, that was the update, and we can now continue with the updates of the sub-workflows team, if you like. Hello everyone, how are you feeling? Good? Tired? Want to drop on the floor? Want to do more? Want to stay for more? More is coming this afternoon and tomorrow. So sub-workflows were, as you guys know, one of the major themes of this hackathon. It's something we have been putting off for a while, mainly because we needed to figure out how to implement sub-workflows. We've dealt with modules; they work, we know that. We've got lots of really cool tooling to help you deal and interact with modules, but sub-workflows were a completely different challenge, because now you're not only dealing with modules, you have to install a chain of modules. Then we needed to build all the tooling on top of this and attempt to standardise as much as possible at the same time, so everyone else can start adding these sub-workflows and sharing them across pipelines. It's been an immense effort over this hackathon. What we've actually achieved has completely surpassed my expectations, and now I think we've got a really nice, solid foundation for what we want to take forward with sub-workflows in the future. Thank you everyone for that. It's been amazing, and the online guys as well.
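The Apple-silicon workaround mentioned in the modules update boils down to asking Docker to emulate an x86-64 image. A dry-run sketch of the idea (the image name and tag are placeholders, not ones from the talk):

```shell
# Forces x86-64 emulation on Apple M1 machines where no arm64 image exists.
# Dry-run helper: it prints the docker command it would run instead of running it.
run_amd64() {
  echo docker run --rm --platform linux/amd64 "$@"
}

run_amd64 quay.io/biocontainers/blast:'<tag>' blastn -version
```

Emulation via `--platform linux/amd64` works, but it is slower than a native image, which matches the performance degradation mentioned above.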
There are a number of people who have been contributing to sub-workflows on various fronts: some on documentation, some on adding new sub-workflows, updating sub-workflows, and some also on the tooling. This was day one. Let's skip this. Here we go. Francesco was working on the UMI consensus sub-workflow. There's a bunch of stuff that he's been fixing and ironing out with that as well. He's just waiting, I believe, on some modules to be merged first, and then he's going to update that sub-workflow. Good progress on that front. Quentin fixed the fastq_align_bowtie2 sub-workflow; that was basically because it was breaking some downstream tests. The way that we're doing the CI tests, I should say, is that we want to make sure that everything on nf-core/modules is always working in terms of the tests, and it's quite important that whenever you change a module you're also testing the sub-workflows downstream, just to make sure that nothing is broken. Edmund is partly responsible for that: his awesomeness with pytest and GitHub Actions has allowed us to add these additional tests to make sure that everything is always working. This was merged in because it was breaking something else downstream, so that's in now. Picard MarkDuplicates: a mammoth effort from David. We eventually also had to update the upstream module, because we had to factor in input channels and change those. I believe David has now also added CRAM support to Picard MarkDuplicates, and for that we needed a FASTA file, the reference genome. He's updated the upstream module to take a FASTA file as input, then reinstalled it and added the logic to the sub-workflow to now test for CRAM as well.
Camille was working on various things, loads of different sub-workflows, and one of these came out of the chat we had yesterday about naming conventions. Naming sub-workflows can get quite hairy, because you're now chaining together loads of different modules; what's a sensible way to actually name these? The format we eventually decided on is that the first entry, as you can see, is the file format, so whether it's BAM, VCF or FASTQ, and that way you can quite easily search for sub-workflows that take standard input files like that; then a list of operations separated by underscores; and at the end, potentially the tools that you're using in the sub-workflow, so for example fastq_align_bowtie2. This will be quite a nice way, at least to start with, to give us a bit of flexibility. The reason the tools were put at the end is that you can have multiple tools to sort BAM files, for example, and that way you can still find similar sub-workflows when you're searching, and just maybe change the name of the tool at the end to make it easier to find other sub-workflows that you may want to use for this sort of thing. And yeah, we're going to build out functionality to make it easier to use the tags and stuff that we have in the meta.yml to find these sub-workflows, potentially on the website. That's one for Matias, I believe. Yeah, so Camille renamed this sub-workflow based on the naming convention we decided. He also updated the UMI-tools sub-workflow, and that got cut off, I believe. Yeah, so there were two more; Camille was very busy. I didn't do much coding, so this was probably my only contribution throughout the hackathon.
This particular sub-workflow is quite a complex one, because RSeQC has lots of different tools that you can run, and this one will hopefully make it a lot easier. It's a good example of a sub-workflow, in fact, because all of the various RSeQC tools and commands are now included as part of this sub-workflow: you can just switch out the ones you want to use by providing a list of the ones that you want to run, and it should just take care of it. But yeah, I apologise, my contributions have been dire and embarrassing. Edmund added a new sub-workflow issue template to the modules repository as well, right? So whenever you're submitting an issue for modules, you should be prompted to fill in the appropriate details. We are also working on some other really cool stuff now. What we're trying to get to eventually, hopefully, is testing as much as we can in any given workflow. At the moment we're mostly testing stuff on nf-core/modules, and then when you install that in a pipeline, you sort of have confidence that it works because it's been tested somewhere else. But there are a bunch of local modules, local sub-workflows, and also workflow tests. We're not really testing at the level of the workflow properly yet; we're just seeing whether the workflow runs or not. But what would be nice is if we were able to also have MD5 sums and other logic to list the full outputs of the workflow. That way you get two things: you can test whether the workflow is producing the same files at the end of the run, and also, if you make changes to the way you're publishing the files or something like that, you can make sure that when you make a change, that change is actually the right one to make. So you'll see a difference, a test will fail, you've changed a file, for example, maybe a bug.
If that file doesn't exist when you re-run the tests, then you know something's wrong, and it allows you to fix it. At the moment this is all done manually, but it'd be really nice to eventually get to the point where we're testing at all of these different levels in the pipeline. I think there's some other cool stuff that we decided to implement where you only test what's changed rather than testing everything, just to be nicer to the planet basically, and not run crap-loads of actions for everything. That's a work in progress, but hopefully once we've got a prototype working in RNA-seq, we'll push it to the template and we can start adopting it everywhere else. One of the biggest things that has come out of this whole thing is these new, really cool commands that we can use to create sub-workflows and to create the test YAML, which was painstakingly done by hand before. For sub-workflows that's even more complex, because a sub-workflow is a chain of modules that produces a load of output files. So what this create-test-yml command does is automatically create the MD5 sums and the test YAML you need for that sub-workflow, based on whatever its output files are. You don't have to manually go in and create this file; it will all be done automatically for you. That's going to be a major plus as well. We can also now install sub-workflows in pipelines; it will update the modules.json wherever it needs to, and that's working as well. We can list sub-workflows, and we can get info, and slowly, slowly I think we'll keep appending to these, but these were the higher-priority ones that will really allow us to start interacting with sub-workflows, and we smashed it. So yeah, great work everyone, especially Julia. Thank you. I'm surprised you didn't tell me to do one at some point.
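Taken together, the tooling described here maps onto a handful of nf-core/tools subcommands. A sketch of a typical session (the sub-workflow name is just an example, and exact command names and flags may differ between tools versions):

```shell
# Scaffold a brand-new sub-workflow in a clone of nf-core/modules
nf-core subworkflows create bam_sort_stats_samtools

# Auto-generate the test.yml, including md5sums of every output file
nf-core subworkflows create-test-yml bam_sort_stats_samtools

# Inside a pipeline: install a sub-workflow (also updates modules.json)
nf-core subworkflows install bam_sort_stats_samtools

# Discover what is available and inspect a specific entry
nf-core subworkflows list remote
nf-core subworkflows info bam_sort_stats_samtools
```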
And so yeah, we used the tooling to create all of these sub-workflows that I've been talking about, beyond the existing ones, and now we've also managed to handle all of the sub-workflows that we had in the RNA-seq pipeline: we've pushed them all to nf-core/modules as part of this hackathon, and we've also installed them all directly back into the RNA-seq pipeline. So it's a really nice finish, completing the circle in terms of what we've achieved at this hackathon. We're just still fixing the tests on that now, but it works. And yeah, thank you, you guys have been amazing. It's been awesome being around everyone rather than communicating via Slack and stuff; there have been funny little conversations and laughing here and there with everyone, and yeah, you can't really beat an in-person event. Those of you who are online, you're just going to have to make an effort to come and see us at the next in-person one. So yeah, thank you everyone, and please don't be a stranger. We need you. All of this stuff is developing; it's ongoing work. I don't think we'll ever be out of a job. We're doing important work. Come and find us, be around us on Slack, and yeah, we'd love to have you contributing and being a part of this community. It's just phenomenal how quickly we're growing and the work that we're doing. So yeah, thank you very much. Hello, everyone. So how are you guys feeling again? Well, especially after last night, I imagine this morning was tougher. So the documentation team, as you can imagine, was very productive. I'll skip a couple of slides to reach the interesting ones. The hackathon summary is that we have 50-plus pull requests: 48 merged pull requests and three ready for review. The most important part is that the contributions were spread across various repositories: the website documentation, the tools documentation, even the configurations.
And of course the GitHub README that we have; that's the basic introduction for anyone who comes to the nf-core GitHub page. But more importantly, I would also like to highlight that people who have learned the skills by contributing to the documentation are venturing out and even starting to contribute to the main Nextflow documentation. So I'll mention it briefly again: we have a groupKey example. This is one of the more nuanced operators we have in Nextflow, and an example really should have been there, but for some reason we kept putting it off. So we finally have it. It would be great if you could review the PR or even add suggestions there; that would be awesome. Moving on to individual contributions: Louis has made the biggest number of contributions, nine contributions. It's important to highlight that he is not attending the hackathon in person, he's online. So the group had a good mixture of people in person as well as online, and we see a range of contributions from everyone. We have MLN with seven contributions; Margarita, you might recall the wonderful dance last night. So there are wonderful contributions on all fronts. And I'll move to other contributions. Pauline, specifically, is the one responsible for the groupKey example, so this is wonderful; thank you Pauline. And then I'll hand it over to Marcel. Hello everyone, I hope you guys are still having fun. So I have three main points we highlighted that I think are important to mention about the documentation group, and I will not respect the order. It was very interesting to see the increased confidence in the contributions throughout the days. At the very beginning, people were still a bit uncomfortable using GitHub or with the documentation; they were kind of lost about what to do, and at the very beginning we helped them start with some simple issues and do some simple PRs.
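The groupKey operator mentioned above pairs with groupTuple; a minimal sketch of the pattern (channel contents invented for illustration, not taken from the talk) might look like this:

```nextflow
// groupKey tells groupTuple how many items each group will contain,
// so a group can be emitted as soon as it is complete instead of
// waiting for the whole upstream channel to finish.
Channel
    .of(
        [ 'sampleA', 3, 'chunk1' ], [ 'sampleA', 3, 'chunk2' ], [ 'sampleA', 3, 'chunk3' ],
        [ 'sampleB', 2, 'chunk1' ], [ 'sampleB', 2, 'chunk2' ]
    )
    .map { id, n, chunk -> [ groupKey(id, n), chunk ] }
    .groupTuple()
    .view()
```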
But it was very nice to see that every day they were gaining more confidence with GitHub, the command line, and all the repositories, not only the website, and they kept progressing to the point that we had contributions to issues that were created during the hackathon by all the groups. It was very nice to see this confidence growing throughout the days. The online and offline engagement was very, very good. One example is what was just said about Louis: he was online and he managed to make a lot of contributions. Gather Town helped us to interact with everyone, and one of the leads of this group gave a talk at the beginning to motivate the people in the group and show how to use the project board on GitHub and all these things. So the engagement was very, very nice. In person, of course, it's better, just like they said; you have to come next time to see the real experience. It's much better, and it's much easier to know when someone needs help. So it was expected that we would have this by the end, but we were surprised that people who were remote were also able to make a lot of contributions. I think what I'm going to say now is what Harshil said, and I think every leader is going to say it: it doesn't have to stop here. I hope this confidence keeps growing and you feel more comfortable contributing to other repositories, not only documentation but also modules or sub-workflows. There's a lot of work to do, and we are still going to be there. The distance will still exist, but just keep talking to us, ask questions, and email. So feel more than invited to keep contributing to nf-core and Nextflow, to keep getting in touch with us and asking questions; it will always be a pleasure to help. At the next hackathon, I hope to see all you guys here again. Thank you. Let me skip the first slides. Okay, I'm going to summarise what we did for the pipelines team.
Is this still on? Okay. James compiled a couple of statistics somewhere. There we go. So we had in total 30 team members who worked on 16 different pipelines. I think that's probably a quarter of the pipelines we currently have in nf-core. Five of those were brand new, and we merged something like 66 PRs, or have ongoing PRs there. So lots of progress on lots of pipelines; great work, everyone. For the brand new pipelines, I briefly introduced them yesterday, but I want to go over them again. We have the differential abundance pipeline that Oscar started working on, for differential expression analysis and apparently possibly now also pathway analysis. He kept working on the pipeline diagram and also started out with the actual pipeline creation and trying out modules, which is great, and I think the nf-core repository is now also already there. Then we have the lightsheet recon pipeline; that's the pipeline for light-sheet microscopy image reconstruction, and Conrad started working on the first sub-workflows and making everything he has so far nf-core style. Then we have the sammyseq pipeline; that's the pipeline to analyse SAMMY-seq (sequential analysis of macromolecule accessibility) sequencing data, and Margarita is currently working on the first modules and starting the initial workflow. And then the last new pipeline, I think, is the tautyping pipeline: the identification of genes and genomic segments with genome-wide phylogenetic signals of an organism, using Kendall tau rank correlation statistics. Hanta Awun, Josh and Sam are working on it: creating the first initial workflow, working on the input parameters, and also starting work on modules and sub-workflows. So it sounds like a lot of collaboration with the groups here. Oh, sorry.
One more new pipeline: the viral integration pipeline that Alyssa is working on, for finding viral integration sites in the human genome. She's working on the extract-chimeric-genomic-targets module and documentation. Then we have one more pipeline currently being converted to DSL2: James and Jack are working on eager, the genomics and metagenomics analysis pipeline for ancient DNA, working on modules at the moment. And then we have all the rest of the pipelines in development. First Sarek, the germline and somatic variant calling pipeline. My most important contribution today was winning the PR race against Maxime, so he has to deal with it, not me. Susanna started implementing MSIsensor for tumour-only mode, and Anders is continuing to work on the concatenation of VCF files. Then we have the liver CT analysis pipeline that Luis, Aaron and Hanca are working on; it's the pipeline for quantitative image analysis of abdominal CT scans of HCC patients using deep learning, and they are working on input handling, syncing the template, general pipeline setup steps, and new functionality. Then we have funcscan, which is getting ready for its first release, so yay, congratulations to all the developers there. Jasmine, Annan and Luisa are working on the comBGC module, cleaning up results channels, AMPcombi, and summarising this into a workflow.
The next pipeline is smrnaseq, the microRNA / small RNA-seq analysis pipeline, which Rob and Alex have been working on: known adapter sequence files, auto-detection of those, and then lots of testing on real data. Then taxprofiler, which Vladimir, Sophia, James and Moritz are working on; that's for parallelised taxonomic profiling across multiple databases for shotgun metagenomics, and they are working on a bunch of new modules and documentation for usage and output, and I think they have also been involved in quite a few discussions with the modules team about the FastQC module. Then we have the scrnaseq pipeline, the single-cell RNA-seq pipeline for 10x Genomics data: Paul and Christian merged a few PRs for new tests and new output formats, and also made the outputs of the scFlow and scrnaseq pipelines compatible with each other; and then there's, I think, the PR from Allison about the expected cells and sequencing centres. There's the hgtseq pipeline for horizontal gene transfer, with further work on documentation and also preparing a PR to master, so it sounds like this pipeline might also be getting ready for release. Then we have proteinfold, the pipeline for protein structure prediction, with Athanasios and Lila working on providing AlphaFold2 or ColabFold databases and parameters, plus more modules and institutional config profiles to be added to nf-core. Then I think Gisella, Zaba and Azuzanne worked on the airrflow pipeline for B- and T-cell sequencing analysis, and have been working on some new container issues. And I think this is the last pipeline: Edmund also worked on the nascent pipeline for nascent transcript identification analysis, and he merged PINTS support for TSS identification. So that's everything we did.
I hope that not too many reviews are open. I think the buddy system worked out somewhat well; I didn't see so many review requests, but if there are any left, keep posting them, and if you see an open review, be the reviewer you want to see: review it or add comments and make sure we get it all in. Yes, infrastructure. Our main goal for the hackathon was actually working more on the sub-workflow functionality, which Harshil already covered, and we did that. We also improved the test coverage; the prototype with nf-test was started during the hackathon; and some bug fixes were done on the website, especially from the documentation team, but also from Sofia more on the website code side. So I think we completed all the things we wanted to do for the hackathon. Now, in more detail, what we did today. Sofia was continuing to build the landing pages for the docs, so we have nice grouping pages for the usage and the contributing sites. Arran finished working on the code coverage of the sync command, and you can see a 25% increase in code coverage; that file is now covered 100%, so very nice, thank you very much for that. He has now switched over to cover the main file, which is currently untested and therefore also the biggest gap in our coverage, so it would be really nice if we covered that one. Pion worked on the nf-core subworkflows info command and also finished writing the tests. Like I said yesterday, he was working on, or finishing up, the subworkflows list command and now also the tests, so I just need to review it; but for now this will be the only way for you to find all the available sub-workflows, so it's a very important command to have. Thank you very much for that work. Bruno added functionality to the nf-prov plugin to annotate each published file with the associated task. And just now, really in the last bit, Alex, Robert, Edmund and Phil added custom runners for the nf-core CI for future events, so we don't run into the problems with GitHub not liking us anymore, but we can just run
it ourselves: spin up our own instance quickly, run the tests there, and hopefully be a lot more productive; we will get so many more things done. Julia, and especially Arthur, and also me, continued working on getting rid of code redundancy between the modules and sub-workflows commands. I won the hackathon, and I also worked on the stats. Paul, thankfully, building on the comment from Phil yesterday that we should manually cancel tests when we see that we've pushed a newer one, realised we can actually do that automatically in the CI YAML files for the tests, and he added that to the pipeline template, so in the future this will be included for the pipelines. I also added it now in the modules repo, so in future we should also reduce the number of running CIs just by cancelling them automatically. That was it from the infrastructure group. It was very nice to see some new faces joining our fight to keep all these things up and running, so thank you very much to everybody who joined and put their effort into this. Right, did I forget any groups today? I think that's everybody. Yeah, on that final note, with the GitHub testing: I'm kind of kicking myself that we didn't figure out that this would be possible before the event started, and instead waited until the last hour to implement it, but I think we've been fighting with the limit of concurrent running CI jobs for probably about two years now. Every time we have a hackathon it's a major blocker, so I'm really excited that we managed to get that running now; in the future we'll be able to just spin up an AWS instance for a few days. Anyone who was working just at the end of the day will have seen all their jobs sitting in the queue, and then when we got it to work, suddenly everything started running at once. So on the infrastructure side, I'm more excited about that than I should be, just because it's been a thorn in our side for years. Great, thank you everyone. I think I've said thank you too many times now, but
thank you for being here. That's all for the nf-core hackathon; we're basically finished up here now. The Nextflow Summit will be kicking off this afternoon. The summit is in the same building, but downstairs: in the foyer and then one level down, some stairs going down; it will be very obvious later this afternoon. So please wrap up now. We need to clear out of these rooms in the next hour or so, so please take your things with you, because we're not going to come back here; in fact, we won't be able to come back here. The first talk of the Nextflow Summit starts at 5 pm; Evan will be doing the welcome talk. So you've got a few hours between now and then, but of course registrations are already open for the summit, so feel free to mill around. If you've got posters, you can put them up, have a read of other people's posters, have a chat; but if you want to go home and rest your legs for an hour or two, that's fine as well. I'm really looking forward to seeing you all at the summit, and I hope you enjoy the next couple of days as we finish out the week. I will talk briefly on Friday with some stats, so you've got another day to finish up pull requests and get those contribution stats maxed out, and we'll also announce the winners of all the different competitions we've had then as well, so keep your eye out for that. All right, thank you very much everyone, and hope to see you at the next hackathon.