Okay. Great. Thank you for joining, everyone. Hopefully everyone's had a really good first day of hacking, whether you've been going at it all day or whether you've just started. I think it's been very productive, based on the amount of messages going backwards and forwards on Slack. It's been very busy. I personally haven't touched a keyboard when it comes to actual code; you've kept me busy just chatting, which is great. It's a very good sign: lots of activity, and pull requests starting to pour in already, which is what we like to see. So the event we're going to have now is just a quick wrap-up. We're going to go through the leaders of the different project groups, who will summarise what you've all been working on, just so everyone is aware of the big picture of how things are progressing. A quick wrap-up, and then we'll get on with it. For those of you who are in different time zones, or who still have time to work, this is not the end of the hackathon, so you can carry on coding now. Keep adding your stuff to HackMD, keep talking on Slack, and you might choose to use this as your check-in instead of your check-out. Then we'll go over the notes tomorrow morning, when those of us in Europe are getting started again on day two, and we'll see what you've done in the meantime.

Right. So, first group: let's see who's ready to talk. Maybe we can start with pipelines, if you're there.

Yes, thanks, I'll talk about the progress in pipelines. It's been a very diverse group, with a lot of people contributing to different pipelines, so it's been very successful so far. In the pipelines group, most people have been contributing to existing pipelines; one team was also creating a new pipeline, and others worked on pipeline maintenance, like template updates for the different pipelines. For Sarek, there have been some achievements towards Sarek 3.0.
Mainly Maxime was working on a pull request, and many people were adding DSL2 modules for the DSL2 version of Sarek. For the single-cell RNA-seq pipeline, Sangram and Alex updated the JSON schema and opened a PR with the latest template, which I was reviewing, so let's hope we can merge it soon. For HLA typing, Alex was working on the JSON schema and also started on OptiType modules for the conversion of the HLA typing pipeline to DSL2. Regarding methylseq, Patrick was adding some missing modules to the nf-core/modules repository that are required for porting to DSL2, mostly related to Bismark, and he also started bringing the DSL2 draft into shape to port the pipeline over. Then we had some contributions to the pangenome pipeline, mainly by Simon, who updated the template to the newest version, solved a couple of issues for this pipeline, and also updated the Nextflow schema. For ampliseq, different people collaborated today: Daniel Straub, Daniel Lundin and I think Emily as well. They created a new branch for DSL2 and have been porting the pipeline to it. They have also added some DADA2 tools for the downstream analysis in ampliseq, and are working on making the pipeline more flexible and user-friendly with different taxonomic databases. For the bactmap pipeline, there were also several collaborators working together today; to summarise, avantonder worked on a new Gubbins module and a new snp-sites module. For bcellmagic, I was updating the template to the newest tools, and Susanna was working on some modules that will later be added to bcellmagic for the port to DSL2. For the eager pipeline, I think mostly Alex was working on adding requested functionality to the pmdtools process.
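For anyone new to what these DSL2 ports involve, a minimal nf-core-style DSL2 module looks roughly like the sketch below. The tool, container tag and file patterns are illustrative assumptions, not taken from any of the pipelines just mentioned.

```nextflow
// Hypothetical nf-core-style DSL2 module; tool, container and paths are illustrative
process SAMTOOLS_SORT {
    tag "$meta.id"
    container 'quay.io/biocontainers/samtools:1.12--h9aed4be_1'

    input:
    tuple val(meta), path(bam)    // the meta map carries the sample id and flags

    output:
    tuple val(meta), path("*.sorted.bam"), emit: bam
    path "*.version.txt"                 , emit: version

    script:
    """
    samtools sort -o ${meta.id}.sorted.bam $bam
    samtools --version | head -n1 > samtools.version.txt
    """
}
```

The pattern of passing a `meta` map alongside the files is what lets one module definition be reused across many pipelines.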
For the mag pipeline, Sabrina was working on finishing the first DSL2 version draft, and is waiting for tests to pass and for reviews, I guess. For the diaproteomics and mhcquant pipelines, I think Leon was porting them to the latest template version; the pull requests are there, and a couple of bugs remain after merging. And finally, there was some work on a new pipeline, CRISPR-Quant, so we're quite excited to see this pipeline growing. There has been a template update for this pipeline, and it integrated functionality to generate a FASTA file from the library CSV file. So, lots of contributions from the pipelines team; it's great to see that, and also to see several people contributing to different pipelines.

Alright, Alex? Yep, I'm going to share quickly. That works quite well, actually. Okay, so we worked on documentation. The documentation team did a couple of things, mostly related to both the web page and of course tools, though partially this was more the JSON schema stuff. Several pipelines still had the parameter documentation conversion open, so that's something a lot of people focused on. James, for example, worked on the nf-core configs, but I'll cover that with the individual items. For HLA typing we have now updated the JSON schema. Sangram was working on the same for the single-cell RNA pipeline, which, once the PR is merged, should also have a fully functional JSON schema plus the newest template. Sangram was working on the methylseq pipeline too, but found while doing so that the dev branch already contained one. However, Patrick commented recently, as I saw just now, that Sangram's version was actually better than what was already on the methylseq dev branch, if I remember correctly, so there will probably be a merge between both of these changes. Sangram was also working on an extension of the introduction documentation.
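For context, the "JSON schema" being updated in these PRs is each pipeline's `nextflow_schema.json`, which documents and validates the pipeline's parameters. A rough, invented illustration of the kind of fragment these conversions produce:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema",
  "title": "Hypothetical pipeline parameter schema (illustrative fragment)",
  "type": "object",
  "properties": {
    "input": {
      "type": "string",
      "description": "Path to the input sample sheet (CSV).",
      "format": "file-path"
    },
    "outdir": {
      "type": "string",
      "description": "Directory where results are written.",
      "default": "./results"
    }
  },
  "required": ["input"]
}
```

Tooling can then render parameter docs and validate a run's parameters against this single source of truth.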
James added a new tutorial section to the nf-core website, which is now available under usage and developers and will be extended in the future. Multiple people have asked for something like that: a place where newcomers or new developers can find step-by-step tutorials on how to do certain things. James, for example, is already working on a step-by-step tutorial for institutional profiles, since he's been through that quite a few times now. Renuka, Li and Johannes (sorry if I'm not pronouncing everything correctly) started working on transcribing the nf-core bytesize talks, both for the YouTube versions and, I think, as a transcription that can be shared as text, which makes it easier for people to catch up without having to watch the entire video. However, as we all knew already, that will be quite time-consuming. And Hanca also added some additional docs on Singularity, which was an open issue in the documentation, so she worked on that as well. That's pretty much it.

Nice, thank you very much. I'll just find the right one to share. So, yeah, I was heading up the framework and tools projects, which I think was the quietest one today. I sat in the morning meeting by myself for quite a while, feeling very lonely, but it was good because I was plenty busy doing a lot of stuff, and we had people dropping in and out at various times. Probably the busiest amongst us actually working on the framework issues was Matias, who was working on the website, doing (sorry for the screaming in the background) some great work there.
A lot of what we do at the moment is pulling statistics and things from the GitHub API and other places and saving them in static JSON files, which was a quick-and-dirty way to do it when I first wrote that code a long time ago, and it's now starting to struggle under the weight of the amount of data in those files. So he's added support for an SQL database behind the website, and is starting to port some of the code for all those statistics and graphs over to that database instead, which is much more future-proof.

We also had a really nice meeting: Nathan from the States managed to join us, which was great. He's the author of PEP, Portable Encapsulated Projects, which we're thinking about using. It's its own format, with its own tool sets built around the scheme, and the idea is that it's a standardized way to define sample sheets for pipelines, so they should be transferable between pipelines. You can set your PEP up for your amazing data and run a bunch of different RNA pipelines, in Nextflow, CWL, Snakemake, whatever, all without having to do too much to your input sample sheets. It's standardized and it has lots of tooling built around it, so you can do things like validation. It's a very similar idea to what we've been doing with the JSON schema at the pipeline level, but this is at the sample level. So we discussed how best to proceed with that, agreed on a way forward, and we're going to start testing a pipeline with it. As a starting point we'll just try to replace the current pipeline Python script which validates the inputs.

The other thing we've been doing is collecting lots of bugs, which is why I've been very busy chatting to people. We now have a milestone for nf-core/tools for another patch hotfix release, 13.2.
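To make the sample-sheet validation idea concrete, here is a toy Python sketch of the kind of check such an input-validation script performs. The column names, file-extension rule and function name are invented for this example and don't correspond to any particular pipeline's actual script or to the PEP specification.

```python
import csv
import io

# Columns this toy validator expects; real pipelines define their own.
REQUIRED_COLUMNS = ["sample", "fastq_1", "fastq_2"]
FASTQ_SUFFIXES = (".fastq.gz", ".fq.gz")

def validate_samplesheet(text):
    """Parse a CSV sample sheet, raising ValueError on structural problems."""
    reader = csv.DictReader(io.StringIO(text))
    missing = [c for c in REQUIRED_COLUMNS if c not in (reader.fieldnames or [])]
    if missing:
        raise ValueError(f"missing required column(s): {', '.join(missing)}")
    rows = []
    for lineno, row in enumerate(reader, start=2):  # line 1 is the header
        if not (row["sample"] or "").strip():
            raise ValueError(f"line {lineno}: empty sample name")
        for col in ("fastq_1", "fastq_2"):
            if row[col] and not row[col].endswith(FASTQ_SUFFIXES):
                raise ValueError(f"line {lineno}: {col} should end in one of {FASTQ_SUFFIXES}")
        rows.append(row)
    return rows

sheet = "sample,fastq_1,fastq_2\nWT_1,wt1_R1.fastq.gz,wt1_R2.fastq.gz\n"
print(len(validate_samplesheet(sheet)))  # 1
```

The appeal of a shared standard like PEP is that checks of this sort, currently duplicated per pipeline, could be expressed once as a schema and enforced by common tooling.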
These are the high-priority bugs that people have reported in the last few hours, including some problems people have had with nf-core create and things like that. We're going to try to tidy those up; most of them are quite minor bugs, I hope, so we'll try to finish them and get a patch release out very soon so you can all carry on working. Other than that it's mostly planning stage stuff, plus a couple of things wrapped up. I think that's pretty much it for the framework.

Right, are you there? Yeah, hello. You're live. Yeah, it's been very productive on the DSL2 modules front as well: a lot of people getting up to scratch with how things work conceptually, and also playing around with nf-core tools, the testing and all that type of stuff, which really isn't the most trivial thing to get your head around. So there were quite a few questions, but I'm hoping most of it now makes sense. We've had a couple of video calls discussing Conda environments, containers, optional arguments and a bunch of other stuff, and I think everyone's on the same page now, so I'm expecting at least 200-300 modules over the next couple of days. No pressure.

There have been a lot of new module additions. One contributor, whose name escapes me now, but I know he's from Denmark, was working on one of these and its tests. This was one of the tricky examples: the outputs this particular module produces aren't stable, so when you generate MD5 sums for them they don't match. We had a bit of a play with pytest to check for file content instead of using a fixed MD5 sum, and I believe that one is almost there now. Robert added Prokka, and a VCFtools merge module was added by another contributor, who I believe also did bedToBigBed and FreeBayes.
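The MD5-versus-content distinction comes up in the pytest-workflow configuration used to test modules. A sketch of what such a test entry can look like, with the tool name, paths and checksum purely illustrative:

```yaml
# Sketch of a pytest-workflow test entry; tool name, paths and md5sum are illustrative
- name: mytool test
  command: nextflow run ./tests/software/mytool -entry test_mytool -c tests/config/nextflow.config
  tags:
    - mytool
  files:
    - path: output/mytool/counts.tsv
      md5sum: d41d8cd98f00b204e9800998ecf8427e   # fine when the output is byte-for-byte stable
    - path: output/mytool/report.vcf
      contains:                                   # content checks for outputs that embed dates, paths, etc.
        - "##fileformat=VCF"
```

`md5sum` pins the exact bytes, while `contains` only asserts that certain strings appear, which is what you want for outputs that vary between runs.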
fgbio was added by Francesco. Mike ran into an issue with the Docker user settings that the nf-core pipeline template generally provides when using Docker. That seemed to cause a problem for him because his username, I believe, wasn't set on his system, and so Nextflow was failing on it. I think we'll probably have to look into this a bit more, because it's a generic setting for all nf-core pipelines at the moment, and we also have it on the nf-core/modules repo, in the Nextflow config we use for the tests, so I think we need to have a look at that.

And then, coming in as we speak, I believe that's Maxime sneaking in his late addition to the docs: he's added the AdapterRemoval module. Unicycler, I think it was Jose who added that, maybe for the viralrecon pipeline, for which almost all of the modules have now been added. Kevin, myself, Francesco and Maxime, I believe, have been having discussions about generating a small, tiny test data set for human data, which would be quite nice to have, and which some modules will require at some point anyway. The idea of writing a separate end-to-end type workflow that could generate all these intermediate files by chaining together a bunch of modules was also thrown around. It means that if you need to change a test file for some reason (we had a case earlier: our BAM files for the SARS-CoV-2 data don't have read groups, and one tool requires read groups), then instead of regenerating all of the data from scratch, you could imagine having a workflow that just chains together modules, generating all of these intermediate files that you can then use as test data with minor changes.

We also discussed the possibility of using custom scripts in modules. At the moment nf-core/modules only hosts official tool-type modules, so FastQC, samtools, etc.
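A purely hypothetical sketch of how a custom module with a bundled helper script could be laid out; the directory and file names here are assumptions, not an agreed structure:

```
modules/
└── custom/
    └── get_software_versions/
        ├── main.nf                          # process definition
        └── bin/
            └── collect_versions.py          # helper script installed alongside it
```

The open questions are how such scripts get installed into a pipeline repository and onto `$PATH` at runtime.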
But there's no reason why we can't also host custom modules there. The only thing we'd really need to think about is installing things wherever they're required within the pipeline repository. One example would be the get_software_versions step that we use in most nf-core pipelines. We could have a custom-type directory in modules, with a get_software_versions directory containing a bin directory or something alongside the main script, which we could then install via nf-core tools directly into the pipeline repository. The only real problem with that is that you could have name clashes between scripts, although that's very unlikely. Ideally we'd hope we could somehow get this functionality via Nextflow itself, but there are other complications there as well, because you'd have to add a ton of bin directory paths to $PATH in the environment to expose these scripts, so it needs to be thought about a little bit more. Gregor created an issue and a discussion around that. So yeah, it's been quite an active day. Thank you everyone for getting involved and doing all of this. And yeah, I think that's me. You can just stop sharing, it doesn't matter.

Great stuff. That's it for today. Thank you, everybody. Carry on if you still have time left in your working day, or go home and clock off if you've finished. Remember that all of the HackMD links that we've just been going through are up on the website and also in the Slack channels, so if you want a recap, or didn't catch anything we were just talking about, go and have a read. Also, please keep adding to them if you're still working, or if you're reading this later on; we're going to come back to them and go over them again tomorrow. Brilliant to see you all, and it's been really fun already for the first day, so let's try and keep this momentum up for the rest of the week.