So, it's the check-out for day two of the nf-core hackathon. Thank you all for being here, and thank you for what has been, I think, an incredibly productive day. It's been just as manic as yesterday on Slack, with messages pouring in, and the pull requests are really starting to fly today. To the extent that I've been having to wait for the GitHub Actions tests to run, because we've hit the maximum number of concurrent jobs we can run at one time, which is a really good problem to have. That's impressive; maybe that should have been one of the things on the bingo card. Anyway, let's go straight into the wrap-ups for each of the projects. Who wants to talk first? Do you want to go first? Yeah, no? Yeah. I'll be talking for the pipelines group. Okay, do you want to go first, Maxime? Okay, then I'll go first. Let me share my screen. Okay, so let's see what we did today on the pipelines. We have many people in the group, and I think a lot of people are working on porting pipelines to DSL2. At least, that's what I'm trying to do in sarek. At the moment I'm still working on the same pull request that I was working on yesterday, and I'm about to integrate more modules, because I saw that the people in the modules group were very productive. So that's very good. Then let's go to the scrnaseq pipeline. Sangram fixed some container issues, and we're waiting for the latest tools template to make a minor release. I'm hoping that should happen soon, because I think you're doing a tools release soon. hlatyping: I don't see anything today for hlatyping, which is weird, because I thought I saw something there. Let me just check... no, nothing on hlatyping, so sorry for that. Then methylseq.
So, Patrick is working on the 1.6 release, which will be the last DSL1 release, and he's working on the DSL2 port as well. He's currently having a small issue with a generic error message, but it looks pretty good. On the pangenome side, this is Simon, if I remember well. He wants to add the pangenome graph evaluator to the pipeline, that's good, plus a new flag for composite output files and better graph visualization management. Looks good. The ampliseq pipeline: nothing for today. Sorry for the cat. bactmap achievements: today people contributed the remaining missing modules and stitched them together, so someone was quite productive today. Two new modules, plus updates to the README and citations; another new module by Tom Léviat; Anthony is bringing modules into the pipeline, and the pipeline works up until the sorted BAM. Good. And then Alexandre added the BCFtools mpileup and call steps. Pretty good. bcellmagic: Gisela fixed a couple of issues for the DSL2 release, and Susanna figured out how to reorganize the code and use custom processes. Good. Nothing today on eager. mag: Sabrina finished the first real DSL2 version, which is very good, and compressed the output files, pretty good as well. Work continues on the GTDB classification, and they've started dealing with runtime issues of the MetaBAT2 scripts. Sounds promising. Okay, now we're going into the proteomics work: diaproteomics and mhcquant achievements. Today there was an issue in the template, which was solved by Kevin and Phil. Two PRs are still open, but the tests are passing, and some tiny bugs were fixed with the PRs. There's no review yet for metaboigniter. And then Chris's new pipeline: indexing, mapping and tuning processes have been added, and he's working on an issue with a software input record. Lots of work. So, back to you, Phil. Alex, did you want to go next? Yeah, I can do that. Okay.
So, the documentation group, which is a rather small group actually, was working on multiple things today. Part of this was already done yesterday, but I think some of it was resolved today. For example, the merging of the JSON schema for single-cell RNA-seq was pretty much finished today, because the foundations for it were actually laid yesterday. Sangram was updating some of the issues we had with dependencies in that pipeline, so that's finally fixed; it's now in the dev branch. Tor was working... no, that was yesterday actually. Hanke was working on adding a tutorial for adding institutions to nf-core, to the web page. There was a long-standing open issue there that already had some description of how to do it, but this has now been turned into a proper tutorial format. It will probably need some polishing, but nevertheless it's now added to the web page and we can actually work with it. James was working yesterday and today on a step-by-step tutorial on how to write an institutional profile, which is something that takes quite some time. That's very similar to the last bit that was worked on today: the bytesize transcriptions. Renuka, Junho and Johannes were working on those yesterday already, and continued today. They've finished four now and are preparing them to be added to the website, to the event pages for the four bytesize talks. There are open pull requests for that now, which will hopefully be reviewed very quickly and merged at some point. And Hanke also worked on several smaller issues in the documentation on the website, and opened pull requests for those. That's pretty much it. Nice. Cool, I'll quickly go on to the framework group. We've been working hard, and quite a lot of people have been dropping in and out, combining this with other projects. So, yesterday was...
I can't see your screen, Phil; there's a white line in the middle, at least for me. I can't either. Let me try again. I switched it across my monitors and maybe I broke it. Can you see it now? Bingo. Can you see it right now? That's a bingo. I see, I shared the wrong one. Sorry. I thought you just meant you had bingo because I'd messed something up. Not a bingo for showing the bingo. Okay, now can you see it? Okay, good. Right, sorry about that. Yeah, yesterday was spent helping lots of people, with loads of people telling us it's all just broken in a myriad of different ways, which is what we wanted, so that's good. And today, Kevin and I have been hard at it, fixing a lot of those bugs. And I'm very pleased to say that we just released version 1.13.2, which is the second patch release, basically hoovering up all of those bugs you guys have found. So that's the one that Leon found, which was mentioned; some better logging, so that if you run stuff it gives slightly more intuitive feedback; editorconfig linting from user editorconfig files; just loads of little things; all the Jinja stuff for making new pipelines has been cleaned up; and syncing is hopefully fixed. And a few other little goodies thrown in. The syncing is actually still running, because normally all of these run in parallel, but because we've saturated our GitHub Actions, they're running one at a time. That's kind of annoying, because one of the big fixes I did was to allow them all to run at once without GitHub telling us that we were abusing GitHub, so I can't actually test whether that fix worked or not. Anyway, one at a time, they seem to be mostly working. These failures are expected. Yeah, and a couple of nice little things: this is the output from nf-core lint, and I did a little tweak.
So now the border of the little summary table is green if you have an overall pass and red if you have an overall failure, just because sometimes it's not immediately clear. Just another little bit of gorgeousness. I actually found a little thing and opened an issue on the Rich library as well, which we use to do all of this, and what a wonderful library it is. Right. So that's been most of what I've been doing, and I've been pestering Kevin to help me with it. Matthias has been working hard on the website. That's the MySQL backend, which has been put together, and he's now starting to pull in all of the data. So there's stuff coming in from GitHub, stuff coming in from Slack. The new part is basically starting to replace the stats page piece by piece with a database backend, and then we'll be able to do cool stuff. I haven't actually spoken to Matthias recently about how it's going, but the original hope was that we might be able to pull out some stats about what we've done specifically during the hackathon, and have things like who's done the most pull request reviews and so on. So just in case we get that working before the end of tomorrow, you should bear that in mind when you're thinking about what to do next. It might be good if you do more pull request reviews. There have been more discussions on PEP, and Nathan, who's the author of PEP, has been doing lots of cool stuff. Specifically, he's written a whole new package for us which takes in a PEP and spits out a single flattened, validated file as YAML. So you can take in a PEP in different ways, check that it looks right against the schema, and then have a standardized output, which will be a nice standardized route into taking PEPs for pipeline inputs. So that's really cool. We also talked quite a lot on the igenomes channel about Refgenie.
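The PEP flattening idea mentioned above can be sketched roughly like this. This is a minimal, hypothetical illustration in plain Python, not the actual package Nathan wrote (whose real API isn't shown here): read a sample table, check it against a tiny schema, and emit one flattened, validated structure.

```python
# Toy sketch of "take a PEP-like sample table, validate against a schema,
# emit one flattened structure". The schema and field names are made up.
import csv
import io

SCHEMA = {"sample": str, "fastq_1": str, "replicate": int}  # assumed schema

def flatten_and_validate(samplesheet_csv):
    """Validate each row against SCHEMA and return a flat list of dicts."""
    rows = list(csv.DictReader(io.StringIO(samplesheet_csv)))
    validated = []
    for i, row in enumerate(rows):
        clean = {}
        for field, ftype in SCHEMA.items():
            if field not in row:
                raise ValueError(f"row {i}: missing required field '{field}'")
            try:
                clean[field] = ftype(row[field])  # coerce, e.g. "1" -> 1
            except ValueError:
                raise ValueError(f"row {i}: '{field}' is not {ftype.__name__}")
        validated.append(clean)
    return validated

sheet = "sample,fastq_1,replicate\nWT,wt_R1.fastq.gz,1\nKO,ko_R1.fastq.gz,2\n"
print(flatten_and_validate(sheet))
```

The real package would emit this as YAML; the point is just that a pipeline then only ever sees one standardized, pre-validated input shape, however the PEP came in.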
Again, there's been lots of new additional functionality in Refgenie, and the hope is that we can replace AWS iGenomes with it. That's looking really exciting, actually. For end users it should be pretty much the same, and for pipeline developers it'll be mostly the same too. But in terms of maintenance: at the moment I have to manually maintain everything on AWS iGenomes, which means it doesn't happen very much, and the maintenance on this is just going to be so much easier. So that's super cool. And also for end users, once we get the integration written, it will be super great for running your own custom genomes as well. Much, much easier. Yeah, I think that's everything for framework, unless I've forgotten anything. Very true, dualrnaseq: your pull request is incoming. Right, José, you're going to talk about... Yeah, I will. Let me share my screen. Let me see if I can find it... okay. Can you see it? Yeah. Okay. So as you see, there are a lot of people involved in the modules DSL2 group. Please, if you're not listed here, because I saw that many people are hanging around in Slack, put your name here, so we have it on the slides. Here we have some of the new modules that have been added, or that are work in progress, during the day. So for instance... well, maybe I'll just put this here instead, since it's more informative because it's more numerical. Yesterday, on Monday, we already had 15 pull requests done. Many of them are new modules; some of them are changes. You can see here that there are several new modules. Here are the ones submitted today: seven have been submitted today, and ten are ready for review. So I think we're doing a great job. And as we were listing here, there are some that could be interesting for some of the pipelines, like Sequenza, CNVkit, Picard, Prokka, and Shovill.
Some people have also listed some of the issues they ran into, so if you're interested, you can take a look at the slides. Something many of us have seen today, and this is at the end of the presentation, is that people were having problems running pytest locally. Harshil has nicely updated the README, and it's now listed there how you have to run pytest locally, as you will see. These modules here are already from yesterday. So if you haven't listed your module here and you'd like to, that would be nice. Maybe we could have something like modules-by-day, because otherwise it will be difficult for tomorrow, but if not, I guess it's okay for now, because in the end we'll have everything here. So, okay. So, yes: about the module testing, we have achieved some goals. As you can see here, Rick, I think, though I'm not sure I have the name right, has searched for a suitable human data set, and he's trying to convert it to FastQ and polish the data so that it's available for everyone. Also, we were discussing this morning that we need to add the read group to the test data, and that led us to discuss a mechanism to re-run the tests of modules that already use some test files: those tests should be relaunched to see whether they still work when the data changes. We were discussing with Harshil, Kevin, and some more people how to do this. This one, I think we already discussed yesterday: there's no need to have this in the docker profile anymore. And as I was saying before, the pull request template now has a place to link the issue it closes, so you don't forget to do it when you submit your pull request. And also, as I've said, Harshil has added to the README how you can run your tests locally. I've taken a look at it and it's very nice.
So I think now people won't struggle to do it anymore. And of course, if you have any problem, or if you see something in this pull request that doesn't seem easy to do, just drop a comment. And this one, I think we already discussed yesterday: it's something we still have to figure out how to solve, or whether it will be a Nextflow fix, let's say. So I think that's more or less all. I don't know if Harshil wants to add something. Thanks, José, that was great. So, the only other thing really is that in our discussions this morning in the check-in call, we realized that we'll basically have to restructure the test data again to deal with different platforms. At the moment it's all Illumina data, and it just sort of twigged in my head that eventually I'll need test data for some Nanopore stuff I'm doing with viralrecon, and we'll need to organize that properly. And I'm assuming there will be people using Nanopore software who will need that too. So we'll need to reorganize the test data at some point to host it. And Yuki Ryan has kindly offered to prep all of the data for us, like a hundred reads, and put that on the repository as well, and hopefully generate a lot of the downstream files that we can then use for those modules too. So that'll be really cool. And yeah, this test data thing has become a bit of a pain, because little things like not having read groups mean regenerating a file, and then other modules may depend on that file at some point downstream. So we need to figure out how to streamline this, and we've numerous times tossed around the idea of having a workflow, a Nextflow workflow, specifically just to regenerate the test data if we need to, maybe with a different flag or something like that, to make everything easier. So hopefully that's one for the future. And yeah, I think that's it. It's been, again, another very productive day.
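The regeneration workflow idea tossed around above could look something like the following toy sketch. Everything here is made up for illustration (the file names, the commands, and the dependency graph): the point is just a driver that knows which test files derive from which, so that when one file changes, everything downstream of it gets re-made in order.

```python
# Toy sketch of test-data regeneration. File names, commands and the
# dependency graph below are all hypothetical, for illustration only.
# Each derived file maps to (its source file, the command that makes it).
# Entries are listed in dependency (topological) order, which dicts preserve.
DERIVES = {
    "test.bam":        ("test_R1.fastq.gz", "bwa mem ..."),
    "test.sorted.bam": ("test.bam",         "samtools sort ..."),
    "test.vcf.gz":     ("test.sorted.bam",  "bcftools mpileup | bcftools call ..."),
}

def downstream_of(changed):
    """Return the derived files to regenerate, in dependency order."""
    to_remake = []
    dirty = {changed}  # files whose content is (or will be) new
    for target, (source, _cmd) in DERIVES.items():
        if source in dirty:
            to_remake.append(target)
            dirty.add(target)  # its own dependents must also be re-made
    return to_remake

# Changing the raw FASTQ invalidates the whole chain:
print(downstream_of("test_R1.fastq.gz"))
```

A real Nextflow workflow would replace the command strings with processes, but the core bookkeeping, tracking what depends on what so a single change doesn't silently break downstream module tests, is the same.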
Thank you everyone. I think we need to review some of those pull requests and get them in, and then we'll probably beat yesterday's record and hopefully do even better tomorrow. So yeah, thank you everyone for getting involved and helping out. We can discuss this on Slack as well, but you reminded me that there was some chat on Twitter a couple of weeks ago about potentially collaborating on test data with other communities, the Snakemake community for example. So that's also something to come back to. Well, another edge case I found today, actually Rob found it, was our naming convention for modules. Whenever you run nf-core modules create, Phil added some functionality that looks up the container addresses automatically, via the Galaxy API. So if you say samtools/view when you run nf-core modules create, that function will recognize that you need to pull the samtools container built from Bioconda: it queries the API and then inserts the container definitions automatically for you in the module file. But there are some tools that are named funkily, which won't be recognized with that command. Maybe we could have a separate option for that, to say the actual name or something, if we want to standardize the names on our end. There's one called sequenza-utils, but we're not including any special characters in our names at the moment, and in fact I think the nf-core modules create command fails if you try to include a dash in the first part of the name. So maybe a good workaround would be to have, I don't know, the actual Bioconda package name or something as a separate option to nf-core modules create that will still fetch the container for you. And that means we can still have standardized names for the modules. Otherwise it'll be a free-for-all if we name things according to how they're named on Bioconda or anywhere else, I think.
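To make that naming edge case concrete, here's a hedged sketch of the kind of logic involved. This is my own illustration, not the actual nf-core/tools code: a module name like samtools/view passes a strict tool/subtool check and yields a package name to look up, while a dashed package like sequenza-utils fails the check. The BioContainers quay.io URI convention is real, but the "--0" build suffix varies per build and is an assumption here.

```python
import re

def check_module_name(name):
    """Reject module names outside an assumed strict tool[/subtool] pattern
    (lowercase alphanumerics only, so no dashes or other special characters)."""
    if not re.fullmatch(r"[a-z0-9]+(/[a-z0-9]+)?", name):
        raise ValueError(f"invalid module name: {name!r}")
    return name.split("/")[0]  # the part we'd look up as a Bioconda package

def biocontainers_uri(package, version):
    """Build a BioContainers image address; the '--0' build suffix is assumed,
    as real builds carry varying hash/number suffixes."""
    return f"quay.io/biocontainers/{package}:{version}--0"

# samtools/view maps cleanly to the 'samtools' package:
print(biocontainers_uri(check_module_name("samtools/view"), "1.12"))

# A dashed Bioconda package name fails the strict module-name check,
# which is exactly the edge case: the lookup never gets a chance to run.
try:
    check_module_name("sequenza-utils")
except ValueError as err:
    print(err)
```

A separate "actual package name" option, as suggested above, would simply bypass `check_module_name` for the lookup step while keeping the strict convention for the module's own name.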
Yeah, so that was another edge case that I thought was interesting to bring up. Rob found it. Put it into an issue, yeah. Yeah, we need to put it into an issue. Oh, someone else hit the same issue today. Maybe... It'll be easy enough to answer; we just need to have an issue. So, Ericsson, would you mind creating an issue for that on nf-core/tools? Yeah, and we can look into it and maybe add an initial workaround. I think that's the path of least resistance there. Thank you. Nice. That's me done; over now to beer o'clock. Beer o'clock. Yeah, sounds good. Anyone have anything they'd like to say, anything they've come across today that other people might also be running into? Raise your hand now and we can have a quick general discussion on it. Brilliant. Yeah, okay, I think that's all for today then. So, you know the drill: we're going to kick off again at 10am CET, European time, tomorrow for the check-in and get the day started. If you're working in other time zones, don't let us stop you now. Keep going, and try to be vocal on Slack, because although it's going to get quieter, you're not alone. So please keep chatting and let others know that you're around, and maybe, you know, stop for a coffee and jump onto a Jitsi chat with one another or something. If you're not too busy watching basketball, of course. And keep adding your stuff to the HackMD files, so we'll see what you've done when we come back tomorrow morning. Also, have a think, everyone, about what you want to do on your last day of the hackathon. We need to really bring everything together tomorrow, and as we've seen, those pull request reviews are starting to pile up already on day two, so let's try not to leave too big a stack of them for tomorrow if we can help it. Of course, if you want to carry on hacking after the hackathon officially ends, that's totally fine as well. In fact, it's very much recommended.
Right, I'm going to stop waffling. I'll see you all tomorrow morning.