Hello everyone. My name is Harshil Patel; you probably know me from the nf-core community. Today I thought it would be a good idea to record a video that we can use as a reference for anyone coming to contribute at the hackathon in a couple of weeks' time (hopefully you've signed up; it's going to be awesome), because the major theme is going to be around DSL2 and contributing to nf-core/modules. The aim is to lower the barrier to entry a little, and we can also put this video up on YouTube more persistently for others who want to see how to go about this process. I've recently joined Seqera Labs as Head of Scientific Development. I was at the Crick before, but this is a completely different challenge and an awesome environment to work in, with lots of new and challenging things; hopefully a lot of this will be fed back into the nf-core community as well, which is great. So let's get started. First, the basic requirements for contributing to nf-core/modules. Sorry, I didn't introduce it on the previous slide: nf-core/modules is a repository we've created to contain tool wrappers for basic command-line tools like BWA index, BWA mem, SAMtools index, FastQC and so on. The idea is to host these Nextflow DSL2 wrapper scripts so we can share them across nf-core pipelines and within the wider Nextflow community. So, the basic requirements for contributing to nf-core/modules: obviously, an operating system where you have some sort of command-line access. A really cool blog post came out recently (in fact yesterday, I think) about using Nextflow on Windows, which is what I've been using for a month, and it's absolutely amazing.
You'll also need some sort of software management. Nextflow generally works by using containerisation, like Docker or Singularity, to make things reproducible. You can use Conda as well, although it's slightly less reproducible because lower-level dependencies change over time. Ideally you would have either Docker or Singularity installed locally, in your HPC environment, or wherever you're running these commands. You'll also want a code editor, obviously to help you edit code and make things a bit more seamless; some editors even have Git integration, which is really cool. We have an extension pack specifically for nf-core — currently for VS Code — where you get a bundle of extensions that we think are useful when editing nf-core code. You can see from my little face underneath what I tend to use: Windows, Singularity, Conda and VS Code. But again, it's completely up to you how you go about this and what you use. In terms of software requirements, you will need Git installed locally, because virtually everything on nf-core is done via Git in terms of contributions. You'll also need Nextflow, which is ridiculously easy to install with a single line using wget or curl; you can even install it in a Conda environment if you want. It requires very little maintenance — once Nextflow is installed it can even self-update. You need nf-core/tools, our Python package that helps make things easier in terms of maintaining the framework on both the pipeline side and the module side. It also has a bunch of additional functionality that lets you create pipeline templates, create module templates, lint for best practices and so on.
There's a whole heap of commands you can use there, and we're constantly developing and contributing to it to make it even better. You will also need pytest-workflow, in this particular instance, to contribute to nf-core/modules, because we use it for all of the — I guess you could call it unit testing — that we do with modules. pytest-workflow was our choice for testing: whenever someone adds a module to nf-core/modules, we associate it with a minimal test data set, in order to make sure that things haven't changed over time. You can then use pytest-workflow, for example, to test against MD5 sums that you created from the outputs of that particular module. If they've changed — why have they changed? It gives you the opportunity to be a bit more stringent about the way you update these modules, and also to have tests that check these files exist and so on. The way I generally manage nf-core/tools, pytest-workflow and other linting dependencies is just to have a separate dev environment for nf-core. I've pasted it in here for you to see: I've got the latest version of nf-core/tools and pytest-workflow stored in an environment, and whenever I want to use them on the command line, I just activate this environment and they're available for me to use. It's quite low-maintenance: if I want, I can remove the environment, create another one, or even install additional dependencies within the environment once I've activated it. So you have a file like this, you do a conda env create, and then you can activate it.
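As a sketch of that setup — the file name and environment name here are my own illustrative choices, while the packages are the ones mentioned above — the environment file might look like this:

```yaml
# Hypothetical nf-core dev environment file (nf-core-dev.yml);
# names are illustrative, packages are the ones described above.
name: nf-core-dev
channels:
  - conda-forge
  - bioconda
dependencies:
  - nf-core
  - pytest-workflow
```

You would then create and activate it with `conda env create -f nf-core-dev.yml` followed by `conda activate nf-core-dev`.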
I would definitely recommend upgrading nf-core/tools to the latest dev version because, as I mentioned, it's constantly evolving, so the module template and all sorts of other things in the dev version will be different from the latest released version. As much as we'd like to release more frequently, realistically we haven't been, so the best way around that is to periodically run this command. These are all copy-and-pasteable, by the way — I'll make these slides available, so you can just copy and paste these commands directly. You can then install the dev version of nf-core/tools to create your own modules, as I'll show you a bit later on. We've got loads of docs and loads of videos accumulating over time; these are all linked here, so you can have a look just by clicking the links or going to the appropriate web pages. There's a lot of documentation, and it would be great if you could have a read of it. If there are any issues, report them back — we try to keep the docs as up to date as possible, and then we can keep updating them so they stay current. We've also had numerous DSL2-based bytesize talks: about adding test data, about pytest-workflow (which I just mentioned), about module development and so on. I'm giving another one next week about how to start writing your own DSL2 pipelines, so it may be worth tuning into that; again, these will all be available on YouTube as well, so you can watch them later on if you want. The first thing to do if you want to contribute to nf-core/modules is to make sure that the module isn't already in nf-core/modules. We've got a command for that, called nf-core modules list, which is part of the nf-core/tools package.
That will tell you all of the modules that are available in nf-core/modules. The other way, obviously, would be to go directly to the GitHub repo and see what's listed there, but this just provides a nice command-line interface to do exactly the same thing. You should also check open pull requests and search issues on nf-core/modules to see whether someone is already working on that module. There can be overlap in module requirements, where more than one pipeline requires the same module, and we would like to avoid clashes in development, so that nobody duplicates contributions and things stay a little better organised. This is the philosophy and the approach we'll be taking at the hackathon as well: the first thing you do is check whether the module has already been contributed. If it hasn't, the next step is to create a new issue for the module and assign yourself to that issue. This is quite important, so that we know someone is working on this module. Say, for example, someone else wants to contribute the same module and they find an open issue for exactly the same module: if you haven't assigned yourself to work on it, we won't know it's being worked on, someone else may assign themselves to it, and we may again end up duplicating work. So it would be great if you could assign yourself — it helps organise things a little better. The first thing you need to do — I won't go into the nitty-gritty details — is obviously get GitHub set up and have a clone of the modules repo locally that you can contribute to. To summarise, the process is: you fork nf-core/modules to your own GitHub account, and then you clone it from your GitHub account locally.
You set the main nf-core/modules repo as your upstream — you only have to do this once, when you first clone the repo — and then you create a local branch off the master branch of nf-core/modules to make your changes on. There was a bytesize talk that Alex gave a while back about how to set this stuff up, but once you've got it set up, it's generally quite simple. When you want to create a module, again, we've got a command specifically for this: nf-core modules create. The way this works is that we've got a modules template in the nf-core/tools GitHub repository. We're constantly evolving it as standards change and we add new functionality, but we've got a minimal module template in the nf-core/tools repository, and what nf-core modules create does is pull that down and replace the module name, the author and any other information you might need in order to create that module. We're using Jinja on the back end to do that, and it works relatively well, so we only need one modules template that nf-core modules create works from. So what I'm going to do now is try a live demo, which may or may not work — let's see how it pans out. There are a couple of extra steps around test data and so on, but we can work our way through them. Overall, you'll see there are only six files: running nf-core modules create should generate six files (and functions.nf may be redundant soon, so it could be five very soon), but you only really need to edit four of them in order to contribute a module. So let me come out of this presentation. Right. So, for this particular talk —
I asked Maxime and Friederike for a module that they needed to add to Sarek that I could use as a demo, and they sent me this Strelka somatic module that they had written and integrated into the Sarek pipeline. Again, this will be similar to how we do things at the hackathon, where we have loads of modules that have already been written but need to be contributed. That provides a low barrier to entry for beginners, because the module has already been written — it's just a case of figuring out how to add the module and the test data alongside it. If we can branch that sort of work out, it will make things much easier. Right, so this is the module that we want to add. If I open a terminal: I have a shortcut to the modules repo that I've got cloned, and I'm currently on the master branch. I've got another shortcut, an alias, to create a branch off master. So now we've got a clean branch that's been created off the master branch of nf-core/modules. I've also got another shortcut — shortcuts are really cool because they just mean you type less — which lets me activate my environment. If I grep my bash aliases file for nf-core, you can see a bunch of aliases; all that command is doing is activating the dev environment I explained in the presentation, so I don't have to type the whole thing. Now I should have the nf-core command available to use. Let's do nf-core modules create. There we go. We know the name of the tool is strelka/somatic, so put that in there. What's really cool is that we're querying the BioContainers API, I think it is.
If possible, what tools will do is find an existing BioContainers image that matches Strelka, so you can see it has automatically found the BioContainers images and the Conda package that need to be substituted into the module — you don't have to go and look for them. Next, we set the process resource label. These are just general settings for resources; we'll set it to low for now. And then a meta map. A meta map is essentially sample information that you need to propagate through the pipeline along with any other files. The most basic fields in a meta map would be the id of the sample, whether it's single-end or not, or what the strandedness is, for example. In most instances you will need it; in some instances you won't — for example, when you're creating an index for a genome, which is completely independent of any sample information, you may not need that meta information. But in this case, because this particular tool is a variant caller, it will be part of the flow of the pipeline in terms of passing sample information through, so we will need it — let's put yes there. And like I showed you, six files have been created, and we only actually need to edit four of them. If we do a git status and a diff: this pytest modules config is essentially just a list of the modules we've got in nf-core/modules, and it's what pytest uses to map all of the modules and figure out where all the files are for testing. You can see that the only difference is that it has automatically added strelka/somatic in the correct place in this file, in alphabetical order, so we don't need to do anything more with that file.
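To make the meta map idea concrete, here is a minimal sketch — the keys and values are illustrative, with id and single_end being the common minimal fields described above:

```groovy
// A minimal meta map: sample-level information propagated with the data.
def meta = [ id: 'sample1', single_end: false ]

// In a module, it is declared as part of a tuple input, e.g.:
//   input:
//   tuple val(meta), path(reads)
// so the sample information stays paired with its files in every channel.
```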
But what we do need to do now is add our module in. If I open up the code in this directory, you will see some green bits here where files have been added. Here's the Strelka module that we've just created which, like I mentioned, has been generated from the modules template by nf-core modules create. It comes with loads of TODO statements and other guidance that will hopefully help you write the module. Because I want to base it on the module they wrote, I'm going to fetch that locally: we've got a terminal here as well, so let's do a wget — and now we've got the module that Maxime and Friederike wrote locally. Let's rename it to something a bit more informative and load it up in here. Right. Now this lets us compare the two modules side by side: the one we just created from the template, which we now need to change, and the one they've written. Let's just get rid of all these TODO comments for now; they're just bedtime reading, really. This bit's the same; this is just boilerplate. We've got the meta id, which is the same there. What they've put is process_high, so let's change that there — it means this particular process has higher resource requirements. This stuff again is boilerplate that you won't need to change, and in fact a lot of it will be removed when we move to a more native Nextflow syntax, but maybe I'll talk a bit about that later on. This, again, we can remove, and of course the create command found exactly the containers we want, so we don't need to play around with that. Now for the more important things — this part is exactly the same — the more important things are how to stage the input files. Because this is the template, we don't want these; we'll copy across the ones that they used.
For this sort of reference data file, instead of providing it inside the tuple, you would normally provide it separately, because it makes things easier — you don't have to join channels and can keep them independent. The output files that they've defined: let's copy those across. According to the nf-core guidelines, these would all have to go in their own channels, so let's split them out as well. And this is the old syntax for the versions file, which has been changed on the dev branch, so we will use the new syntax and get rid of that. This is just from the template, which we don't need, so let's bring it all together like that. I'm going to amend the emit statements here: it's probably better to prefix with vcf and tbi, so we've got vcf_indels and vcf_indels_tbi for the indels, and vcf_snvs and vcf_snvs_tbi for the SNVs. Okay, great. Some more guideline comments — delete those for now. Now we're onto the script section that they're using. Again, these are old options from the old modules template, so we don't need that, and we don't need that. These are some custom options that they've added for the tool, so let's copy those across. That's been dealt with. This prefix is fine as it comes in the modules template, so we don't need to change it; in fact, you probably wouldn't append custom names like this to the prefix — you can do that another way via the withName functionality — so we'll just stick to whatever was in the modules template. Obviously the template's script section has a sample sort command, which won't work here because we want to run Strelka, so let's copy across the entire script section they've got as well. This command looks fine to me: they're calling all of these input files here, the reference FASTA, and these options that they've provided if you supply a target BED file, and then running Strelka. So that's good.
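Pulling those conventions together, here is a sketch of roughly where the edited module ends up. This is illustrative rather than the exact strelka/somatic module — the file names, channel names and the version command are my assumptions of how such a module typically looks:

```groovy
process STRELKA_SOMATIC {
    tag "$meta.id"
    label 'process_high'

    input:
    // sample files travel in the tuple alongside the meta map...
    tuple val(meta), path(cram_normal), path(crai_normal), path(cram_tumor), path(crai_tumor)
    // ...while reference files are passed as separate, independent inputs
    path fasta
    path fai
    path target_bed

    output:
    // each output in its own named channel, per the nf-core guidelines
    tuple val(meta), path("*.somatic_indels.vcf.gz")    , emit: vcf_indels
    tuple val(meta), path("*.somatic_indels.vcf.gz.tbi"), emit: vcf_indels_tbi
    tuple val(meta), path("*.somatic_snvs.vcf.gz")      , emit: vcf_snvs
    tuple val(meta), path("*.somatic_snvs.vcf.gz.tbi")  , emit: vcf_snvs_tbi
    path "versions.yml"                                 , emit: versions

    script:
    """
    # ... Strelka configuration and run steps go here ...

    cat <<-END_VERSIONS > versions.yml
    "${task.process}":
        strelka: \$( configureStrelkaSomaticWorkflow.py --version )
    END_VERSIONS
    """
}
```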
Then at the end, they're renaming the outputs — with dots, maybe, because it just gives a more natural break in the names — so let's change the output file names as well. And the versions: we've changed the way we report versions recently, so the versions.txt has been replaced by a YAML file generated using heredocs, and the old approach no longer applies. This is one of the great things: if someone else has added a module, you can just have a look at what they've done and copy it across, because it will work for you as well. This one is for SAMtools, which obviously we don't want; what I've just done is replace the version command with the one for the particular tool we're running, Strelka. So I think we've got a semi-decent-looking module here. The next thing is the meta.yml, which has information about the inputs and outputs generated by the module, a description, and so on. So the next thing you do is amend this file and change the appropriate values in it. For example, we've got an input for the normal sample there; let's copy that out. This one is actually an index file rather than an alignment file, so change that, and then you would also obviously need to add entries for these four inputs as well. Similarly, for any output files — the indel VCF and so on — you'd need the appropriate entries in here. This is one of the most common things that gets forgotten, because it gets out of sync. Ideally we'd have some sort of linter for this, but we haven't really got around to it, because it would mean parsing all of these strings in Python and then linting them against what's been provided in the YAML. So at the moment this is quite a manual process, but it's definitely worth double-checking as you update modules —
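As a sketch (descriptions, names and patterns are illustrative), each input and output channel element gets an entry in meta.yml along these lines:

```yaml
input:
  - meta:
      type: map
      description: Groovy map with sample information, e.g. [ id:'test' ]
  - cram_normal:
      type: file
      description: BAM/CRAM file for the normal sample
      pattern: "*.{bam,cram}"
output:
  - vcf_indels:
      type: file
      description: Gzipped VCF file containing candidate indels
      pattern: "*.{vcf.gz}"
  - versions:
      type: file
      description: File containing software versions
      pattern: "versions.yml"
```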
— and, when modules are ready for review, checking that all of these entries actually match what you have in the meta.yml. Hopefully one day we'll use it for some sort of automation on the website or elsewhere. Okay, so that's the meta.yml. This one is unfinished, but finishing it is just a case of copying, pasting and adding more descriptions and text. Great — so now you've written the module, having copied across whatever was already written in the local module (I don't think we need that file anymore). You now need to add some tests for it. We already have a lot of test data in the nf-core/test-datasets repository, and it's listed in this config file that we've got under tests/config — the test data config — and as you can see, we've really been filling it out. We've got human data, and we primarily recommend using SARS-CoV-2 data, because it's tiny, there's been a lot of it around recently, and we can derive a lot of the downstream files from it. We're not really being very stringent about the tests, other than the fact that they need to pass at the moment — we're not doing anything more informative than that — so you can use SARS-CoV-2 data with tools that expect bacterial data, for example; as long as it passes, we're happy. There's Nanopore data and so on, so we've already got a lot in there. What you need to do, as I'll show you, is reference this data for the purposes of your tests. So, if we go back: we've now edited and changed the files in modules, and we now need to edit and change the test files, which have also been created by the template to make things easier. If we have a look at what they look like for strelka/somatic: you have this main script just to run the tests. Now we need to figure out how we can run this particular strelka/somatic process using the test data that we've got in that test data config.
To do that, we need to prepare the test data in exactly the same way as we would run it normally, provide all the input files, and then test the module and see how it works. The great thing is that we can borrow from the already existing strelka/germline test here and just copy some of these lines across directly. In fact, let's just copy these for now. We know that we need a FASTA and a fai, so let's copy those across straight away. We also need a BED and a BED TBI, so let's put those in now, and then go and look to see whether there are existing files we can borrow from the test data sets. Here's the Homo sapiens section — that other data is all SARS-CoV-2, and we're going to use Homo sapiens data here, so let's replace the keys with the Homo sapiens ones. Another benefit of having standardised test data is that you can do this sort of thing quite easily and switch between organisms. We've got a BED file here, so we just replace the BED file with that, and we've got an index for it as well, so we replace the index there too. Right, so we've got the BED, the BED TBI, the FASTA and the fai. Now we need to prepare this channel of tumour and normal files along with the associated indexes. This is the meta map — I think it's paired-end, and this just says that it's paired-end, which is great, so I don't have to do anything there. We're going to replace this; let me restructure it slightly so we can see exactly what's in here. We're going to be using human Illumina data, and we want CRAM files — we've got CRAM files, so let's put those in. The first is a normal CRAM, so we'll use this one; we know we need four files in total. And the second set we can use for this particular instance — this was added by Friederike for somatic calling, where you need a tumour and a normal, so you need independent alignment files.
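The resulting test workflow ends up along these lines — the test-data keys below are placeholders for whichever real keys you find in the shared test data config, and the include path follows the usual layout:

```groovy
#!/usr/bin/env nextflow
nextflow.enable.dsl = 2

include { STRELKA_SOMATIC } from '../../../../modules/strelka/somatic/main.nf'

workflow test_strelka_somatic {
    // meta map plus the four tumour/normal files; keys are illustrative,
    // the real ones live in the test data config under tests/config
    input = [
        [ id:'test', single_end:false ],
        file(params.test_data['homo_sapiens']['illumina']['normal_cram'],      checkIfExists: true),
        file(params.test_data['homo_sapiens']['illumina']['normal_cram_crai'], checkIfExists: true),
        file(params.test_data['homo_sapiens']['illumina']['tumor_cram'],       checkIfExists: true),
        file(params.test_data['homo_sapiens']['illumina']['tumor_cram_crai'],  checkIfExists: true)
    ]
    fasta = file(params.test_data['homo_sapiens']['genome']['genome_fasta'],      checkIfExists: true)
    fai   = file(params.test_data['homo_sapiens']['genome']['genome_fasta_fai'],  checkIfExists: true)
    bed   = file(params.test_data['homo_sapiens']['genome']['genome_bed_gz'],     checkIfExists: true)
    tbi   = file(params.test_data['homo_sapiens']['genome']['genome_bed_gz_tbi'], checkIfExists: true)

    STRELKA_SOMATIC ( input, fasta, fai, bed, tbi )
}
```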
The testing sometimes breaks and doesn't work if you don't have the appropriate test data to test with, and so this was added specifically for the Sarek pipeline. So now we've got test one and test two — I could have just copied it and put a two on the end, just like that. Now we need to add the channels to match what we've got there: the input contains all of those, which we can see here, and now we need to add the fai, the BED and the BED TBI. Cool. So now we've set everything up in order to test this module; we just need to test it. The way we generally do that is, again, with another tool we've written to make this easier. On the terminal — we're in the modules repo now, and we don't need the local Strelka module anymore because we've got everything we needed from it — we've got another tool called nf-core modules create-test-yml. What this does is gather all of the output files that your test run of this particular module will generate — as you can see here, it will generate these output files — and it automates the process of creating a YAML file that lists them with the appropriate MD5 sums. We've also added functionality that reruns the test, which is pretty cool, because we found that when we were only running it once, a lot of the MD5 sums weren't actually stable between runs. So: you added a main script to test the module, and now we just want to create a YAML file that contains MD5 sums and so on. We give it a tool name — this is standard — yes, that is the output path; overwrite it, yes, because the current one is just the template looking for an output file that doesn't exist; it's all vanilla stuff that needs to be changed. There's even a TODO statement telling you that you need to run this command, and it will overwrite all of this for you automatically. So let's overwrite it.
We've got entry points to run pytest, and it's running this particular test — enter. For most of this, you just have to press enter. Are you using Singularity locally? Yes, Singularity. Now what it will do is run Nextflow for you on this test file, take the outputs generated by that, and write a test YAML file with all of the MD5 sums. Like I mentioned, it repeats the test just to make sure the MD5 sums in the file are stable. So it all looks good — I mean, this is a perfect-use-case scenario; a lot of the time this will actually break, because there might be an issue with the input files or the main script and so on. There are ways around that: you can test locally with pytest as well, and I'll show you how to do that at the end, but this is just using nf-core/tools to do it. You can also rerun the nf-core modules create-test-yml tool multiple times to achieve exactly the same behaviour. So it looks like we've got this test YAML — it should now be updated with actual MD5 sums. And we've got a comment here saying that the file was not stable. This happens a lot with gzip files, because of metadata that gets put into them: if you take an MD5 sum now and again in ten minutes, it will be completely different. There is an option, if you're directly invoking gzip, called --no-name, I think, that allows you to bypass this and make the MD5 sum stable. But because Strelka is creating these gzip files itself, there's no way for us to pass that option to gzip — it's doing it internally. So what we have to do is just test for the file's existence, and the only way to do that is to say, okay, this path has to exist. These are options that you can use: this YAML format is specific to pytest-workflow and has a bunch of options — again, the slides have links to them. You can test for MD5 sums.
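To see the instability for yourself: gzip embeds a timestamp (and often the original file name) in its header, so the same content compressed at different times can yield different MD5 sums — unless you pass -n/--no-name. A quick standalone demonstration, unrelated to Strelka itself:

```shell
# Same content, compressed at different times, with -n to strip
# the embedded name and timestamp -> byte-identical archives.
printf 'hello\n' > a.txt
cp a.txt b.txt
gzip -n -c a.txt > a.gz
sleep 1
gzip -n -c b.txt > b.gz
md5sum a.gz b.gz
```

Drop the -n and the checksums can differ between runs, which is exactly why module tests fall back to existence checks when the tool gzips its own output internally.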
You can test for file contents, and you can test for a bunch of other things — including whether files don't contain certain contents — so it's quite flexible. But for now we know we can't use anything other than a file-existence test here, so we'll just save that. That should be it. Just before you submit any pull request to nf-core/modules, it's always worth testing again. What you can do, like I mentioned, is use pytest locally, and we've got a command for that: if you've installed pytest-workflow in your environment, like I showed you before, you can directly call the module by its tag in the command, and it will test just that module locally for you. There are some environment variables here — I think this one is specific to the template, specific to Singularity, and there's this NF_CORE_MODULES_TEST one, which might disappear in the future as we iron things out, but for now it's required for the new versions syntax that we've added. If we run this command, it will run pytest, and pytest will invoke these Nextflow commands, generate the output files, and then test whether they match the ones in the test YAML that we've just created. So now everything looks great and all of it is passing. This is an ideal-world scenario — it won't always be like this, so just a word of warning: you could very well see a lot of red here. But it's a nice way of locally troubleshooting problems with modules before you push them, just by running this simple command. And if there are any issues, you can go into the work directory for the module, like here, and you'll have logs: the error log (which is empty because it passed), the output log, which is just the output of Nextflow running, and then the actual outputs the module generates — here, including the versions YAML, which just records the Strelka version.
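Putting those pieces together, a pytest-workflow entry typically looks something like this — the paths and the checksum placeholder are illustrative, and the gzipped outputs only get an existence check for the reason explained above:

```yaml
- name: strelka somatic
  command: nextflow run ./tests/modules/strelka/somatic -entry test_strelka_somatic -c tests/config/nextflow.config
  tags:
    - strelka/somatic
  files:
    # gzipped by the tool itself, so existence check only
    - path: output/strelka/test.somatic_indels.vcf.gz
    - path: output/strelka/test.somatic_snvs.vcf.gz
    # index files are stable over time, so they can carry MD5 sums
    - path: output/strelka/test.somatic_indels.vcf.gz.tbi
      md5sum: <md5-generated-by-create-test-yml>
```

You would then run just this module locally with something like `PROFILE=docker pytest --tag strelka/somatic --symlink --keep-workflow-wd`; the exact environment variables depend on your setup, as described above.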
So that's another way you can troubleshoot. Based on the slides: we've looked up keys for existing test data, and there is documentation for adding test data too. Not all test data exists already, so you may need different test data for your module; if you have any questions, ask us on Slack, or contribute directly to nf-core/test-datasets on its modules branch — the docs cover how to do this, along with the pytest-workflow options: all of the different ways you can specify, in that test YAML, how to test for different things. It's always a good idea to test with pytest locally, and you can also run nf-core modules lint, which checks the module against the best practices we've got in the template — I'll skip over that for now. Great, so we've now got a module, and we need to contribute it back. What we need to do is stage those changes — I could do this in VS Code as well, but I'll do it here. This is just adding the module to Git and to the internal registry file that we've got for pytest. Since this is the first time we're adding the module — and I should have done this before I started — I'll assign myself to the issue for this module, which I will do now, then push the changes to my fork of the modules repo. Now I can create a pull request to master that closes the issue. Those tests that we set up will also be run here: pytest-workflow will figure out that only one module has been changed, and so it will only run the tests for Strelka, hopefully. We've only changed five or six files — the functions.nf is fairly standard, and again will disappear soon. This is the module that we've just written; this is the meta.yml, which still needs a bit of work, but it's just so I can show you what's going on; this is the pytest config with strelka/somatic added; these are the tests that we've just added; and this is the YAML file that tests the outputs.
So, in the case of the TBI files, they won't change over time, and so you can use md5sums and such for those. And these tests will also now be run here. You can see that we've got a bunch of things going on: the module is being linted; we've got code linting for Markdown and for EditorConfig, which checks for spaces and other weird stuff, so it just keeps the code tidy. You can also see whether there are any discrepancies in the way that the module runs between these three profiles, and sometimes you do see discrepancies, so it's great to iron them out. Generally, the discrepancies are between Docker and Singularity versus Conda, because a Conda environment can change over time and it's not entirely reproducible, it's not containerized; so we do see tests passing for Docker and Singularity but failing for Conda. So great. Once you're happy with this pull request and everything is passing, what you can do is add a label here called "Ready for Review", and you can even request reviewers if you're working with particular people. It would be great if people contributed more to reviewing and stuff. I know it's probably a bit of a learning curve and you need to know what you're doing first, but the more people that review, the quicker the wheel will turn on this type of stuff. If you have any questions, you can ask us on Slack, or you can tag us here; I think there's a modules team or some sort of team here that you can tag people with. And so, as you can see in the pull request now, it's ready for review, and there are a bunch of others that are ready for review too. It just makes things more visible: if someone wants to come in and review a bunch of pull requests, they can just go in and do that straight away. So these are still running; let's go back to the slides. Get the tests passing on the pull request, and mark it ready for review.
You'll hopefully get some reviewer comments as to what to do next, and that's pretty much it. I mean, this has been a relatively painless addition, where we haven't seen many red flags popping up, but I think you get the gist. If there are any, then you would just need to change the tests, or play with the tests, or add the appropriate test data. And once you've done it once or twice, I think you'll get the idea of how to do this. So yeah, there's a bunch of stuff coming up as well in terms of the way that we deal with DSL2, and I created a prototype pull request to show you what will be changing. We won't need the functions.nf anymore, and because a lot of this stuff is being imported from functions.nf, we won't need these function calls or this include statement, and we won't need these publishing options anymore. Instead, with what Mahesh has been working on for the rnaseq pipeline, porting the entire pipeline end to end using native Nextflow syntax, we can use the ext directive to pass options directly from a config file, like the modules config, where you use withName with, say, strelka/somatic. So you have, for example, task.ext.suffix, or task.ext.args in the case of passing module arguments. They then come directly from the config to the module, which means you don't need to initialize these options all over the place, in your modules, sub-workflows, all of those locations, so it will clean up the syntax quite a lot.
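As a rough sketch of where this is heading (the selector name and option values here are hypothetical, and the final conventions may differ since this is still a prototype), the per-module options would live in configuration and be read inside the module via task.ext:

```groovy
// Hypothetical modules.config sketch for the prototype ext-based syntax.
process {
    withName: 'STRELKA_SOMATIC' {
        ext.args   = '--exome'     // extra tool arguments, read in the module as task.ext.args
        ext.suffix = '.somatic'    // output-name suffix, read as task.ext.suffix
    }
}
```

The module's script block then just interpolates `${task.ext.args}` into the command line, instead of receiving an options map through an include statement.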
I also heard today that the fix for the withName issue we had has been merged: when using withName in a config, it was incorrectly reporting a warning, and fixing that will be awesome, because if we're using withName for everything and you get a bunch of warnings, that would be confusing. Hopefully that's been fixed now, so we can move on to the next step of using it. So yeah, keep your eyes peeled, things will change, and they're evolving for the better. And here's a link to the prototype PR and a summary of the changes. Again, these links all work, so have a go at clicking on them, and I'll try and share them with you. If you have any questions, use the standard communication channels: Slack, GitHub, Twitter, YouTube. Thank you all for listening, and for your contributions, and for getting us this far. Hopefully see you at the hackathon in a couple of weeks, and hopefully this helps to get your contributions into nf-core/modules. Looking forward to them. Thank you.