Hello, everyone. Welcome to today's bytesize talk. My name is Franziska Bonath and I'm today's host. Before we go into the talk, I would like to highlight an event. As you all know, we had a hackathon not long ago, which was hugely successful, and good news: there will be another one coming up in March 2023. This one will be a completely virtual event, but we are planning to have local communities come together and join forces. So if you want to host one of those local sites, please contact us, or add a pull request to the website where you can add your own site. So, enough of announcements. Now I would like to hand over to Phil, who's giving today's talk about configuring linting, right?

Yes, I am indeed. So this is me stepping in, doing a bit of a last-minute talk again, so sorry for the late advertisement about this one and about what the talk topic would be. Actually, I've been telling everyone for a couple of weeks that we were running out of talks, but we only really realised yesterday that we didn't have anything scheduled for today. So we had a look through a list of suggestions, and this was one which had been requested a couple of times. So I thought I'd jump in and try to talk everyone through nf-core linting: what it is, how it works, but most importantly, how to configure it. This is going to be particularly interesting both for people building nf-core pipelines and for people working with nf-core tooling outside of the nf-core ecosystem, where maybe some of these tests are not really relevant to you. But it's good for everyone to understand how it works and how to configure it. And in the spirit of last-minute decisions, I'm going to be live-demoing everything, so my apologies in advance if everything breaks horribly, but let's see how it goes.

Okay, hopefully you can all see my screen. Wonderful. First things first, just a reminder about where the documentation for all of this is. This is the nf-core website, and if you head over to the tools page up at the top here, you'll find the section for nf-core lint. This is the documentation, and it pretty much tells you everything I'm going to tell you. So if you get distracted by a dog or a neighbour or something, or if you forget everything else I'm going to tell you, then just remember this: there is documentation on the website, and you can read it and it explains how everything works.

With that said, let's jump in, do some demo and walk through what we're going to be talking about today. Just before we started recording, I created a new pipeline. I just did `nf-core create`, which made me a new, empty pipeline, and then I updated the modules and cleared out some of the template stuff, so we're up and running with a clean pipeline. If I do `git log`, you can see I've just got two commits from when we started preparation for this talk, just resolving the lint warnings.

Now, what is code linting? Basically, a code linter has a set of rules about how code should look and how it should work, and it checks the code and gives you passes or failures in a series of tests. Typically, when you talk about code linters, you're also talking about code formatters. For example, you can have a JavaScript or JSON code formatter, or a Python code formatter, and these have linting tests where they look at the code and can also reformat it for you. We're not talking about that today.
The nf-core linter doesn't do anything to do with code formatting. It's just a set of rules, and a way to inspect code for standardisation. Linting has been around in nf-core since the very start of the project, and it's a fairly crucial part of how we work that allows the community to scale. Because we do so much work with peer review, where other people are reviewing code that you're writing, having a set of automated tests means that we can be a bit more confident that things are adhering to the principles and not breaking stuff, without having to check really, really carefully. By automating the things that we are commonly asking for, it streamlines this process of code review and makes the general code quality better. And this is pretty much how the nf-core lint tools have evolved over time: it started off with a couple of minor things, and every time we've come across the same thing a few times in a pull request, we say we should write a lint test for that, so we can automatically test for this thing again in the future.

Another thing that's really important about it is that it makes sure that everybody stays up to date with all the latest guidelines and rules for nf-core, because these change over time. We have updates in the nf-core template, and we do template synchronisation and all this stuff, and that rolls along. What happens is that if you keep running nf-core lint on your codebase, then over time, as the tooling updates along with the template, you'll start to get failures where before it was passing. That's because it checks the code in your pipeline against the template, and in some places it says this should be the same as the template, and it's either out of date or you've edited it, and that's a bad thing. So it also kind of forces everyone to stay up to date and in sync.

So that's why we have the tooling, and that's why we have nf-core linting. If I do `nf-core --help`, you'll see that it's one of the commands here, and the basic one is `nf-core lint`. You can also call it specifically for modules, `nf-core modules lint`, and that calls just a subset of those tests, but generally speaking you just do `nf-core lint` and it checks everything. My current working directory is the root of the pipeline, and here it's run 182 different tests; they all passed and nothing failed, and that's great.

When I push commits and open pull requests on GitHub, we have automated continuous integration (CI) tests, and they run the same command on the code. That means that when you open a pull request, those tests automatically run and you get a green tick or a red cross saying whether this is good or bad. We have different statuses that the tests can have: they can pass, they can fail, which obviously is bad, and they can give warnings. With warnings, the CI doesn't actually fail, so you'll still get a green tick, but it tells you in the log text that there's something which is maybe not ideal. Hopefully, when you create a pull request, you also get a little comment automatically added which summarises the results in the body of the pull request, to give you visibility, especially for these warnings which might otherwise fly under the radar. What we're particularly interested in today is the category of tests which are ignored, and I'm going to come on and tell you how to ignore lint tests. So first, let's make something fail.
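Just to recap the commands so far before we start breaking things, here's a rough sketch of the setup, run from a terminal. The pipeline name, description and author here are made-up placeholders, and it's worth checking `--help` on each command for the exact flags in your version of nf-core/tools:

```bash
# Scaffold a fresh pipeline from the nf-core template
# (name, description and author are hypothetical placeholders)
nf-core create --name demo --description "Lint demo" --author "Your Name"
cd nf-core-demo

# Run the full suite of pipeline lint tests from the pipeline root
nf-core lint

# Or lint just the modules, a subset of the full suite
nf-core modules lint --all
```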
So if I open up VS Code here, this is the pipeline that I've just created, and there are lots of different tests which I could make fail, but I'm going to start off by doing something very simple. We've got the README file here, and one of the tests checks that nf-core READMEs look like nf-core READMEs, and all nf-core READMEs have these badges at the top. If I go to rnaseq on github.com and look at the main README that you see when you load up the repo, it's got these badges along the top, which say it works with Nextflow and you need this version, or whatever. So we have a lint test that checks that these badges are there, correctly formatted and consistent. For example, one says the minimum version of Nextflow you need; you also define that in the config file down here, and it's quite easy to let these two fall out of sync. So we have a lint test which checks these badges in the README. Simple.

So I'm going to break it by deleting those. It's gone. The markdown file is still totally valid, it just doesn't have any of the badges at the top. So, proof of principle, nf-core lint now should hopefully fail. Okay, it didn't fail, it gave us a warning. Close enough. Now, something really important: when you see these warnings, you get a summary text, here saying it didn't find a minimum version badge. But it might be that you don't completely understand what the message is, or it's a bit unclear. Most terminals will handle these hyperlinks, and where it says "readme", that's the identifier for this test. I'm on a Mac, so if I hold command (I think on Linux and Windows it's control), I can click that, and it's going to open up a tab in my web browser and go specifically to the test with the ID called readme. This is where we have longer-form documentation about this specific lint test, and here you can see it says it needs to have an extra badge, and it should look like this, and it should have a Bioconda badge, and everything. So this is where the long-form documentation about these lint tests lives. You can also find it if you go to tools and then somewhere under tools, I always forget. Anyway, you follow that link here and it tells you all about it.

Notice that this is quite specific: it's not about editing the whole file, it's just this part of the README file which is checked. There are other ones as well. If I go into, let me see, assets/multiqc_config.yml, then I should get, I think, a failure about editing the MultiQC file. And... no. Okay, I managed to do something right there. But anyway, there are certain files where you can get a failure for putting stuff at the start, but if you stick it at the end, it will be valid, and things like that. That test is, I think, called files_unchanged here. So I should have picked a part which is actually tested, but there you go. For these ones you can add extra stuff; see the .gitignore file, for example. So, yeah, these are all different lint tests, and they're documented here.

Obviously, I could fix this README file by putting the badges back in, re-running the template, et cetera, or reading the documentation and seeing what's required. But in this case, maybe I don't want to do that. Maybe I'm building outside of nf-core, and I don't want to have the nf-core badges at the top, and I want to do my own thing. And that's fine. How do I go about doing that?
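One quick aside before answering that: when you're iterating on a single failure like this, you don't have to re-run all 180-odd tests every time. If I remember the tooling right, `nf-core lint` will take one or more test IDs with a `--key` option and run only those, but double-check `nf-core lint --help` for your version:

```bash
# Re-run only the readme and files_unchanged tests (flag name from memory)
nf-core lint --key readme --key files_unchanged
```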
So I need to ignore this test, and the way I do that is with the config file for nf-core that you get when you run `nf-core create`: this one, `.nf-core.yml`. It's a hidden file, so depending on how you're looking at your files, you may need to show your hidden files. By default, we just have this; it just says this repository is a pipeline, which is to do with working with modules. But I can add a new key in here called `lint`, and under `lint`, I'm going to give the name of the test that failed, which is called `readme`, and I'm going to set it to `false` and hit save. That's all it is. I'm going to rerun linting now, and now it says one pipeline test ignored; it just says it didn't run this test. Because of that, nothing failed and everything's fine. That's basically all there is to it. If there are any of these lint tests that you don't like, where you are sure that your code is doing what you want it to be doing, then you can just say ignore this test, and it will be ignored.

This is quite a blunt tool, though: I've just ignored this entire lint test called `readme`. So for example, let's do `files_exist` here. I'm going to delete the `.editorconfig` file, and of course then it should throw a failure, because this is a required file and it's not there. So it takes me here. I could do `files_exist: false`, and that will disable the entire lint test, and that's fine, that makes everything work. But it's a bit of a blunt tool, because now it's not checking for the presence of any files. It's allowing me to delete that one file, but it's now not checking for the presence of any of these other files either, which is maybe kind of overkill, because it was just that one file that I care about. So some of the lint tests allow you to provide a bit more information. In this case, what's the file I deleted? `.editorconfig`. Instead of just saying `false`, I can actually give the name of the file there, and I think it has to be a list, like that. Now, when I run this again, hopefully it should still pass, but now it ignores just that one specific file. This isn't possible for every single one of the nf-core lint tests, you have to check the documentation. But it certainly works for `files_exist` and `files_unchanged`, which are probably the two which come up the most frequently. You can specify exactly which files you want to ignore, and then you keep the rest of the lint test, which checks all the other stuff, which is generally a good thing (the full config from this demo is sketched at the end of the transcript).

Right, 13, 14 minutes, that's my 15 minutes. It's a very short and sharp bytesize talk this time, very specific about this one thing. But hopefully that's helpful, and hopefully this will be a useful resource for anyone coming back to this in the future asking about how lint configs work. If you have any questions or feedback or suggestions, then please shout, and I'll do my best to answer any questions if there are any now. Back to you.

Yeah, hi. So everyone can unmute themselves if they want to and just ask their question right away. I don't see anything coming up for now. I mean, you don't have to ask now; obviously, as usual, you can come to Slack and ask your questions either in the bytesize channel or in the help channel. As usual, I would like to thank the Chan Zuckerberg Initiative for funding the talks, and of course Phil for this very-short-notice talk today, and the audience for listening.
I would also just say, on Slack there's a channel called #linting, I think, which is the place to go for any specific questions about linting, or when you're generally confused. Yeah, I should have known, there's always a Slack channel for everything. Thank you, Phil.
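For reference, here is roughly what the `.nf-core.yml` built up during the demo ends up looking like. This is a sketch reconstructed from the talk, with comments added; the keys under `lint:` are the test IDs reported by `nf-core lint`:

```bash
# Write the lint configuration from the demo into .nf-core.yml
cat > .nf-core.yml <<'EOF'
repository_type: pipeline
lint:
  readme: false        # skip the README badge checks entirely
  files_exist:
    - .editorconfig    # files_exist still runs, but ignores this one file
EOF

# The ignored tests are now listed under "Tests ignored" instead of failing
nf-core lint
```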