Hi, welcome to my talk. Today I'll be talking about the responsible use of Node.js and open source software at an enterprise-level company. As you can see from my logo, I work for Capital One, so heavily regulated, and a lot of this will be applicable to any highly regulated industry.

First, about me. I've been at Capital One since 2014. I'm a distinguished engineer. My day job is as a solution architect in our commercial banking space, over Capital One's underwriting and loan servicing platform. My side job is that I'm also the intent lead for Node.js for the enterprise, and I'll get more into what that means a little later in the talk. But in general, I help set policy and governance over how we utilize Node.js and the JavaScript and TypeScript ecosystem within the bounds of Capital One. Other than that, I'm pretty passionate about developer experience and engineering standards, and I try to model our governance policy with developers in mind.

Capital One, if you don't know, is a US-based bank. We have a presence here in Canada and also in the UK. Our main business is credit cards ("what's in your wallet?" and the commercials that go along with that), and we also have retail banking, commercial banking, and auto finance. We are a tech-first company, and our mission is to change banking for good.

We are committed to open source software; that's why I'm here. We made a declaration back in 2014 that we would be open source first. So we've gone through a journey: we used to use all proprietary, internal-only software, and in 2014 we shifted to an open source way of building our software. We have sponsorships in FINOS, the OpenJS Foundation, the Python Software Foundation, the Continuous Delivery Foundation, and the Cloud Native Computing Foundation. In these foundations, we try to participate and contribute back when we can. Probably our biggest contribution back is in FINOS, with a project called Morphir that one of my colleagues maintains. We also contribute to open source software in general, though you probably won't see how we do that, because right now we do it through personal accounts, and we're looking to continue that. And we have a couple of featured open source projects, like Data Profiler, and Hygieia, which we announced several years ago at this point.

As for our tech transformation journey: like I said, we started the open source push in 2014, and that's when we started to modernize our entire architecture around RESTful APIs. We declared we were going all in on the cloud in 2016, and by 2019-2020 we had actually completed that. So we don't have any data centers anymore, and we don't have any mainframes that we host. We've modernized a lot along the way, and being cloud first, we're all in on AWS at this point.

Why this is important: as we've gone through our open source journey, we've realized that in the past couple of years things have drastically changed, particularly in the Node.js and npm space. I think you've all seen the things that have been happening in the ecosystem, and these are just some headlines from the past few years of the attacks happening within our specific ecosystem. If you attended Darcy's talk a couple of sessions ago, he went over some of these attacks; I'm going to briefly go over them again. We're looking at attacks that fit into these types. There's cryptojacking, where a malicious actor adds code into a package or something like that.
The example here is ua-parser-js, if you're familiar with that one. When it gets installed or executed, it starts using your computing resources to mine cryptocurrency.

There's typosquatting, which I think we've all learned about and know about these days because everybody mentions it: a minor typo in a package name gets you a package you didn't expect. react-model versus react-modal is a good example here.

There's also the dependency confusion, or substitution, attack. I'm going to go into this one in a little more detail, because it's an attack that enterprises in particular need to be aware of, and I'll get into it in a second.

Data harvesting is when code is executed or installed and scrapes the local computer environment and sends that data back to the attacker. As you know, your environment could hold passwords, keys, server names, or connection strings that could eventually get outside your walls and be used against you.

There's also ransomware; I think we all know what that is. Basically, you're blocked from executing code until other conditions are met, usually by paying someone. There's hijacking, where somebody actually takes over a package and distributes a new version with malicious intent. And then there's the denial-of-service class of attacks, which I'd put down as authors attacking their own packages: a bunch of authors have recently published versions that block usage, to demand payment or to make some other point. Some examples here were node-ipc, and colors is the classic example. faker is another example, by the same person, I think.

So, I mentioned the dependency confusion attack. The reason this one is particularly interesting for an enterprise is that an enterprise will usually have a mirror sitting between the public registries and its developers. This is standard practice; it's where all your artifacts are stored. And if an attacker assumes this setup exists at a large enterprise, they can use it to their advantage.

The simple way this gets exploited is that the attacker finds a reference to a package name. This could be in a package.json that was accidentally checked into public GitHub, outside your private repos, and the package name would have to be unscoped. They search for that unscoped package, and if they find it doesn't exist on the npm public registry, they know they can get a new version in there based on this setup. So they publish a modified package with whatever code they want in it, and increment the version high enough that the mirror will automatically pull it. When a developer queries the local mirror for the latest version, the mirror says, hey, I don't have the latest version; there's this awesome new version on the npm registry, I'm going to go download it. And it just blindly downloads it. The developer gets that version, which is now also cached on the mirror. Once the developer has installed it, either during install or during execution, the malicious code will execute.

A lot of fixes have gone into this recently, but it's still a good one to be aware of, because the exposure still exists if you don't know what you're doing with your mirror and with your internal publishing of packages. So that's a high-level view of some of the attacks.
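Since the unscoped internal name is the crux of that exposure, a simple guard in the pipeline can catch it early. Here's a minimal sketch, assuming all internal packages live under a single company scope; `@mycorp` and the package names are placeholders, not anything Capital One actually uses:

```js
#!/usr/bin/env node
// Minimal sketch of a pipeline guard against dependency confusion.
// Assumption: every internal package is published under a company scope.
// "@mycorp" and the names below are placeholders for illustration only.
const fs = require('node:fs');

const INTERNAL_SCOPE = '@mycorp';
// Base names of internal packages that must never appear unscoped.
const INTERNAL_NAMES = new Set(['payments-client', 'auth-middleware']);

const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));
const deps = { ...pkg.dependencies, ...pkg.devDependencies };

const offenders = Object.keys(deps).filter(
  (name) => !name.startsWith(`${INTERNAL_SCOPE}/`) && INTERNAL_NAMES.has(name),
);

if (offenders.length > 0) {
  console.error('Unscoped internal package names found (dependency confusion risk):');
  for (const name of offenders) console.error(`  ${name} -> use ${INTERNAL_SCOPE}/${name}`);
  process.exit(1);
}
console.log('All internal dependencies are properly scoped.');
```

Scoped names help here because only the owner of the `@mycorp` organization on the public registry can publish under that scope, so the mirror can't be tricked into pulling a higher public version of the same name.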
Now I'm going to go into more of what we've learned over the years about reducing our risk by being well managed in this space. While this talk is specific to Node.js, most of it is language agnostic. There are some details specific to Node or npm, but in general you can take a lot of this back to your Python, Java, Go, or whatever language you use.

The things we'll be going through: being intentional about your usage of Node.js versions; using what we call golden images, which are your container images; and setting up centers of excellence within your company to help drive a culture of responsible use. You've probably heard too much about SBOMs at this conference, so I'll have just a quick slide on SBOMs. And then the last two bullets are about what you need to do for your developers, because, as you know if you're a developer yourself, a lot of dealing with vulnerabilities and dependency updates is real toil. It adds value to the project because you're reducing your attack surface, but it's not a feature you're going to make money on. So the last two will cover how you can shift that vulnerability detection left and, hopefully, automate away some of that toil. And keeping your developers trained is obviously very important here, because the security landscape changes constantly.

One thing we've found particularly useful is having metrics and associated dashboards to raise the visibility of our security stance. We constantly have reports sent to us on vulnerabilities, on how well managed we are, on our time to resolution, that kind of thing. Having that set up is pretty important for making sure you are, in fact, being well managed.

Being intentional about Node.js version usage is the first thing I mentioned. We follow a basic lifecycle for bringing versions into Capital One, with four basic stages. There's upstream, which is the Node.js project itself. There's an adoption phase, where we bring a version in for use within the bounds of the enterprise. Then there's our support phase, where we continue to use it and make sure it stays in a good spot for use. And finally there's the deprecation phase, where versions that go end of life have to be discontinued.

We call the rules for this our maturity criteria, and the criteria basically follow best practices Node.js has recommended over the years. For production environments, we only consider LTS versions, both active and maintenance; we don't really use current. Current is only used to start preparing for a version going active, so testing our internal tooling, our scanners, that kind of thing. We do tend to prefer active whenever possible, because that's the latest and greatest and it's getting the new features. We don't allow odd-numbered versions to be used, and when people who are using them ask for help in our Slack channels, we kind of say, why are you using an odd version, especially when it looks like it's for production code. They won't be able to release anyway, and I'll get into that. We do accept security releases on the day of release, because some of those fix zero-day vulnerabilities, so we take those immediately and get them out for our developers to adopt.
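As an illustration of how criteria like these can be enforced, here's a minimal sketch of a CI gate. The version floors are made-up examples; a real gate would pull them from a central policy source rather than hardcoding them:

```js
#!/usr/bin/env node
// Minimal sketch of a "maturity criteria" gate: LTS majors only, and at
// least the latest approved release in each line. Floors are illustrative.
const FLOORS = {
  18: [18, 17, 1],
  20: [20, 9, 0],
};

const current = process.versions.node.split('.').map(Number);
const [major] = current;

// Odd majors are never LTS, so they're not approved for production use.
if (major % 2 !== 0) {
  console.error(`Node ${process.versions.node} is an odd (non-LTS) major; not approved.`);
  process.exit(1);
}

const floor = FLOORS[major];
if (!floor) {
  console.error(`Node ${major}.x is not an approved LTS line.`);
  process.exit(1);
}

const lessThan = (a, b) => {
  for (let i = 0; i < 3; i++) if (a[i] !== b[i]) return a[i] < b[i];
  return false;
};

// Security releases supersede everything before them in the line.
if (lessThan(current, floor)) {
  console.error(`Node ${process.versions.node} is superseded; minimum approved is ${floor.join('.')}.`);
  process.exit(1);
}
console.log(`Node ${process.versions.node} is approved.`);
```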
When we take one of those security releases, we supersede all previous releases in that LTS line. For example, I don't know what version numbers we're at today with Node, but if 18.17.1 came out and we had already allowed 18.16, 18.15, and 18.14, we would immediately deprecate all of those and only allow the new 18.17.1.

We tend to lag minor releases by two weeks, meaning we don't take a minor release right away. A minor release normally contains features that are supposed to be backwards compatible, but we've been burned once or twice by that. Node has gotten much better since then, but we still lag by two weeks, just to make sure nothing critical comes out of that release. Sometimes Node.js will follow up with a quick dot release because of some issue found in a library or something like that; that's why we allow for those two weeks of lag.

For moving to a major release, when we go from, say, 18 to 20, we provide 20 as available for use within the enterprise on the day its LTS is declared, and we rely on the lag that's already built into Node's own release structure. Knowing that the version was current and baked for a while before being declared active, it's usually pretty stable to go with at that point. However, note that some libraries in the community can still lag; they may not work, or you'll find issues with them. In those cases you have to do your testing, obviously. And all of this assumes we're testing in every single phase, even with patch releases. It's not like we're blindly doing things. I'll speak for my teams: my teams aren't blindly doing anything, because I make them test.

And then we deprecate Node.js on the Node.js schedule. Node 14 went end of life on April 30; when people came to work on May 1, it was not available for use anymore. Obviously, at an enterprise, as you can imagine, that causes a lot of consternation. There are a lot of people saying, I didn't know this was going to deprecate, I didn't know I had to move. And we communicate it in so many different ways. My feeling, and this is my personal feeling, is that if you're in the Node.js ecosystem, this schedule is well known years in advance, and if you're a responsible developer, you know when a version is going end of life; it shouldn't sneak up on you. Obviously, in an enterprise, as we all know, that is not the case. People delay, delay, delay.

The next thing we practice is what we call golden images. Golden images are Docker images, essentially, that are centrally owned and built by some of our DevOps teams. The goal of a golden image is to have no known vulnerabilities at the time it's built. So if you're basing your image on Ubuntu, you're at the latest Ubuntu patch level with zero vulnerabilities detected by scanners. Obviously, there's still zero trust here, so you still have to worry about it a little bit. But in general, once our DevOps team sees a CVE come out on a base image like Ubuntu or Alpine, they're pretty quick to get the fix in once it's ready. The images are centrally hosted, and all of our dependent images have to derive from them; we have checks in our pipelines that make sure people are deriving directly from one of the golden images. Like I said, they're continuously updated for base-OS vulnerabilities, and they're typically pretty bare bones.
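For that pipeline check I mentioned, here's a minimal sketch of what verifying the FROM lines could look like. The golden registry host is a placeholder, and real multi-stage builds have edge cases this glosses over:

```js
#!/usr/bin/env node
// Minimal sketch of a pipeline check: every FROM must reference either a
// golden base image or an earlier build stage. Host name is a placeholder.
const fs = require('node:fs');

const GOLDEN_PREFIX = 'golden.registry.internal/base/';
const dockerfile = fs.readFileSync(process.argv[2] ?? 'Dockerfile', 'utf8');

const stageAliases = new Set();
const offenders = [];

for (const line of dockerfile.split('\n')) {
  const match = /^\s*FROM\s+(\S+)(?:\s+AS\s+(\S+))?/i.exec(line);
  if (!match) continue;
  const [, image, alias] = match;
  // Multi-stage builds may FROM an earlier stage; that's fine.
  if (!stageAliases.has(image.toLowerCase()) && !image.startsWith(GOLDEN_PREFIX)) {
    offenders.push(image);
  }
  if (alias) stageAliases.add(alias.toLowerCase());
}

if (offenders.length > 0) {
  console.error('Stages not derived from a golden image:');
  for (const image of offenders) console.error(`  ${image}`);
  process.exit(1);
}
console.log('All stages derive from golden images.');
```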
Being bare bones means they don't have much on them. They'll have the basic tooling that runs our scanning, but we're not installing special observability software or anything like that; it's the bare minimum. That way, they're easy to update and keep updated, and you don't have to rebuild them just because your metrics vendor released a new version of their particular agent. It's up to the teams to work their own dependencies into their derived images.

For each language at Capital One, we have a golden image for that language's runtime, updated per that language's maturity criteria. For Node.js, it's the maturity criteria I mentioned; other languages may have a different schedule or versioning mechanism. .NET is Patch Tuesdays: we just know that after Patch Tuesday there will be a new .NET image. And with all the tooling and scanning embedded in them, the images are configured for enterprise use and compliance at that point.

There's an effort, I think, in the broader community to look at distroless images, and the reason to look into those is that they really help you decrease the vulnerability surface and the associated maintenance of containers. Container maintenance can approach the same burden as dependency updates in a package.json if you're not careful, and distroless images keep that surface area very, very lean.

We also set up what we call a center of excellence around our languages, including Node.js. At the beginning, I said I'm the intent lead for Node.js and JavaScript at Capital One, and this is my group. We meet on a regular cadence, currently monthly, and we'll probably move to a quicker cadence soon. The main goal of the center is to govern and recommend best practices and policies for Node.js, and by extension JavaScript, within the enterprise. So we'll come up with the best way to publish into our internal mirror, document that process, show how to integrate it into our internal pipelines, and so on, so that people don't have to go discover how to release a Node.js application. Basically, it's a pattern as code: they pick it up, configure it for their particular use case, and go. (I'll sketch a tiny example of that kind of guardrail in a moment.) We may also have articles like which version manager to use, why we'd use nvm over something else, with the reasons behind it, and that kind of thing as well.

The COE itself is comprised of subject matter experts from across the company. These can be people who are simply passionate about JavaScript or Node.js, and we also require at least one member from each of our major applications or platforms that make heavy use of Node.js or JavaScript. That's because when we come up with a new policy, they're the ones directly affected, so we want them to have input into the policy-making process as policy needs to be made. Policy could be the maturity criteria I talked about, libraries we're discouraging use of based on findings, or just keeping up with the industry on standards.

There are some other main responsibilities, and there's a bunch of them. We approve Node.js versions as they come in; that's part of the maturity criteria. I wish I could say it's automated. It's not, but we'll get there someday. Like I said, we set policy. And we drive what we call our enterprise tech backlog items around Node.js.
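As one tiny example of that pattern-as-code idea, the shared template could ship a publish guard that refuses to push an unscoped package or one pointed at the wrong registry. A minimal sketch; the scope and registry URL are placeholders:

```js
#!/usr/bin/env node
// Minimal sketch of a prepublishOnly guard a shared template could ship.
// "@mycorp" and the registry URL are placeholders, not real values.
const fs = require('node:fs');

const REQUIRED_SCOPE = '@mycorp/';
const INTERNAL_REGISTRY = 'https://npm.mirror.internal/';

const pkg = JSON.parse(fs.readFileSync('package.json', 'utf8'));

const problems = [];
if (!pkg.name.startsWith(REQUIRED_SCOPE)) {
  problems.push(`package name must be scoped under ${REQUIRED_SCOPE}`);
}
if (pkg.publishConfig?.registry !== INTERNAL_REGISTRY) {
  problems.push(`publishConfig.registry must be ${INTERNAL_REGISTRY}`);
}

if (problems.length > 0) {
  console.error('Refusing to publish:');
  for (const p of problems) console.error(`  - ${p}`);
  process.exit(1);
}
```

Wired into package.json as `"prepublishOnly": "node scripts/publish-guard.js"`, it runs automatically on every `npm publish`.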
To give an example of those backlog items: if we were to have a Log4j-type situation with Node.js, that enterprise tech backlog process is how we would effectively get the remediation work to the teams. We promote safe and responsible usage of everything and make sure people know how to identify open source modules that are better maintained than others, so that when our developers go to select a package, they're well educated and pick one that will keep working for the foreseeable future, depending on what it is. We also promote contributions to Node.js itself and to open source modules external to Capital One; for any modules we're using, we want to encourage giving back to the community, and this group helps with the process of identifying the projects that should get attention from our developers. And there's a lot of work with other groups within the enterprise to drive standard ways of consuming and utilizing Node.js. As you can imagine, at Capital One we use a variety of languages, and each language has its own little quirks or exceptions to how it runs in a pipeline. This group is the one that interfaces with our tooling, DevOps, and CI/CD groups to make sure we're doing it in a way that's industry acceptable, I would say; not always industry standard, but industry acceptable within the enterprise walls. And the last thing is that we own Capital One's presence on the public npm registry.

This is that promised SBOM slide. We effectively treat the SBOM as a required build artifact, and it allows us to do many things. We can use it to provide a usage lineage of what's actually in use, which obviously assists in vulnerability detection. It also allows us to do audits over our software releases and provides visibility into which packages and technologies developers are choosing in their individual applications. We feed this into reporting systems so that we know, for example, how many applications are using Fastify, and we know exactly who owns those applications and where their GitHub location is, so that if there were a Log4j-type issue in this ecosystem, we could easily pinpoint who needs to make updates. It also assists in detecting unsanctioned use, which is when licenses don't match what we're required to use. A lot of this is also just caught by scanning tools anyway, but this is a secondary check. We're still learning our way through this one.

The other thing that's pretty important is educating your developers: continuously investing in their development around security best practices in the code that's written. Being aware of the OWASP Top 10 and their common remediations is important, as is knowing that the security landscape changes daily; that Top 10 list is a point in time, and the next big thing could be right around the corner. Obviously, not all developers are going to be familiar with these things, and it's up to that Node.js working group, the center of excellence, to help them identify and be ready for things like this as they come in.

And the last thing, which we drill into our developers more than anything: keep your dependencies updated. Probably the biggest challenge we have as an enterprise is convincing people that staying current is actually the easiest path.
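One low-effort way to make that drift visible is to lean on npm's own JSON output in CI. A minimal sketch; the exit-code handling is there because `npm outdated` exits non-zero when anything is behind:

```js
#!/usr/bin/env node
// Minimal sketch: report dependency drift using `npm outdated --json`.
const { execSync } = require('node:child_process');

let raw = '{}';
try {
  raw = execSync('npm outdated --json', { encoding: 'utf8' });
} catch (err) {
  // npm outdated exits non-zero when packages are behind; output is still on stdout.
  raw = err.stdout || '{}';
}

const outdated = JSON.parse(raw.trim() || '{}');
const names = Object.keys(outdated);

if (names.length === 0) {
  console.log('All dependencies are up to date.');
  process.exit(0);
}

for (const name of names) {
  const { current, wanted, latest } = outdated[name];
  console.log(`${name}: current ${current}, wanted ${wanted}, latest ${latest}`);
}
process.exit(1); // flag the drift so the build surfaces it
```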
Even getting a couple of dot releases behind is a bigger effort than being only one dot release behind, so just keep those dependencies up to date. In one of my groups, I actually said: any time you open a PR, the first commit is your dependency update, and right before you submit that PR, you do another dependency update. That guarantees that at that commit, you're at least up to date.

The next thing is how to evaluate packages before use. We don't like to constrain our developers to a set of specific dependencies; we allow them to look in the open source community, find what fits their application, and bring it in for use. So we train developers to use lookup services before adding dependencies to their projects, basically doing basic due diligence here: looking at the registry to see how many downloads a package has, going into GitHub to see the star rating, looking at CVE details. One thing we're pretty excited to investigate: Socket has safe npm right now, and we really want to look at how we could start using that, or a similar tool, within our development process, because it could really help developers do this more naturally as part of the normal flow of bringing in a dependency.

We also want developers to evaluate packages by their release cadence on GitHub. Are issues being cycled? Are pull requests being brought in? That sort of thing. There are a bunch of very popular packages out there that, by the way, if you look at them, have a very long history of pull requests sitting there for many months, and you're like, what? Same with issues: some of them you read and go, this could be a big deal, and sometimes you're like, nah, whatever. Having that gut feel for a package is a good thing too.

And then, obviously, use industry-standard tools to scan dependencies. A lot of that can happen during your PR process: many tools plug directly into GitHub, GitHub provides its own tooling for this as well, and there are a bunch of other vendors that will plug in and give you nice little reports in your PRs. One thing we do is block our releases based on CVE levels. If a CVE exceeds a certain threshold, that PR can't be accepted and you can't release until it's remediated. Obviously, some CVEs may not apply or that sort of thing, so we do have an exception process to get around some of that if it's a critical release. But in general, since we're a bank and we deal with people's money, we like to release with zero known vulnerabilities at the time of release, and we rely on our tools to tell us that more than anything.

And as we're using these tools, there's obviously the basic vulnerability and security scanning, but static code analysis is just as important. Linting can capture a lot up front, not necessarily related to vulnerabilities per se, but at least catching things that could bite you somehow later down the road. Like I said, we use the tools integrated into GitHub to block our PRs based on some of those scans, and we rely a lot on the auto-remediation some of those tools provide. Those tools can open PRs straight back into your code with fixes for some of those vulnerabilities or dependency updates, and that's one of the things that really reduces developer toil in the end. It's as easy as accepting a PR at that point, and developers like to accept PRs.
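The release gate itself can be as simple as reading the severity counts out of `npm audit --json`. A minimal sketch; the blocking threshold is illustrative, and a real version would hook into the exception process I mentioned:

```js
#!/usr/bin/env node
// Minimal sketch of a CVE-threshold release gate on top of `npm audit --json`.
const { execSync } = require('node:child_process');

const BLOCKING_SEVERITIES = ['high', 'critical']; // illustrative threshold

let raw;
try {
  raw = execSync('npm audit --json', { encoding: 'utf8' });
} catch (err) {
  // npm audit exits non-zero when vulnerabilities exist; report is on stdout.
  raw = err.stdout;
}

if (!raw) {
  console.error('npm audit produced no output; failing closed.');
  process.exit(1);
}

const report = JSON.parse(raw);
const counts = report.metadata?.vulnerabilities ?? {};
const blocking = BLOCKING_SEVERITIES.reduce((sum, sev) => sum + (counts[sev] ?? 0), 0);

if (blocking > 0) {
  console.error(`Release blocked: ${blocking} high/critical vulnerabilities.`, counts);
  process.exit(1);
}
console.log('No blocking vulnerabilities; clear to release.');
```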
The most important thing with all of this automation, though, is that you have to have testing in place, it has to be automated, and it has to be trusted. You can't have that one flaky test that sometimes works and sometimes doesn't and still trust your automation enough to take those PRs as they come in. On one of my projects, the way it was federated, one team would not take any of those PRs, so they ended up with stories in their backlog to deal with their vulnerabilities. Another team had decent tests and everything, and they were much faster at delivering at that point.

It's also important to know how to use npm and some of the protections it provides. Use only trusted packages in the ecosystem; a couple of those have been taken over in the past, like I mentioned up front, and I think the community self-regulates pretty well here, but using things that have a good community around them is very important. Use lock files and lock your versions; that's very important these days to protect against some of those supply chain attacks I mentioned up front. And have some monitoring around the network and process activity of your Node.js processes: on your dev machine, if you notice your CPU starting to run hot when your servers are just sitting there, you have to look into that and make sure nothing is doing things it shouldn't on your system. The big thing is: don't publish your private code on GitHub, accidentally or on purpose. This one's an interesting one, but it at least closes off that one supply chain attack. And for the packages you produce, apply your own scope; even your internal packages should be scoped at this point, internal as well as external. The OpenSSF has an npm best practices guide with a lot more detailed information on things to do here, and I'd highly encourage you to read it if you get a chance.

When you're actually publishing to the public registry, they've made a lot of changes that kind of protect us automatically. Two-factor authentication is now pretty much mandatory for almost everybody, so use that. Create and own your own organization on the registry; that's where you own your scope, and you can register both your internal and your external scopes up there so they can't be squatted on. And if you find that someone has your name and you can prove it legally, there are ways to reclaim it; npm has a process to get that back to you. And then, like I said, use scoped packages. That's basically it.

So, in summary: the vulnerability landscape is obviously changing constantly. Being well managed really helps reduce risk, and keeping your developers up to date on security practices and responsible usage is important. Like I said before, having reliable, automated testing so you can trust the tools that help reduce your developer toil is essential. Having that dashboard and monitoring in place will give you actual insight into how well managed you are. And if you're publishing packages, internally or externally, just knowing those best practices is pretty important. So that's all I have.

Thank you for your talk. One question. You mentioned that you educate your developers to make smart choices and do due diligence when it comes to picking dependencies, right? Beyond just the gut feeling, does it matter:
is it a factor whether an open source project is community-based or built by a company?

No, I don't think it matters; as long as it has support, that's what we're looking for. However, we just discovered one package that got removed from the registry and is hosted on the maintainers' own CDN, and we're encouraging people not to use that one at this point. I won't say which one it is, but the fact that they're not on the registry makes you wonder why they put it on their own hosted CDN and why they're not being a participant in npm itself.

Have you felt the need to move to a paved path with a platform at any point, or have the initiatives been more ad hoc, depending on team needs and priorities, versus centralizing with a more top-down approach?

I think the center of excellence is how we get those paved paths out there for our platforms and applications in general. Some were built years and years ago, so they have to do that one-off work, but anybody who comes up brand new has a good starting point, with templates and things they should follow right out of the box. We're starting to push a common Prettier format and a common ESLint config across the company, all that kind of stuff, so that teams don't have to argue about semicolons or tabs versus spaces. We just tell them: this is what you'll use from here on out.

Do you have a best way to communicate your decisions to everyone?

I wish. That is a challenge in the enterprise, kind of like what I said: people were surprised Node 14 was deprecated on April 30, and we had Slack reminders in every Node.js, JavaScript, TypeScript, React, Angular channel, you name it. It was in those channels as a reminder every Monday, Wednesday, and Friday. Obviously, too much noise. Our OSPO, our open source program office, does send out a newsletter, and we put announcements in there. And we're working with our CI/CD pipeline to add deprecation notices directly into the pipeline, so that builds will go yellow; they won't fail, but they'll go yellow initially, and then eventually they'd go red and fail. So we are working to help with that communication, and then with the enforcement of that communication.

Thank you. Well, thank you very much.