So welcome to this installment of What's Up with the Node.js Collaborators. Today I have Stuart, who is a Node.js collaborator and a member of the build team. Stuart, before we get started, can you tell us a little bit about yourself?

Yeah, sure. I'm a software engineer at Red Hat. I've worked on Node in the past, working on the IBM platform support and keeping those platforms going in the CI that we have, and also on the shared library support that is now part of the Node.js build system. More recently I've been working on the Adoptium project, which some of you may know as AdoptOpenJDK, releasing Java binaries. But I'm now starting to work more across both projects and seeing what I can do to make both of them better.

You mentioned you're part of the build working group, and I know you've been working on some updates to the infrastructure that powers our ARM builds. Do you want to tell us a little bit more about that?

Yes, absolutely. Both Node and Adoptium work with many different infrastructure providers, and ARM is one of the areas I've been looking at more recently. Arm runs an initiative called Works on Arm to accelerate the adoption of ARM hardware, and they partner with various companies. One of them is Equinix Metal, which both Adoptium and Node.js use. Equinix Metal has recently been retiring some of their older series of machines and replacing them with new ones based on the Ampere Altra. These have 160 cores each, so we've been switching over from a larger number of the older machines to a smaller number of the new ones; we don't need as many. That's a lot of cores, so we've been thinking about how best to make use of them: they've got huge amounts of RAM and huge numbers of cores, and most jobs in isolation can't make use of anything like that.
So what we ended up doing was thinking about the best way to make use of these. On the Jenkins system we could just give them lots of executors to allow them to run multiple jobs in parallel. The problem with that is you end up with port conflicts when you try to run the tests: if two test runs happen at the same time, they collide on the ports used by the connection tests. So what we've actually started looking at is making more use of Docker containers, each with a single executor, so that the TCP/IP stack is fully isolated and you don't run into those sorts of problems.

Yeah, it's cool that we're seeing some synergy in our providers across the projects developing the two different languages. It's great to see those companies sponsoring the infrastructure of some of these key runtimes as well. You mentioned Docker on some of the new machines. As you said, with 160 cores those are great machines to have, and we really want to take advantage of all the power we have. Using Docker for that is really interesting, so can you tell us a little bit more about how you're thinking we might do that and what you've been looking at?

Yeah, of course. Both Node and AdoptOpenJDK had small numbers of machines running Docker containers in the past. But, as you say, 160 cores is a crazy amount, so it made sense to use Docker a lot more on these machines and significantly expand the number of containers running on each host. We did some of that on the Adoptium project and we're now doing it for Node.js as well. On the first Altra system I've configured, we've got about 15 containers running already; it's running really well and there's lots of capacity to increase that further. And we have two of these systems that we can use for that as well.
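A minimal sketch of the single-executor container approach described above: each container gets its own network namespace by default, so two test runs can bind the same TCP port without colliding. The image name `node-ci-agent` is a hypothetical placeholder, not the project's actual CI image, and the `nc` flags assume a GNU/BusyBox-style netcat inside it.

```shell
# Run two CI agents as separate containers; each has its own TCP/IP stack.
docker run -d --name agent1 node-ci-agent   # hypothetical agent image
docker run -d --name agent2 node-ci-agent

# Both containers can listen on the same port at the same time -- which is
# exactly what fails when two Jenkins executors share one host network.
docker exec -d agent1 nc -l -p 12346
docker exec -d agent2 nc -l -p 12346   # no "address already in use" error
```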
So yeah, we'll certainly be ramping that up over time, and there are lots of advantages to doing it. To stop one Docker container overwhelming the whole machine, you can restrict the number of CPUs it gets, and even when you restrict a container on a machine like that to 32 cores, you've still got a lot of cores available: a big job running in one container isn't going to overwhelm a smaller job running in another. As well as the parallelism, the other advantage of containers is that each one can potentially run a different operating system, so you've got more chance of identifying bugs that occur on, say, Debian or Fedora, or other distributions we hadn't necessarily tested on before.

Cool. Well, one of the things I know is that our ARM fleet of Raspberry Pis is quite a handful to manage. We have lots and lots of Raspberry Pis, which is great, but is there any chance that the move to Docker is going to help us with that?

Yes, hopefully. As you say, the Raspberry Pis primarily run the 32-bit ARM architecture. The main operating system you get with them, what was Raspbian OS and is now Raspberry Pi OS, really only ships as a 32-bit version. You can run a 64-bit kernel, but all the user space is 32-bit. Some people do run 64-bit operating systems on the Pi, and there's a version of Ubuntu that will run on it as a 64-bit OS, but most people just run the straight 32-bit one, and most of the Raspberry Pi tutorials that end users follow are based on running 32-bit code. The previous ARM64 machines we had from Equinix Metal wouldn't run 32-bit ARM code, only the 64-bit stuff, so they were of no use for any attempt to replace the Pis; we were kind of stuck with that. What we've got now is the new Altra systems, which can run both 32- and 64-bit code.
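The CPU-capping and multi-distro points above can be sketched with ordinary Docker flags. The 32-core cap follows the figure in the discussion, but the container names, images, and the `sleep infinity` stand-in for a real build job are illustrative, not the project's actual configuration.

```shell
# Cap each build container at 32 of the host's 160 cores so one heavy
# job cannot starve the rest of the machine.
docker run -d --name build-debian --cpus=32 debian:bookworm sleep infinity

# Different containers on the same host can run different distributions,
# widening OS coverage on the same hardware.
docker run -d --name build-fedora --cpus=32 fedora:latest sleep infinity
docker run -d --name build-ubuntu --cpus=32 ubuntu:22.04 sleep infinity
```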
So what you can actually do is run a 32-bit Docker container on the 64-bit host, and that will allow us to trial running the builds and tests within those containers. At the moment we cross-compile the 32-bit ARM code on one machine, usually an Intel x86 machine, build it there, and then copy it across to the Raspberry Pis, which are somewhere completely different. That adds a lot of complexity to the whole system, plus the time to copy binaries from one system to another and so on. If we actually start using these 32-bit containers on the 64-bit hosts, we can run basically the same jobs we do on most of the other platforms, which is build and test within the same container. It also ends up a lot faster, because you've got access to 160 cores; a Raspberry Pi has no more than four. So it can build very quickly and run through the same tests, and it just simplifies the whole system. We've got that running as a prototype at the moment, integrated into some of the pipelines, and we need to see if we can use it to reduce the number of standalone Pis we need, and therefore the management overhead of that platform.

That'd be really great, because there are just so many moving pieces in our current implementation, and I know we're always struggling a little bit with that. So, I know you mentioned that you work on the infrastructure for Adoptium as well, which is the open source Java distribution from Eclipse. Is there a synergy between the work that you do on the two projects?

Yeah, absolutely. I mean, as I said, in terms of the work with the new Altras from Equinix, I did a similar migration earlier in the year. We actually had quite a lot of different types of systems: the ThunderX ones that Node.js was using, some Huawei systems, and so on, all from Equinix.
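The 32-bit-container-on-a-64-bit-host approach described above can be tried with Docker's `--platform` flag. This sketch assumes an AArch64 host whose CPU supports 32-bit execution (as the Altra does) and uses the public `arm32v7/debian` image as an example.

```shell
# Run a 32-bit (armv7) userland on the 64-bit ARM host. dpkg reports the
# container's architecture as armhf even though the kernel underneath it
# is a 64-bit one.
docker run --rm --platform linux/arm/v7 arm32v7/debian \
    dpkg --print-architecture
```

A build-and-test job can then run entirely inside such a container, replacing the cross-compile-on-x86-then-copy-to-a-Pi pipeline with a single native 32-bit build.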
They've now decommissioned quite a lot of those to make way for more of the Altras. I'd gone through that process a few months ago, so I knew the hardware, I knew what it was capable of, and I'd trialled some of these ARM32 containers and so on there. It means I can use that experience for the Node infrastructure as well, so we're carrying knowledge from one project to the other. The other thing is that both projects use Ansible to set up the machines. Node has now started using AWX as an automation front end for some of the tasks it does; Adoptium has been doing that for a while, so I've been helping out with that side of it. Both use Jenkins for continuous integration, and I'm an administrator of both now, so we can share techniques and best practices, and look at whether there are ways of increasing the commonality between the two projects to make things better. Both use CentOS for the builds, and RHEL on some platforms where the infrastructure provider can give us that. We may start moving to RHEL more now that Red Hat have released the Red Hat Enterprise Linux for open source infrastructure initiative, which allows open source projects to get access to RHEL licenses for free, which would be good. The infrastructure providers aren't completely identical across the two projects, though: each project is involved with a slightly different set of companies, some used more than others, so we can potentially provide introductions to one project's providers for the other and make better use of our contacts within each project. That should be quite good for sourcing new machines as necessary.

Yeah, that's great. It's great to leverage that, and Ansible and the tools around it have been a great asset for configuring the large number of machines both projects have.
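As a flavour of the shared Ansible approach mentioned above, a toy playbook for preparing a build host might look like the following. The inventory group and task list are invented for illustration; they are not taken from either project's real playbooks.

```yaml
# site.yml -- hypothetical example, not the projects' actual playbook
- hosts: build_machines        # invented inventory group name
  become: true
  tasks:
    - name: Install the Docker engine used for the build containers
      ansible.builtin.package:
        name: docker
        state: present

    - name: Ensure the Docker service is running and enabled at boot
      ansible.builtin.service:
        name: docker
        state: started
        enabled: true
```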
I guess we don't have too much more time, but is there anything else you'd like to mention before we sign off?

Yeah, in terms of synergies and so on, one of the things coming along now is an architecture you may have heard of called RISC-V, and it's getting a little bit of traction in the industry. I know one of the big companies was looking at potentially making an acquisition in that space a couple of months ago, so we'll see how that goes. And there is interest in RISC-V for both runtimes, OpenJDK and Node.js. There's some work going on in the background from some companies to try to do that, so we're looking at whether we can get it integrated into the main codebases at some point. And again, there's getting hold of the hardware and the infrastructure and everything; we've already got some contacts at Adoptium that have been able to get us hardware. So that'll be something we'll be looking at in the future: expanding the number of RISC-V machines we have and being able to support that platform going forward.

Yeah, that's another great advancement. I mean, Node has pretty good platform support, and it's nice to see other platforms being added as well.

Yeah, it sounds like the effort at the moment is to get it into V8 first, which is always a prerequisite for any new platform in Node. That seems to be going well, so hopefully we'll get it into Node soon.

Yeah, we've been through that cycle a few times, so we can understand. Indeed. So I think that's all the time we have for this time. Thanks for coming to talk to us, Stuart. And thanks to all the viewers; we hope to see you next time as well. Thank you.