Hey folks, the AMA for core subsystems is starting now. Hello, and thank you for suggesting that we do this. I think this was a wonderful idea and will be really productive. So this is going to be the format: we hit up a bunch of folks, and a bunch of folks volunteered in the issue, to introduce a subsystem that they find themselves familiar with, and then we'll open up to questions. Folks here, I also want to say: if you are familiar with a subsystem or you've worked on it, feel free to answer the question, even if that subsystem is listed here attached to someone else; no one can know everything. But first, Anatoly, would you like to introduce a couple?

Hey everyone, I'm Anatoly. I'm a member of the TSC, and I contribute primarily to a few subsystems. I'm mostly able to answer questions related to the HTTP/2 compatibility layer, which is basically the layer that allows you to run HTTP/2 servers the same way you would run HTTP/1 servers. I'm also familiar with timers; I don't think I really need to explain timers, hopefully. What I'm really familiar with there is the task queues: so if you're interested in how the microtask queue works, or in any of those finer details of how the event loop functions, I'm happy to help.

Thanks, Anatoly. Next up we'll get Girish to introduce a couple; I'll come to you.

Hi everyone, I'm Girish from IBM. The areas I cover: post-mortem diagnostics, which is one of the more difficult ones; it takes a snapshot of the user's application and provides you with whatever content is accessible or available. Basically, we improve the diagnostics for the people who work in this area. Embedding is basically embedding Node.js in another application.
Running through the list of specific areas: child process is basically spawning new node processes, with the command we pass as a parameter, and effectively communicating with the child for data transfer, passing things through to the child process. Shared library is mostly related to embedding: if you want to embed Node.js in other applications, you can load it as a shared library; the key difference would be running Node as part of another process, as opposed to running it as a standalone application. I have some understanding of how that works in this architecture, so I'm happy to talk about that if we have any questions.

Hi everyone, I'm Matteo, you probably know me, I don't know. As you might know, I am, along with some other folks, maintaining Node streams, which is probably the most legacy piece of very nice code that is in Node.js, and I'm on a long quest of trying to make it a little bit more maintainable, a little bit more usable, and not really such a mess. We'll see where we get. Everybody relies on streams behaving in a certain way, so for anything that we change there, we break ten things. That's my life: ten bugs for every one fixed. Because of the reliance on streams, I also ended up maintaining HTTP, more or less, and I have been involved in HTTP/2 more or less from the beginning. So yeah, that's me. I'm also very much interested in async iterators: I've added async iterator support to streams and I'm interested in all that stuff, so if you would like to talk about those, shout a question. I have basically been all over the place, and I mostly work on things that I actually use.
I'm not really famous for anything; I've worked on a bunch of Node, but I don't really want to say much more. I'm Ruben. I mainly maintain util; I pretty much rewrote it from scratch, I think, from version 8 on. A lot of that was performance, especially util.inspect, which was super slow: when you did a console.log it would slow down your application, because console.log uses it under the hood. Happily, that's not the case anymore. And I'm trying to improve the output when you put something weird in there, so it can pretty much reconstruct a lot of information no matter how you manipulate your objects. And buffer: I don't know the native side pretty much at all; what I mainly did was rewrite all the big write functions at some point. The first implementation was super basic; you don't want to look at it. Then assert: I also pretty much rewrote that, and it has a relatively decent comparison function in there, which became util.isDeepStrictEqual. That's the functionality to compare two different objects, and it works pretty well. And there's a nicer experience with the errors assert throws now, better errors and stuff like that. Assert was pretty horrible for a long time; it was like, "don't use this stuff." But since it's in Node core, I started to work on it, especially because we obviously use it in core for every single test. I think now it's pretty usable at least. It could still be improved further, but that's a different story. And the REPL: I would say that's probably the worst code in the whole of core, and it's super difficult to work with at all. You probably use it while working on core, because you want to have a REPL. Yeah, we should replace that thing.
Path: the path module is very basic code, again, for path manipulation, joining different paths and so on. You don't want to look at the code, but it's fine; it works fine, and it's fast. In this case the code is actually written well; it's just more complicated, because it has to be very low-level to be so fast. And benchmarks: anything that has to do with performance in core, I do a lot of performance optimizations.

So, Tobias. Hi, I'm Tobias. I mostly work on crypto, probably one of the least popular modules in core, but certainly a necessary one, mostly because we bundle OpenSSL; Electron bundles, I think, LibreSSL by now, not entirely sure. So we do have quite a bit in the crypto module, and it is used widely, but directly only by a few modules, which are then depended on by far more modules. We do try to support all the new algorithms that are popular on the web, everything that Web Crypto does, but the problem is that OpenSSL, which we depend on, does not really support all the new standards. So we have to try to keep up with them, and they have to try to keep up with the standards. I added a lot of features to the crypto module in the last few years, and I added a lot of bugs, and I fixed a lot of bugs, and probably spent more time debugging OpenSSL than Node itself. So hopefully I can answer any questions here about the crypto module. Thanks.

Super. Now we have Sam. Hey there, I'm Sam. I've worked a bunch on the cluster module and the TLS module lately. We're supposed to define what these modules do for people: if you don't know what cluster is, don't use it. If you are using it, I'd actually be interested in hearing about your experiences; come find me sometime and talk to me. And lately I've been working on TLS a bunch: together with Shigeki and a bunch of other people, I added TLS 1.3 support. I have anything but an encyclopedic memory, so you can ask me anything; I don't promise to answer everything. So we're done with the intros; it's now question time.
We have two mics; we'll bring them to you, or you can help each other out passing them. So, there's a question already.

Yeah, this would be a question for Tobias. Both Chrome and Electron obviously use BoringSSL instead of OpenSSL, and they made that switch, I think, two or three years ago. Is there any consideration of switching to BoringSSL instead? That would make it less painful with browsers. Where is the problem?

Well, the problem with that is that BoringSSL, as Google themselves write on the front page of the BoringSSL repository or its documentation or whatever it is, is not supposed to be used by companies or people other than Google themselves, because they don't guarantee any stability; at least, they don't publicly make any guarantees. And I think there were problems with some newer APIs that aren't fully implemented, such as some key generation features; I'm not sure whether that has changed. I think Shelley looked into that. We try to keep it as compatible as possible, so we added a couple of pieces of C++ magic to make it mostly work. So we are trying to make it possible to swap it out, but we are not trying to replace it right now. Okay, thanks.

I'd add two things to that. One: they don't have LTS support there, so it's kind of a non-starter. And they don't have FIPS, which, for people who don't care, they don't care, but if you do care, it's really important. So it's not an option. What was that last one? BoringSSL doesn't have FIPS, unless I heard incorrectly. Yeah, there's no FIPS support. Okay, but doesn't the new OpenSSL lack FIPS support right now too? It does at this point, and we won't have it for a while; it's going to be at least a few months' gap. OpenSSL FIPS is still in flux; I'm going to start looking at it soon. But yeah, for OpenSSL there's going to be a gap, which means that Node will have a gap with no FIPS.
It's unfortunate, but it's okay.

Hi, I have a question for Matteo. So, Matteo, you said you were interested in async iterators. Yes. I just wanted to let you know that my colleague, who also works for Bocoup, Valerie Young, right here, in purple and blue: Val actually wrote a bunch, not all, but a lot of the tests for async iterators. A small number, okay, whatever. She's a contributor to Test262, so she knows a lot about the tech, so she's a good resource on the JavaScript side. But I have questions as well: can you elaborate on what that means? What are you trying to do? Async iterators are amazing.

Yeah, a couple of things. First of all, last year we worked to add async iterator support to streams, to read and write, something like that, and we have moved that out of experimental. So right now Node streams are async iterable: you can consume a stream fully using async iterators, with full back-pressure support. We have recently added a Readable.from method, so that you can pass in an async iterator and get a stream out of it. And we are going to look into supporting async iterables in the pipeline, in the stream.pipeline function, so that you can process a stream of data just using async iterators. So basically, you can take a file, combine it with an async iterator, and then pipe that into another file, writing all of this in more or less idiomatic JavaScript without having to deal with the legacy structure of the Node stream API, which we can't really change. That's more or less my current working plan. On top of this, we are currently talking, myself, Benjamin, and a bunch of other folks, about having some async iterator support on top of EventEmitter, so you can basically get a flow of events out of an event emitter and consume it using an async iterator.
So those are the main things that we are currently focusing on. And come find me at the break if you want to know more, ask more questions and so on. There's a lot of things to do. Async iterators are a very, very nice feature of the language. Also, with Node 8 going end-of-life at the end of the year, in more or less six months' time all LTS versions of Node will have async iterator support, so essentially that will be my best recommendation for consuming streams. Are there any more questions?

Okay, so I have an agenda, sort of, with my question, and I'm really curious. This is a very valuable session, by the way; this is my first one here, and I'm literally mind-blown right now. I'm going to take a photo with all the people here at some point today. So thank you for your service, everybody here. But my question is specifically: do you have a sense of how many folks that are running Node.js in the wild are transpiling their server code? Do you have just a rough, wild guesstimate? Transpiling and polyfilling? Yeah, transpiling and polyfilling. Okay. I'll give my answer, but, you know, if somebody else has theirs... Give it, you have it.

So, in my experience, this goes into two camps. There is the React server-side rendering and Vue server-side rendering camp, and they need to transpile; this is dictated by React and JSX. That is the main reason why they are transpiling. All the folks that are running Node for microservices and APIs, most of the time, a lot of the time, are not transpiling, or not transpiling that heavily. But there is a huge number of people using TypeScript, which counts as transpiling, or not, in some cases. Okay. So it depends on the definition of transpiling; the code modifications there are not so heavy.
So I see a lot of code; I work for a professional services company, so I see a lot of code in the world. And, you know, my recommendation: if you're transpiling, do not ship any stuff that's not part of the language. That is, no stage 2, no stage 1, whatever, with your polyfills and stuff; you're going to get badly, badly bitten. As for the numbers, I would say more than 50% of the Node deployments on the planet are transpiled; I don't know if you have... That was my understanding as well; I just wanted to sanity-check it.

So, Laurie Voss, who is the chief data officer at npm, is giving a talk at JSConf EU this weekend that will probably actually answer that exact question. I don't know if you're sticking around for JSConf. Yeah. But I searched on Twitter, and the last thing I could find from him was a picture of a slide about just TypeScript, not even transpiling in general, just TypeScript: 46% of npm users are using TypeScript. So that would definitely imply the reasonable conclusion that most people are transpiling.

Yeah. Thank you so much for your contribution on that. Just to clarify my point, so that you know what I was actually asking: it was specifically server code. I'm not so much talking about dependency code, because we all know that's just the open-source model, right? Like, 90% you didn't write and 10% is yours, right? More or less, for front-end code. And so, you know, I was just trying to understand how much of server application code is transpiled, because if you're saying to me today it's at least 50%, likely more, that means that you, as implementers of these native modules, right, your code is actually not getting tested, right? Because when you're using a transpiler or polyfill, you're not using the native implementation, which is really problematic, right? So this is a pitch for my five o'clock session: be there or be square. But I just wanted to put that out there.
And I'm curious: is that a concern for you as library authors? Is it something you need to think about? Yeah, sorry if I misunderstood that. No, no, no, thank you; I may have phrased it badly. I mean library authors; sorry, authors of these native modules: are you concerned that your runtimes, while they're getting run, say, magic number, 100 times a day, there are 100 Node applications a day, 50 of those are not actually, you know, using the native implementation of a given feature, they're using some workaround, right? So is that a concern for you or not? That's my question.

So that would include, for example, someone using Bluebird rather than native promises, or something like that? Correct. Yeah, you know, I've never actually been very concerned about that. Have you? I have, unfortunately, unfortunately, yes, I have. I have seen probably the worst bit of Node code possible in some cases. And every time I think I've seen the worst possible, I reach a new bottom. And I want to add one thing, and I'm so glad other library authors are here. Just to be clear: a polyfill has limitations, and it also has bugs, known bugs and unknown bugs, right? Your implementation of code also has bugs, known bugs and unknown bugs. I'm just saying that that matrix of known and unknown bugs is a crazy matrix, and it kind of quickly marks this system a little bit too.

There's a question here. What was the next question I was going to ask? I'm concerned about this more from a debugging point of view, because source maps aren't built into V8 or Node.js itself. And as a result, you know, with more than 50% of the code being deployed on the web being transpiled, stack traces aren't correct.
We don't necessarily even know; from a debugging point of view, we have lost a lot of context. This is a topic I'm really interested in. It would be great to, as a platform, understand that 50% of the code out in the wild is probably being transpiled, and to have more tooling oriented towards that. It's something I'm super interested in talking about.

Just before we take another question, I was going to add that folks on the chat said they can answer questions on VM and modules. I mention that in case people are taking photos, to DM people later or whatever you intend to do with them. But we were not kitted out to actually put the Zoom audio through the speakers, so unfortunately we can't let them take their questions live. Sorry, Joe.

Again, thanks for setting this up; it's really great to have people here to ask all these hard questions. I was going to ask about releases, and especially the old ones. I've seen the download numbers for 0.10 and 0.12; it's really worrying. I was wondering if there are measures being taken to encourage moving on from those really old versions. Even version 4 is being used a lot as well. Sorry if it's really hard to answer. Not sure who's going to answer this question; raise your hand. Sorry, what were you asking? Yeah, no worries. I was asking about active measures to encourage people to move off versions like 0.10 and 0.12. The download numbers are in the thousands for both, and maybe the same thing for version 4.

I want to add, there are a couple of things there. We don't have the data to say that actual people are using them. 0.10 and even 0.8 are still in some .travis.yml file out there, so it might just be bots, and we don't have any data to distinguish between those. I don't know if you know about this, but Express still tests down to 0.10 or 0.8 or something.
Just so that there is a proper package; and all of Express's test system still runs on 0.10. It's pretty amazing, by the way. And so, yeah, that's, you know. The other bit is we can't issue new releases for those lines anymore, so that's the second point. We have to be concise because we're over time. Yeah, the short story is there's just not much we can do other than outreach; we would love to see many people move off those things. Even if we did add some kind of switch to, you know, shut them down, people just won't move off those versions and would still pin them, right? And it might actually encourage them not to upgrade. So we just, you know, strongly encourage people to upgrade. Thank you.

This wraps up our AMA. Thank you to all the remote folks who tuned in. I think folks on the list are pretty much open to answering questions if you hit them up later. We're moving on to the OpenJS Foundation CPC session. And before we kick off, I just wanted to say thanks, I can't see where they are now, to the person who's been helping us get the audio out to the remote folks. That's super. And thank you to all the people who have been answering questions and volunteering to do so. Really appreciate that. Thank you.