Hi everyone, welcome to the live taping of JS Party. Thank you, thank you. How many people here have listened to JS Party before? At least one episode. Wow, a lot of people. Thank you so much for listening. That's awesome. So you know that we usually have some intro music, and we can't break with tradition there. Start that up. You don't normally see this, but we use Zoom and we dance in the videos to each other, but we don't release that. I don't know how to dance. Awesome, that gets us in the mood. Thank you so much for coming to JS Party live at Node.js Interactive. Now, another thing that we end up doing on a lot of episodes is some kind of rap or poem or haiku to kick off and get started, and I have one of those prepared, written by yours truly. But first, I got my slides mixed up. I want to talk about JS Party a little bit for those of you who may not have heard of it. JS Party is a weekly podcast about JavaScript and the web, and we talk about a lot of different things. We have a great, diverse cast list, where not all of us are on every episode, so that keeps things fresh. We have Suz Hinton, Feross Aboukhadijeh, Kevin Ball, Emma Wedekind, Divya Sasidharan, Mikeal Rogers, and Christopher Hiller, who is at this conference. And if you're in this room, that means that you're not at his talk right now, so thank you for supporting me. And then Jerod Santo and myself, Nick Nisi. Thank you so much. So back to that limerick that I have: at Node.js Interactive, the talks are all quite attractive. From transpilation dread to awesome worker threads, this conf is surely impactive. Thank you. So yeah, we're going to get started and we're just going to talk to some of the speakers that you've heard throughout today and yesterday. We're going to talk about their talks, dig a little deeper, ask some other questions, and really get more out of them and more out of their content. So let's just go ahead and kick it off.
The first speaker... I got my slides mixed up again, sorry. I forgot that I have this in here. If you haven't listened to us before, these are some recent episodes that we have right now. If you're here, you're also not listening to this live, but we are currently live interviewing Ahmad Nassri, the CTO of npm. That episode will be released next week. Other episodes that we've had include a discussion on ES modules, modernizing Etsy's codebase with React, mentorship with Khalil Lashel, and You're Probably Using Streams with Matteo Collina, who also gave a talk at this conference. And then we also have some fun episodes, like Should We Rebrand JavaScript, Yep or Nope; that's a debate where you're assigned your thoughts on a topic, whether we should rebrand JavaScript in this case, and then you have to defend those thoughts. All right, so let's get started and interview our first guest, and that is Vladimir de Turckheim. You want to come on up? Hi. Hi. Thanks for joining me. So you gave a talk yesterday, and it was on Node.js loader hooks. Yeah, that's right. Tell me about that. So Node.js loader hooks are an experimental API in Node.js. It's linked with ES6 modules, so it's the future, and everyone loves that, I guess. Basically, it's an API that enables you to hook any module that is loaded, and then you can do whatever you want, from logging to actually creating virtual modules and loading them, because it doesn't hook existing modules that are already loaded, it hooks modules that are asked to be loaded. So let's say you are loading a module that is not present in your node_modules; you could dynamically create it virtually from a hook. It's just mad science. That is really cool, and this is actually something that I hadn't heard about until seeing your talk yesterday. So why were loader hooks created? What's the problem that they're trying to solve? That's a great question. There are a few reasons why you would want to know which modules are loaded.
I was not part of the working group behind this API, so I can't give a definitive answer on why they created it. What I can say is that at Sqreen, I do instrumentation of Node.js processes for security, meaning that I need to know which modules are loaded, because I need to inject security algorithms into these modules as they are loaded. And that's a similar issue that APMs like New Relic, Elastic, or Dynatrace have. All of us vendors need to know which modules are loaded, because we need to know what to instrument. Historically, we used a very ugly patch, monkey-patching some private method in Node, which technically is not private anymore because half of the ecosystem relies on it. But I could see TSC members looking at me with anger, so they created a proper API for us to do that without breaking everything. Very cool. So it's to understand what's in the cache. Even before what's in the cache: when modules are loaded, you have the chance, the opportunity, to intercept that and even rewrite the modules. In the talk yesterday, I had three examples. One of them was actually rewriting types, loading TypeScript modules. If you create a loader hook that transpiles TypeScript to JavaScript, you could virtually tell Node, hey, this is what to do with TypeScript. It would not run TypeScript natively, because nobody does that, but it would run TypeScript transparently, meaning that you would not have a single file of JavaScript in your code except the loader module, and Node would know what to do with TypeScript because you would have taught it how. After the talk, someone told me about having a YAML loader, because there are a lot of things you can do in YAML that you can't do in JSON, but that are still possible in JavaScript objects. So the idea would be like, hey, I want to import YAML modules transparently, without having to read the file and parse it myself.
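To make the virtual-module idea concrete, here is a rough sketch of a load hook. Note that the loader-hooks API is experimental and its hook names have changed across Node versions; treat this shape, and the `virtual:config` name, as illustrative only, not as the code from the talk.

```javascript
// A rough sketch of the "virtual module" idea: a load hook that fabricates
// source code on the fly for a module that does not exist on disk. The
// loader-hooks API is experimental and its hook names have changed across
// Node versions, so this shape is illustrative.

async function load(url, context, nextLoad) {
  if (url.endsWith('virtual:config')) {
    return {
      format: 'module',
      source: 'export default { debug: true };', // a module that never existed on disk
      shortCircuit: true,                        // don't fall through to the default loader
    };
  }
  return nextLoad(url, context); // everything else loads normally
}

// In a real loader file this function would be exported and you would run:
//   node --experimental-loader ./virtual-loader.mjs app.mjs
// Here we just call the hook directly to show what Node would receive:
load('file:///app/virtual:config', {}, null).then((out) => {
  console.log(out.source); // the fabricated source text
});
```

The same pattern is where a TypeScript or YAML transform would go: instead of returning canned source, the hook would read the requested file and return the transformed result.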
I want my developers to just import YAML modules, and that's pretty much what this API can do. Interesting. So do you see that as being something that developers use in their actual production apps? For that example, could that be... I know that it's experimental now, but is the end goal a really stable API that you can use to do things like that? So it will be used in production, at least for APMs, because eventually it will be the only way to intercept loaded modules. So there's definitely a business need for APMs. Regarding transformations, yes. I mean, for the TypeScript transformation I would recommend having a build step. But if you want to load other things like YAML, this is a great example; I don't see any reason why you would not use that in production once it's stable. The only potentially huge question for the future is: how do you compose multiple loader hooks? We know that the JavaScript ecosystem is really strong on having entropy and diverse things in the ecosystem, so I hope there will soon be a standard for people to play along and not step on each other's feet when loading modules. Very cool. So you can only use one loader at a time? Is that right? I think so, yeah. Okay. Another example that you gave in your talk yesterday was mocking or stubbing modules by changing them, and you were using a proxy. Do you want to describe that a little bit for our listeners? Yeah. It was a pretty complex use case. The idea is that, as I said, you can rewrite the modules dynamically as they are loaded. So in my example, which is a proof of concept (please don't use it, even if it's on GitHub, so I guess it's public domain), what I do is that when a module is loaded, I check everything that is exported, because it's just an array of strings with the names of the things that are exported. And I replace all of the exports with a proxy, which is a native object in JavaScript that enables you to trap everything that happens on an object.
So I replace each of these exports with a proxy, and I expose the proxy handler, the definition of how the proxy behaves, to the end user. Meaning that when you load the module that has been transformed, you also have access to a set of objects that enables you to change the behavior of all of the exports. Of course, to make it smarter, we would need to add recursion to reach deeper fields, but the first level is good enough. Basically, instead of changing your code to make it easier to test, you would just need to load your code, and then in your test file you would be able to mock by changing the proxies and the behavior of the code, but only for your test file, not for the whole world. Yeah, that's really cool. So you would not necessarily have to write code that injects the dependencies. Yeah. You could just have the loader inject the handler for the proxy, then change things on the fly and change them back afterwards. Exactly. Over the last few years, I have seen so many people reinventing the wheel for dependency injection in Node. I won't troll any annotation-heavy framework here. But that's the thing: stop reinventing the wheel and creating a thousand projects when we can have one single, at least cleaner, way of doing it that does not require your code to have non-standard module loading. Because that's the main issue I have with all of these alternative dependency injection things: they reinvent the way you load modules, and since I'm still a vendor, I still do Node.js instrumentation, and if you do weird things, that gives me more work to instrument it. And I'm lazy; the best developers are.
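A miniature version of the proxy trick he describes might look like this. This is a hypothetical sketch: the `wrapExports` helper is invented for illustration, and it only works for exports that are objects or functions, since a Proxy target must be an object.

```javascript
// Hypothetical sketch of the proxy-mocking idea: wrap each export in a Proxy
// and hand the handlers back, so a test can stub behavior without touching
// the module. A recursive version would be needed for deeper fields.

function wrapExports(moduleExports) {
  const handlers = {};
  const wrapped = {};
  for (const name of Object.keys(moduleExports)) {
    const handler = {};                 // empty handler: no traps, original behavior
    handlers[name] = handler;
    wrapped[name] = new Proxy(moduleExports[name], handler);
  }
  return { wrapped, handlers };
}

// Pretend this object is what a module exports:
const mod = { greet: (who) => `hello ${who}` };
const { wrapped, handlers } = wrapExports(mod);

console.log(wrapped.greet('world'));     // "hello world", unchanged behavior
handlers.greet.apply = () => 'stubbed!'; // a test file installs an apply trap...
console.log(wrapped.greet('world'));     // "stubbed!", mocked for this consumer only
delete handlers.greet.apply;             // ...and removes it to restore the original
```

Because Proxy traps are looked up at call time, mutating the exposed handler is enough to swap behavior in and out, which is exactly the "change them back afterwards" property described here.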
So one thing that we rely on right now: I write TypeScript full time, using ES module syntax in TypeScript, and then I rely on tools like Jest. I haven't looked at what Jest is actually doing, but it has the ability to mock your dependencies like this, which I assume relies on the fact that it's an underlying CommonJS module that's actually being run. So do you see loader hooks as the solution for those types of problems in the future, when theoretically we're all just writing straight ESM? Definitely. Also, in my understanding, loader hooks will also be available for CommonJS. But there will be no other solution to hook into things that are imported through ES6 modules, so people will have to go with that, and sometimes it's good to have a unique way of doing things. And this API has been designed cleanly: historically you could only mock modules synchronously, but this API is based on async functions, meaning that you can do async work when you mock the modules. It should be incredibly powerful, and I think my talk is just an opening into a few of the possibilities, but I'm really excited to see what people will build around it. Yeah, definitely. So that really opens up things like, what was one example, fetching something, right? Yeah, I think it's the equivalent of Yarn Plug'n'Play, or the Go module loading system, where you don't have the package.json. Disclaimer: I love package.json, I just love to do weird stuff in my free time. Basically, you would be loading modules from a URL, because a module is just plain text at the end of the day, or bytes, and if you have a stream of bytes that Node.js knows how to instantiate, whether it's JavaScript or WebAssembly, you just need to find a way to get it locally on your machine and give it to Node.js to build a module from. So yeah, one of my examples was instantiating a gist without downloading it before starting the process, letting Node download the gist for me and instantiate it.
This opens the door to a thousand security concerns; that's why it was just one example, and I think if you want to go that way, you need a couple of people full time to figure out the security impact of such things. We don't need to worry about that. I'm sure it'll be fine. Security is not a big deal, is it? Someone just said no. So that's really cool, and I see this API as one of those APIs... I'm thinking back to Myles' keynote yesterday, where he was talking about, I think he called it, the existential dread of transpilation, or something along those lines: we are using transpilation, and we're using CommonJS and all of this, and there are a lot of things that CommonJS can do, or can be abused to do, that ES modules really can't, because of the way they're statically analyzed. And this seems like one of those APIs that lets us not have to give up a lot of features when we move over, so we can do things like getting in the middle of how modules are actually loaded and changing that in really interesting ways. Another one is, I think it was called module attributes, where you might be able to load JSON in the future with ES modules, for example. Yeah, actually, someone came to me after the talk and asked, hey, would it be possible to ES6-import a CommonJS module with a loader hook? And that's actually doable, because there's a method in Node named createRequire that enables you to create a custom require function that you can use in ES6 modules to load CommonJS modules, so you could definitely build a loader that would do that. Actually, in my TypeScript example, to import the TypeScript transpiler I had to do that, because it's not exposed as an ES6 module, so I had to load it as a CommonJS module. So yeah, if you want to create backward compatibility with CommonJS through a loader, you can.
The entropy of weird things that will be possible with this API is limitless, and that's one of the things I love about the Node.js and JavaScript ecosystem at large: it's just an infinite state machine. It's a collective AI; you give it a few rules, and the pool of developers around the world will hack around them until everything has been hacked around. Absolutely. So, as a closing question, what's one thing that you want developers to take away about loader hooks? That's a good question; I'm really unprepared for that one. I guess the thing is that Node.js can be turned into a universal runtime. I could make a pun and say it's a Graal of runtimes, referencing GraalVM, which is an amazing product in development by Oracle that aims at running all languages on the JVM. We have a chance of doing something similar in Node.js, because through loader hooks you can load anything, and when I say anything, it's anything that Node.js can understand, eventually including loading Rust code and having it compiled to WebAssembly on the fly, or even C or C++ code. As long as it can run either in WebAssembly or in JavaScript, you can run it in V8, and as long as you can do that, you can write a loader hook to transparently get it into V8. So yeah, bring the languages of the world to Node.js, so we can finally achieve world domination, as was the plan all along. That's great. Well, thank you so much, Vlad, for talking to us. Thank you. All right, how's everyone feeling? I'm very excited for our next guest to come on, and that is Marian Villa. Would you please come up to the stage? Let's have a round of applause for her. I think it's on. Welcome, Marian. Thank you. So tell us a little bit about your talk; the title was Transforming a Country Through Code. So yeah, today we are sharing about our work in PionerasDev.
PionerasDev is a nonprofit organization, an NGO from Colombia, and in my talk I was sharing that when you think about Colombia, first, you don't know how to pronounce it if you are from outside of Colombia. It's Colombia, not Columbia like the university; it's very different. I admit that I was taking notes and I totally spelled it wrong, and then you corrected me, so I appreciate that. Yeah, it's different. And the other thing is, I know we have a really strong history of war and of internal guerrilla conflict, and I know you saw Narcos, of course, and on Netflix it's a pretty prime-time show, but that's not the reality in our country. So we created a small group of fighters in 2015, and we started from there, and right now we are around 1,200 young women who are learning how to code. That's just crazy. That's great. Can you tell us, what is a Pionera, am I saying that right? Can you tell us what that looks like, what you do, and what is the typical story of a young woman who goes through that program? So, it began as a study group of enthusiastic girls; I was one of them. But then we realized that 85% of our group, which started small, came from lower-income backgrounds, so they can't really afford a ticket to get to our innovation hub in Medellín. The first success story was Milady. Milady is a typical girl from the comunas; the comunas are the poorest areas of our city. She went to university to get a job, because without that she couldn't get qualified work. She came to Pioneras, and we had only created a meetup, but this meetup really changed her life. It was like 10 meetups that year, and she got her first job in the tech field. So it was awesome. What kind of technology does the group focus on?
So our core was Node, because we have really cool mentors who are here at this conference, and they are really great people from the tech culture in Colombia, because they created the first JavaScript conference in our country, which was JSConf; I was an organizer in 2017 and 2018, and I'm very close to this community. Actually, I know Node is backend, but most of them have really strong roots in the JavaScript language, and most of them are front-end developers; but we have really cool, really smart girls doing Node as well. So when you get into the program, you kind of start off with no skills in programming at all? Zero. Actually, most of them don't have a computer, so we have a special room in Ruta N with really cool laptops, and they get in touch with the technology through this space, which they don't have in their homes. So it's from zero, actually. And so they go from that, and about how long is the typical program? It's one year, but I should specify: I mean, they learn how to search, how to search on Stack Overflow, and how to self-learn in other spaces like libraries or small study groups, where they may share one laptop among five young women. But with mentorship, because we also have a mentorship program, over a year they can get a job. Very cool. So this was started in 2015, you said, in your city, and that's Medellín, and it has expanded beyond that, right? Yes, right now we are in three main cities in Colombia: Cali, Barranquilla, and Medellín. Medellín right now has a really big tech hub, but in other cities that's not the story, because they don't have too many companies there, or it's not trendy to be there, or they don't have the spaces to run the meetups. So from Medellín we are really helping to reach other areas of Colombia that don't have too many opportunities.
What kind of support do you get from the local businesses, or from the city, or from the country? What kind of support is there for you? Actually, there are three things we need. The first is someone who wants to share their knowledge: if you feel like you can share with us Pioneras, you can write to us or follow us on our social media, and you'll be able to share with the Pioneras in Medellín, Colombia, or in the other three cities that we are in right now. The other thing is venues, because we need places to run these meetups, so co-working spaces and innovation hubs that will be open to us in cities like Cali; another one we would like to open is Cartagena, for example, and in Cartagena it was really difficult to find a place, but perhaps through universities we can reach them for 2020. And the third thing would be food, because we like to share some snacks with them, but it can be really low-key; we need cookies and coffee, that's it, sorry. Yeah, absolutely. Is there any other type of funding that happens for that? Yeah, we created a shop; like, we put up these t-shirts that cost about $15. In pesos, people see the number and it looks huge, and they say, oh my god, it's so much, but at the TRM, the conversion, it will be $15; it could be less. So you can buy a t-shirt, and perhaps we could send it outside of Colombia right now, but you are supporting our cause. And has the group expanded outside of Colombia?
Yes. Actually, I know in Latin America there are a few groups about girls in coding; places like Bolivia, Peru, Ecuador have a space for something like Pioneras. But we are creating a change one community at a time, opening meetups in other cities, because we have thirty-two departments, like boroughs or something like that, geographical and political divisions, but we have really remote jungle and really poor areas that aren't developed like the big cities where we are right now. So we would like to expand to the rural areas and perhaps create a bigger impact in our country first. Very interesting. Yeah, it's so cool, and it's such a great thing; you truly are transforming a country, just as your title states, which is really cool. How can we help with that? Just write us an email and give us ideas for how you could support us, and I'm sure we can figure it out. Yeah, and is that through helping with teaching and things like that? We need people to share knowledge, and right now, I know, we always have something to share, but perhaps you are always an apprentice in life, and this really needs knowledge and time. Time is the most valuable currency that you have, so if you have the time to share one hour with these young women in Colombia, it would be great. Perhaps we need to improve our English skills, because they are really smart, but they need to pass the barrier of the language and practice those skills, so perhaps it would be tech English skills, I don't know. Very good. And was there anything that you didn't mention in your talk that you want to get out to everyone? What do you mean?
Like, I don't know, any kind of message or anything; I mean, your talk was really great, I was just asking. I think the message that I'd like to share with all of you is: please help us. Help us to transform our country, help us with your time and with your knowledge, because I know the brightest minds are here to share about Node and about the JavaScript world, so we need you to change these women's, young women's, lives. Thank you. Love it, love what you're doing; thank you so much for doing that, and thank you for talking with us. All right, next up we have Chris Wilcox and Jason Etcovitch, so please welcome them to the stage. All right, welcome. So Chris, you are an engineer at Google, and your talk yesterday was Oh No, the Robots Are Taking Over, I think. Yeah, so I gave a talk about how we use automation for the Google Cloud client libraries to try to make our job a little bit easier, with a little less repetitive gardening. Yeah, absolutely. And in your talk you mentioned using a robot, and so, Jason, you're the maintainer of a robot; welcome to the show. Thank you, thank you for having me. So one thing that I thought was pretty cool in your talk was that you gave a list of the five levels of automation, and I just wanted to go over those real quick and then talk about them. They were: automating portions of your workflow is step one; automating the discovery of work, but under supervision, is step two; letting the robot do the work for you, but with supervision, is step three; doing the work unsupervised, pulling out the fallback support, is step four; and the robot being your boss is step five. And you mentioned that we probably will never get to step five, and we wouldn't want to, which is probably a good thing. Yeah, it's probably not that surprising to some of us who work in technology, but I watch some science fiction, and generally that goes poorly; anyone that's seen HAL knows that when we take technology to that point, it causes us more
pain than good, for sure. What could possibly go wrong? So tell us about a problem that you're using robots to solve. So we use robots for a lot of different things on Google Cloud. The example I used in the talk was about being able to run CI for things that are initiated by non-contributors. Many people in the community use dependency monitors, things like Renovate, and those aren't first-class members of a repository; they don't have write access. But we don't really want to have developers having to screen repositories, and for most developers this probably isn't a huge problem, but at Google we have hundreds of repositories, and having to go over each and every one just to initiate CI to build and test a dependency update is very painful. So we can save literally hundreds of hours of developer time by using bots to do that work. We also use bots for release management, publishing docs, and monitoring, and we even take it, not quite to step five, but we have some robots that do bot monitoring. For instance, our publishing flow to NPM is multi-step: the first step is that we run CI and we tag things on the GitHub side, but there's a step after that that publishes to NPM, and if for some reason in between those two it doesn't get all the way to the end, the bot comes through, notices, and opens a bug for us. That sort of ties back into the talk: it's good to scope your bots. So while it's monitoring, it is a very simple task; the worst thing it can do is open bugs against a repo, and we have some safeguards so it doesn't try to open a lot of bugs. But yeah, what could go wrong? So you have bots watching the bots. Yeah, and the last bot in the chain is never really monitored, which is sort of problematic, but, knock on wood, nothing terrible has happened yet. So these bots that you're building to watch things and tag issues and such, you're using Probot for that. So, Jason, why don't you tell us a little bit about Probot? Sure, I can do that. Before I do, though, I have this really funny
story that I want to share about bots watching bots. There was this tweet where an open source project had a pull request where CI was run by a bot, it was then approved by a different bot, it was then deployed by a different bot, and then another bot came along and said, hey, congratulations everybody, great job. So, you know, who watches the bots? Except when they're kind of doing their own thing, it's kind of dangerous. Yeah, it was this sort of weird thing where bots were interacting with each other. Yeah, it was awesome and terrifying. So, Probot: the tagline on the website is that it's a framework for building GitHub Apps. GitHub Apps are a way to integrate with GitHub, and Probot is very webhook-focused. So, you know, something happens on GitHub, your Probot app will be set up to receive a webhook, and then it has all kinds of helper APIs to say, okay, this happened on GitHub, now here's how we're going to handle it. A very common example would be: somebody pushes code, we want to run CI. Most CI providers will sort of have that built in, but if you wanted to build that through Probot, that's how you would frame it. So that sounds very similar to how Actions work; from my understanding, they're responding to actions on a repository that might be, essentially, hooks. So Probot does predate Actions, and when Actions was coming along, the other Probot maintainers and I sort of looked at it and said, wow, this is awesome, this is great, this covers so many pain points that Probot has. Like deploying: Probot is just a framework, it's a Node.js framework, under the hood it's running an Express server, so where do you deploy that? But with GitHub Actions, all of a sudden GitHub runs your workflow automation tools, which is really exciting. Nice, that's really cool. So yeah, that was one takeaway that I took from your talk: that Probot, or the apps that you create, the bots that you create, are really just Node apps, and
then you can put them under version control to keep them there. It sounds like you could do pretty much the same thing with GitHub Actions, where they're just under version control in your repository itself. Yeah, I mean, there are definitely a few things... like, if I were to build a workflow automation tool, sometimes I'll use GitHub Actions, sometimes I'll use Probot. I'd say that for things like persistence or long-running tasks, if you care whether the server suddenly dies, Probot's probably a better option; if you think to yourself, hey, I'm going to run this app in, like, a lambda function, Actions might be a really, really great place to do that. Nice. So tell me, are there things that Actions solve that Probot doesn't, or vice versa? Yeah, so I have two things that I want to mention. I think the most exciting one to me: in GitHub Actions you can really, really easily clone down the repository that the action is acting against. So you'll push some code and you want to run some kind of test coverage tool or something; in Probot you'd have to download a whole Git object thing, which in Node isn't very fun to do, but in Actions you can add one line to a YAML file and suddenly you have all that code available to you, which is really exciting; that sort of enables a whole slew of new things. And then another one, and this is something that in the Probot community we saw as being a really important addition that we wanted to see in the platform itself, is some concept of secrets. So in a repository you want to configure some API tokens to deal with other things, like maybe you're pushing to SendGrid or some other service; there's not really a built-in way, but with Actions you can include these things called secrets in your action runs, and it sort of just works super well. Nice, that's really cool. So there's a lot that you can do with either Probot or Actions. Chris, what is the most complex thing that you have a bot doing?
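For listeners who haven't seen one, a minimal Probot app really is just a Node function that registers webhook handlers. The overall shape below follows Probot's documented API, but the helper names have shifted between major versions (older releases expose `context.github` rather than `context.octokit`), and the handler body here is purely illustrative.

```javascript
// A minimal sketch of a Probot app: a function that receives the app object
// and registers handlers for GitHub webhook events.

function myBot(app) {
  // Runs whenever GitHub delivers an "issues.opened" webhook for a
  // repository this GitHub App is installed on.
  app.on('issues.opened', async (context) => {
    // context.issue() fills in owner/repo/issue_number from the payload.
    const params = context.issue({ body: 'Thanks for opening this issue!' });
    return context.octokit.issues.createComment(params);
  });
}

// A real app file would end with `module.exports = myBot`, and Probot
// would load and run it.
```

Because the app is only a function of `app`, it can be unit tested by passing in a fake, which is part of why keeping bots this small and scoped works well.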
So typically you don't want bots to be complex; complex bots fail in complex ways, and that tends to get sort of hairy. I would say the neatest thing we do, though, not that it's that complex: we find that in so many repositories, issues go stale. Either an issue gets assigned to a developer and that developer gets overburdened, or goes on leave, or it's just not their area of expertise, they were misassigned, so it just falls to the bottom of their stack of stuff to do. If we detect that, we'll pick someone else on the team to randomly assign it to, an issue juggler, essentially, and that tends to stop things from just getting stale; it makes us look a little more active, and we can be a little more responsive to customers. Actually, the most complex thing we do is probably publishing, just because there are a lot of steps. Individually it's all very simple, but we have to publish docs as well as the samples for the repository, and in the package, the NPM package, we use TypeScript, so that needs to be transpiled. None of it's too complicated, but all the pieces do need to fall together. And for that example, what level of automation would you say that falls under? At this point I would say it's three or four. The levels are a bit fluid; if you noticed from the talk, they're based on something from automotive engineering, the driverless car levels, so really they're a way to frame risk and reward more than anything else. But it's at about a level three, maybe a four, at this point. The thing that made the change for us is that we now go as far as auto-detecting whether we auto-publish: as commits come into the main branch, we can detect that there are new changes, and we use a thing called Conventional Commits, so at the front of every commit is a label, be that chore, fix, or breaking, and from that we can also detect whether it's a patch or a major release. We can auto-generate changelogs, and from that,
really the only thing you do as a developer on the team at this point is merge the PR, and everything else is done for you. We still control whether or not we publish to npm, but the rest is fully automated. Nice, so that must save a lot of time. It's really nice; I don't want to go back. Yeah, so going back to that example that shuffles commits, or sorry, shuffles issues that are getting stale: I haven't looked at the APIs closely, but is there an action for that, or sorry, a webhook for that? Or is it proactively searching for that and running on a cron job or something? How is that being kicked off? So we have cron bots, yeah, that's how this is done, and that's something we extended ourselves using a scheduling service that Google Cloud provides us. So we kick off that action. But ProBot already uses a thing called Octokit that gives you access to a ton of different GitHub events, and there are far more than I would have originally thought. It's definitely something worth checking out. You can trigger on all sorts of things, and it's very fine-grained, down to pull request opened, synchronization, comments, labels, so you can get pretty exact about when you want to take some sort of action and run some script. Yeah, very cool. So what does the future look like for you? Would you still continue to use ProBot, would you use Actions, would you have a mix? So we started doing this before Actions was around, which is why we made the choices we did; we didn't have a chance to evaluate Actions. I think if we started today we would definitely consider Actions, but there are a few constraints. Actions don't deal very well with long-running tasks, so that can be problematic. It's also hard if we ever wanted to scale up. So we used a thing called Google Cloud Functions, which ultimately takes a small bit of Node.js, or a few other languages, in our case it's Node, and executes it for us on an event hook. It starts up a service
when we need it and shuts it down, so it costs us very little money. We could adapt that into Docker containers fairly straightforwardly, and then maybe eventually we need a Kubernetes cluster, who knows. We've also extended it with some security measures: we store none of the secrets in the functions themselves; they're all stored in a key management service, also a thing that Google Cloud provides, which allows us to be a little more secure and a little more confident. It's also a lot easier for us to rotate our secrets, so it helps from a convenience standpoint. So, Jason, what does the future of ProBot look like? Will it have some kind of integration with Actions, or some way of sharing capabilities between the two? How does that look? That's a great question. First of all, who knows; we can do our best guess. What I'd love to see is some of the features of Actions opened up to the ecosystem. What I was talking about is specific to Actions, but I'd love to see it come to the general ecosystem, so that ProBot can use it and enable it for integrators. That itself is a big problem. But otherwise I still see them as separate, two separate ways to build integrations. I personally have written a ton of GitHub Actions, I think they're wonderful, and I've written a ton of ProBot apps, so every time I go and build something new: which one am I going to choose today? There are some ways to use a ProBot app within Actions; there's a repository in the ProBot org on github.com called, I want to say, actions-adapter, and the premise is you wrap your ProBot app in this little Node adapter thing and run it in Actions, so you can kind of have the best of both worlds. Like I said, you can make some adjustments, throw it into a GitHub Action, and call it a day. It really gives you the flexibility to choose anything, whereas GitHub Actions are more streamlined for GitHub; they're running on GitHub's servers, and you're running yours on Google Cloud Functions
so you can have way more flexibility and make more fine-grained decisions with ProBot. I'd liken it to running your own server versus throwing something on Heroku; it's just about control. Very cool. Another cool thing that you showed off in your talk was a way to proxy the webhooks locally so that you can access them and test your ProBot locally. Do you want to talk about that a little bit? I can talk about it; I think Jason is kind of an expert on smee.io. I actually kind of want to hear you talk about it, because I'm curious; I never get to hear people describe it to me. Sure, so smee.io is a service, one that Jason is also pretty involved in, that proxies, in our case, JSON payloads from GitHub to a localhost. It's a rather simple service; Jason was telling me it's on the order of hundreds of lines of code. What it allows us to do is develop our bots locally, so we don't need to get Google Cloud involved at all, and we don't need to get Actions involved or anything. We can run the Express server locally, make a test repository on GitHub, and that will send the event to smee.io. They give you a slug, which ends up being a random character string, and that will forward to localhost:3000 and allow us to test locally. You can debug things that way, you can play around a bit; it doesn't have to be too serious. Then the other really nice thing smee.io does is it lets you see the requests that have been made, which I find very useful when it comes time to write integration tests and unit tests: I can look at a real JSON payload, capture it, and use it again later. That, I think, is something I don't see a lot of examples of, but it's probably, for me personally, the most useful thing about smee. Yeah, that's a nice way to get at that; it's very beneficial. We have an issue open in the server repo that's like, add screenshots of the JSON payload view, because it's super, super helpful. Very cool. Well, is there anything else that you want to tell us about ProBot that we
haven't heard about today? I'd actually like to expand on how smee works on the inside, because there's this really interesting API that I'd never heard about before, the EventSource API. Yeah, and it's kind of like, I'm probably going to get this wrong, but it's like a unidirectional WebSocket implementation, kind of. Smee works by having this one server that's constantly running, and then multiple clients connect to it as, let's say, EventSource clients, if that's the right term. So we have this smee server that then shares payloads, as they come in, with all the different clients listening. We built this specifically for ProBot apps, to receive webhook payloads locally, but I've played around with really weird implementations of it, using it to capture payloads from all over the place, to not just a ProBot app but some running servers. Yeah, it's a weird thing that we've seen people use for completely different intentions than we ever thought about, and it's kind of cool seeing it. Yeah, that's really cool. So, like you said, there's a single server, the one that you're running, and then the clients would be, like, a server that Chris is running? Yeah, exactly. There are two separate parts to it: there's smee.io, and then there's the smee client. There's a CLI, or you can use it programmatically; there's some built-in support in ProBot, but you can also just use the CLI directly. Cool, very cool. Well, thank you so much for coming on and talking about ProBot, for sharing your wisdom on robots, and for making me feel a little safer that we're not going to get to level 5 automation anytime soon, and that you're actively not doing that, so I appreciate that. All right, we have one more talk that we're going to discuss, and that is on Node.js worker threads, so I'd like to introduce Rich Trott and Anna Henningsen. If you'd please come up, let's give them a round of applause. I was
going to have some stadium walkout music, but yeah. So, welcome; why don't you introduce yourselves? Okay, so I'm Anna. I work for NearForm, which is an Irish Node.js consulting company, and I work on Node.js; my job is working on bleeding-edge features for Node.js and other Node.js things, so workers is one thing that I've pushed quite a bit. Yeah, and workers, so I'm learning about the Node team internals: what working group do workers fall under, or do they? So it doesn't have its own working group; it is what we call a strategic initiative, and Rich can probably talk a lot more about what that exactly is than I can, but basically there's somebody on the Technical Steering Committee of Node.js who is in charge of pushing that forward, who reports what progress has been made and so on. Cool, and Rich, you want to introduce yourself? Yeah, I'm Rich. I work at the UCSF, University of California San Francisco, library, where my Node.js work is tolerated, but it's not my primary responsibility. Yeah, so most of the work that happens inside Node.js, well, I don't know, it depends how you quantify it, but a lot of the work that happens isn't in a strategic initiative, isn't in a working group, and there's no roadmap, because the features that get implemented and the bugs that get fixed are whatever the people who are contributing and collaborating want to take their time to fix and implement, and they really, really wanted worker threads. Yeah, very cool. So let's take a step back for a moment: what even are worker threads? Well, they are threads built on the worker model that is used in browsers; for a long, long time browsers have had this worker class. Like a service worker? No, no, web workers. Oh, sorry, web workers, not service workers. I mean, service workers are also a thing; a while ago somebody came up to me and asked how workers and service workers relate to each other, and I'm like, sorry, it's
like completely different; it's like Java and JavaScript. I totally had my terminology mixed up; I meant web workers. Web workers are a way for websites to offload CPU-intensive work to a different thread, communicate with it, and send JavaScript data back and forth; worker threads essentially bring that to Node.js. Yeah, so you can spawn multiple threads; they're kind of like separate Node.js processes, except they are in the same process and they can share data very efficiently, especially if it's typed-array data that is structured very simply and easily serializable. Yeah. I don't know if you saw my talk yesterday, but I totally evaded the subject of explaining what they are by saying they're kind of like web workers, but they have some differences, and I pointed out one difference, I think; and then they're kind of like threads in other programming languages, but not really; and then I just quickly moved on rather than actually try to clarify what kind of gray area they fall into. Just go look at the documentation and start using them; don't worry about it, just use the thing. So, yeah, not to get into semantics too much, because I will get all of this wrong, but when I think about it, you have your main thread in a JavaScript app or a Node app, and then every time you do something asynchronous, is that considered a thread or a process? So that could offload to a thread in the pool that Node maintains under the hood, but no, it's not going to be a separate thread that you manage; definitely not in a way that should be visible through the API. You shouldn't think of it as a separate thread; you should think of it more in terms of the event loop. Yeah, exactly. Cool. So I know that web workers have some constraints, in that they, for example, can't access the DOM, or things like that. Are there similar constraints,
obviously not to the DOM, but are there constraints on worker threads? Oh yeah. Well, for the most part, most Node.js built-in modules are available: you can use require, and it will work the same way as it does on the main thread. There are some restrictions around managing per-process state; for example, you can't change the process title or change the current working directory, because we were thinking, okay, this is something that affects the entire process, right, so that should ideally only happen on the main thread. But generally, no, there are basically no restrictions on what workers can do, and that's one of the very important ways in which they are different from web workers. There are one or two things, some small number of things on process-dot or os-dot, that they can't access, but for the most part, if you can do it in the main thread, you can do it in a worker thread. Worker threads can spawn additional worker threads; worker threads all the way down. So what state are worker threads in right now? Are they something that I can use in production today? Yes. Next question. So in Node 10 they are still considered experimental; in Node 12 they are stable, and there haven't been any significant changes to the API over the last half a year, maybe a year or so, so they have effectively been stable for a while. The only adjustments that we made before making them officially stable were for some very weird edge cases around timing and the message-transfer thing, in order to make it conform to the web platform tests; you would never run into those as a regular Node developer. So yeah, they have been stable for a while, in a way. Nice, cool. So I can use them as long as I'm on Node 12; I can use them today. I guess you could also use them on Node 10, but, you know, a little warning sign there. And what can't you do with threads? Are
there, like with the other experimental features, limits? I'm specifically thinking, you said you can require in there; I assume ES modules would also work within threads. What can't you do with them? Well, one thing is that workers are not there to replace the existing multi-process model that most, or at least a lot of, Node.js applications use, simply because having different processes kind of makes things easier in some ways. You can attach debuggers to them individually; with Node workers that's kind of tricky. It works, but it's tricky, and Chrome DevTools doesn't have support for that yet. And if there's a hard crash for some reason, like a bug in Node or something, it won't tear the whole application down, just the single process that was spawned by the parent. So yeah, they aren't there to replace child processes. That said, every use case is different, I guess. I've been surprised a few times, mostly while making example applications to demonstrate worker threads; I've been surprised in both directions, like, oh, worker threads should have really performed a lot better here, and they didn't, or the other way around, where, wow, that really made that take no time at all. The API for worker threads is pretty small; it's not a sprawling surface area, it's not a complicated API, it's the type of thing you can learn pretty quickly, and I just find it a lot of fun to experiment with. So my recommendation is: go hog wild, benchmark everything, see what happens, and use them where they make sense and don't use them where they don't make sense. And one thing Anna warns against in her blog post, and it's absolutely true, is that you're not going to
get any benefit for I/O-heavy stuff with worker threads, because Node already does a lot with the asynchronous calls, like fs.read or fs.open or whatever. So trying to spawn worker threads to deal with massively concurrent I/O is probably not going to get you anything, not going to help at all. So that's something you can just not bother experimenting with, unless you like seeing negative results, which some of us do. So is there a specific use case that worker threads were created to be a solution for? Yeah, that is CPU-intensive work that ideally requires a lot of communication between the different threads, because that is usually going to be faster than communicating with child processes, depending on how your data is structured. It's also a lot more flexible: you can send circular data, or generally things that don't fit into JSON, over to threads. I think what Rich did in his talk is a very good example; for those who didn't see it, you want to explain? Yeah, so if you might recall the Six Degrees of Kevin Bacon game, it was kind of like that, but for music. So you have two musicians, and you spawn two worker threads, and you have one thread try to find everybody connected to one musician and the other worker thread do it for the other musician, sending results back to the main thread. The main thread just tells the worker threads to stop once they have a musician in common, which basically means you have a connection; but until that happens, both worker threads are just running, running, running, gathering lists of people. Did that cover the part you wanted to cover? Yeah, right, so CPU-intensive work that you want to offload from the main thread, because those queries get to be
really expensive, at least the way I did them. So for me, the exciting use cases are things like this: where I work, there are a lot of people who do, or are interested in doing, data science stuff, and they all want to use Python, which is a great language for that. JavaScript has been a terrible language for that, but between worker threads and also getting BigInt, we're not there yet, but it's getting pretty good for things like machine learning and natural language processing, all sorts of stuff. The other thing I think about for worker threads is all those JavaScript packages that do graphics manipulation, like, here's an npm package that will create thumbnails for you. I think of graphics processing and that sort of thing as CPU-intensive, so why not get a pool of four or eight worker threads, or however many make sense, launch them, have them do it all at once, and bask in the glory of finishing your job faster? Yeah, image processing is a great example, because it's also CPU-intensive work, and image data is usually represented in some way as a Uint8Array, you know, an array of bytes, so you can transfer or share it with zero cost with workers. Yeah, and that's something we haven't mentioned yet: unlike with the cluster module, where you have individual processes, or anything else where you have workers, you can share memory in certain situations. If you know what size the data is, and it's in a very predictable format that you can put it in, you can share the memory, or you can even
transfer the memory, so that, if you're the worker thread, I give you the ArrayBuffer and I can't use it anymore, but you can, which is really, really cool. Nice, yeah, I was just going to ask if it was SharedArrayBuffer, if that's what you're using as the medium to transport between them. Yeah, SharedArrayBuffers are shared by default, and ArrayBuffers can be transferred. Very cool. So how can people get started with worker threads, or where would you point them to get started with both using them and/or contributing to them? Well, let me tell you, Nick. I don't know if this will still be true for too much longer, but if you go to palacefamilysteakhouse.com, there will be a list of links from my talk, and the very first link is a blog post that Anna wrote on using worker threads to solve Sudoku puzzles. Nice. And then there are a bunch of other things in there: a couple of blog posts from me, the documentation, some sample code, and a few other things. As far as contributing to worker threads, my recommendation is, and this is kind of a joke but not really: know a lot about Windows, and debugging Windows, and C++, and then clone the Node repository and fix test-worker-prof, because that one has been pretty stubborn. Yeah, that's going to be quite a journey if you want to do it, but we're here to help. And by we, I mean Anna, because she knows the implementation, I don't. So usually when you want to contribute, you want to have some visible result of it, and I think, the way they are right now, workers as a feature are kind of complete. We can add stuff, and there are things that I want to work on, like startup performance, or there's this really cool
thing that the JavaScript engine provides, which is called snapshotting: you can basically take a Node instance, take a snapshot of it, and then later restore it, which is going to give you a very fast startup if you have boilerplate code that you run at the start of a thread or something like that. That would be really cool to have. It's going to be a ton of work; if somebody's interested, that's great, but you're going to have to read up on a lot of V8 APIs with very poor documentation. Yeah. You ready? Not at all, but that sounds amazing. Thank you for being very specific; we know exactly where we struggle. Yeah, it's been a terrible test for, like, forever. Well, cool. Anna, Rich, thank you so much for chatting with me today about worker threads, and thank you to all of the guests that we had on JSParty. Definitely check out the podcast at changelog.com/jsparty. I think that QR code should work; I tested it, though, and it didn't, just because the screen's not bright enough. But yeah, definitely go to changelog.com/jsparty, check it out, and we record every Thursday at noon Central, 1 Eastern, so check us out and join the party. Thank you!