Thanks everybody for coming. We're here to talk about deprecations in Node.js and how we deal with them going forward. We currently have a deprecation system and some rules around it. The typical Node.js deprecation works like this: you have a feature that is ideally documented, or maybe not. If it's not documented, you probably skip the first step, because the first step is to mark it as deprecated in the documentation. We typically treat that as a semver-major change; sometimes semver-minor, if we feel that makes more sense. At some point we may want to transition one step further and produce a runtime deprecation, which prints a warning on standard error when the feature is used. The final stage, which we may also transition to at some point, is actual full removal of the feature, or breaking it, or whatever applies in that case. So that's how it currently works. People have very different opinions about what kinds of things we should deprecate and which situations they apply to. Because we didn't quite feel the system was fully sufficient, we also introduced the --pending-deprecation flag to Node, which prints runtime warnings for things that are usually only documentation-deprecated; it lets you opt into seeing more deprecation warnings. So why did I actually want this session? Because, as Anna already said, we often struggle with actually reaching the people we want to reach. And the warnings are often not effective: even when they are printed to someone, people don't follow up on implementing the change we recommend. At the same time, we cannot, or may not, break the ecosystem all the time. So there are a lot of things we have to take into account while making deprecations. Right now the experience is subpar, and we should definitely try to improve it in a lot of areas.
This is hopefully what we can figure out in this session. Some people might profit from deprecations more than others; that's also something we have to figure out. We also want input from every one of you about what your experience with deprecations has been so far, bad or good, all of it, to see what we can do. Because right now, when you use Node, we have no data whatsoever. We don't know how often a feature is used, or how it is used. We have some tooling for detecting usage, like Gzemnid and CITGM. CITGM, for anyone who doesn't know it, is a tool that runs other Node modules and their test suites against a specific Node.js build, and then we check if the test suites still pass with that Node version. Gzemnid is a tool that pretty much runs a regular expression over whichever modules you choose. We have dumps of the whole ecosystem, all the Node modules out there, sorted roughly by popularity, so modules above a certain download count get priority, and we explicitly try not to break any of those. We can run a search against the whole ecosystem to see if a specific API is used in the wild. But of course, we don't know anything about private code. And when you get the output from Gzemnid, you first have to look through every entry to see if it actually applies. Sometimes it's transpiled code; sometimes it's a different API that just has a similar name to the Node core API. We have to filter out all the false positives first, and this is not easy; it's a really tough situation. Sometimes we have deprecations which make a lot of sense, for reasons like the API being broken. But other times we do it just because we have two versions of something with the same name, and there should only be one way.
But we still break the ecosystem by doing it. So what should we do? What should we focus on? This is meant as an input session, so maybe some of you already have further input? — Hey, I have a question about something; I'm not sure if it's unique to me. — Just go ahead and ask. — So, in Node 12 we recently deprecated _headers on outgoing requests, and I use that quite heavily. I noticed it because I ran my test suite and saw there was a deprecation warning. I haven't looked into this, and maybe I should know this already: is there a way I could actually get my test suite to just fail on that deprecation? — Yes, there's a flag for that; instead of just printing the warning, it will throw an error. — Great. I'll probably open an issue or a PR and work on getting the features I'm actually using turned into a proper public API later. — There's also --pending-deprecation, which isn't used by a lot of people, but if there's something where we're still trying to see whether it's possible to deprecate it, we put it behind that flag. So if you run your test suite with that on, you'd actually be notified ahead of time. It's only used for two or three deprecations, mainly the Buffer one. And you can turn it on with an environment variable as well, so you can put it in your bashrc, and then everything you run, including npm, will just spew warnings at you. — There are two other command-line options that are useful. One is --trace-deprecation, which shows you where the warning came from. And the other one, which I doubt many people are using at all, is --redirect-warnings. Whenever a warning is emitted, it will output it to a file instead. Right?
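For reference, the flag alluded to here that makes a test run fail on a deprecation warning is `--throw-deprecation`, and `--pending-deprecation` can also be switched on via the `NODE_PENDING_DEPRECATION=1` environment variable. A quick sketch:

```shell
# Turn deprecation warnings into thrown errors so a test run fails hard.
# (--pending-deprecation additionally surfaces documentation-only
#  "pending" deprecations, and can also be enabled with
#  NODE_PENDING_DEPRECATION=1 in the environment.)
node --throw-deprecation -e \
  "process.emitWarning('legacy API used', 'DeprecationWarning')" \
  2>/dev/null
echo "exit status: $?"   # non-zero: the warning was raised as an error
```

Running the same `-e` one-liner without `--throw-deprecation` just prints the warning on stderr and exits 0, which is exactly the difference the questioner was after.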
So if you're in a situation where you have a lot of output going to your console and the deprecation warnings get buried in that output, this gives you a file they're all collected in. — A tiny correction in case anybody's taking notes: James, I think you misspoke; it's --trace-deprecation, singular, not --trace-deprecations. But I don't think anybody was taking notes. — On making an informed decision: are you tracking, from an analytics perspective, how many people are accessing the docs for those features? Because there are a lot of features where, if you put a deprecation notice in the docs, I'm never going to find it, because I don't open the docs for that. — We could theoretically, partially, do that. We had Google Analytics tracking for the docs, and it got removed because basically nobody had access to the data. But it's a good idea: if somebody clicks on the link in the docs for a deprecated feature, like outgoing _headers, maybe we could track that. — Does it make sense to add an opt-in flag to gather usage data? It could be possible to do some sort of telemetry where, if you use this flag, Node sends a request to a server controlled by us. People might opt in to use it in their CI pipelines, like what people do with coverage reports: after the CI tests run, they use a module provided by a coverage service provider, send requests to its servers, and collect the coverage there. I'm wondering whether we could do something similar to that and collect the data.
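The two options mentioned just above can be demonstrated together; the log filename below is arbitrary, and the `-e` one-liner stands in for a real application:

```shell
# --trace-deprecation prints a stack trace with each deprecation warning,
# showing which line triggered it:
node --trace-deprecation -e \
  "process.emitWarning('old API', 'DeprecationWarning')"

# --redirect-warnings sends all process warnings to a file instead of
# stderr, so they are not buried in normal console output:
node --redirect-warnings=warnings.log -e \
  "process.emitWarning('old API', 'DeprecationWarning')"
cat warnings.log   # the warning text is collected here
```

Redirecting to a file is also the cheapest way to post-process warnings in CI, as discussed later in the session.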
But not in production, because people wouldn't like that. — I like that idea in general, because it would allow us to collaborate with tools like nyc, and it could be an opt-in flag there. — Yeah, if people are comfortable with sending coverage to coverage services, they may also be comfortable with sending usage data to us. But first we'd have to implement something like that in core, and that's the question: how do we actually do that? What could we even track in API usage right now? — In CI/CD environments we wouldn't necessarily need to add anything to be able to do this. We already output the deprecations. So in those environments it's relatively simple to look at the output and extract those things, or use --redirect-warnings to get them into a file and then process that. — Yes, that would be possible for the actual deprecations, but honestly speaking, right now we're doing deprecations in the dark. We say, well, this API is weird, it's not really doing what we want, or we have a duplicate, and then someone comes up with a PR. And then we officially have the three steps: documentation deprecation, runtime deprecation, and then removal. And the removal step is vague: when does it happen? Does anyone really do it? Do we later on say this was a bad idea? — Similar to the way debug logging works, we could add these instrumentation points, but selectively insert them in the parts of the code that we want to watch. — Yeah, at least capture some kind of data. — Yes, exactly. That was one of the points I wanted to get to: to actually have a way of tracking that. So that's why we first have to implement something in core.
Maybe we could implement some kind of protocol, so that instead of having to parse output, these instrumentation points in core can report somehow, and users can choose how they want those reports to leave their environment. For example, if they run their CI tests in a company environment behind firewalls, they could collect these statistics internally and only forward them if they want to. — Why don't we just do a command-line argument that writes whatever happens to a file? Then these strings aren't automatically sent anywhere: they end up in a local file and the user can already do with it whatever they want. There's no automatic telemetry being sent up to a server; they have to explicitly opt in by sending it themselves. — But who is going to opt in? That's the thing about opt-in data: it's biased toward the people who opt in, and we know what kind of people those are. And the problem is that you have to add this to your pipeline and tooling, and then decide whether it's a requirement to run it, whatever it is. It's opt-in. — That's why I think actually involving nyc and similar tools, and maybe some testing frameworks, is a really good idea: a lot of people use them, and if they do the reporting of the telemetry data for us, we don't have to worry as much about that. You're just opting in for those environments that are running the tests anyway; they're running them for you, so the data is always coming out. — Are those maintainers in this room? I don't think they are. Ben is downstairs, I think.
Can we add it to, say, the inspector protocol, as part of the coverage, so that the coverage report generated by the inspector protocol would include some metadata about this kind of usage? — One quick intermediate thing: is anyone taking notes? This session is meant to gather input, so we should take notes. Has anyone started? Maybe we should have a link we can share so people can add notes there. Can you send the link to me? James, are you writing code or writing notes? Okay, in that case, Ben, thank you very much for taking them. — We'd also just want to look at some of the data, like whether the link to the documentation that shows up in the console warning actually gets people to the docs. — So, by way of introduction: I was a manager at Telus, on the enterprise side, so consider this the enterprise project feedback. Around phone-home metrics, I'd like to get your thoughts on why that's beneficial and how it could actually work, because the things I've heard from enterprises are very different from the things I've heard here so far. It would be nice to have phone-home for Node, to understand the usage of Node processes and how people are using Node, in order to improve it. The main theme I'm coming from is: we see a lot of data as a community, a lot of data in the open-source space that helps inform ideas and decisions, but by my estimate, that's likely less than 10 or 20 percent of the actual Node code being written in the world.
And most of that is in private enterprise businesses: teams of hundreds of developers building thousands of applications. So how do we surface that information from private businesses and companies that are very security-focused, very privacy-focused, who don't want to leak their business model or their customer information or anything about what they're doing, but could still feed usage information back to the project to guide its development? — Right before you came in, we were already talking about implementing something in tools like nyc: when you run the coverage report, it could automatically also report every deprecation warning that fired. As for the deprecation warning itself, we would often want the trace for which code line triggered it, but we could strip out every piece of information that is actually about the application you're running. Do we even need the trace in this case? Probably, because we want to deduplicate warnings. From a purely practical perspective, the simplest thing might be to just increase a counter for every deprecation warning we've got, a separate counter per deprecation, though that alone doesn't really tell you much about what people are doing. — Coming from the perspective of having been a leader on the enterprise side: any level of data that the tech leads, the developer managers, the architecture team can get from these types of tools without it having to be scrubbed will be useful to them. And then maybe a second tier of that data, a level that can be scrubbed and given back to the community, with the option of doing that, because that's even more helpful for the project.
But even giving enterprises the data, and the opportunity to understand what's happening in their own ecosystem of thousands of applications, is valuable. We did a lot of that at Telus: we scraped a lot of data off GitHub, we scraped a bit off npm, we built a lot of internal charts and automation, all sorts of information, to understand what was going on in our ecosystem, because I had hundreds and hundreds and thousands of applications and there was no way for me to keep track of it all. I needed the data to help. — Derek, a question from me to you then. What do you think about our deprecations in general? Are they useful? Do they make sense? Did you struggle with them? What deprecations did you run into, and what experience did you have with them? — Well, one of the biggest struggles in a team with hundreds and thousands of applications is, first of all, knowing which apps are still on which version of Node, which of them are using internal APIs, and then actually trying to get them off those versions. That in itself is a big struggle. Only after jumping over that hurdle do you get to the point of: are we using the right APIs? Are we using the most performant ones, the most recent versions of them? That's a second-tier problem. The first-tier problem is getting the structure, from a business, funding, and capability perspective, to just get everything over to current versions. That was a struggle. Once people were able to do that, the next tier becomes the deprecation challenge of: is it worth our time? And that's after a lot of upfront work just getting from old Node versions to, say, 10 or whatever.
Then that delta of effort can actually translate into hundreds of thousands of dollars for a business, and you have to make a conscious choice: you know what, just keep using the old, less performant API, because the delta of value is too small for the effort involved. — Can you name a specific API that was really bad, or were some also positive? And another question: when do you consider an API broken? — The most relevant examples I can remember weren't really deprecations per se, but just being able to scan our systems and find bottlenecks, synchronous calls versus asynchronous calls, causing performance issues in general. Surfacing that took a dedicated effort on its own. I can't recall anything specific to deprecation other than Buffer; that's the one that comes to mind. But that wasn't bad. It was mostly actually in modules, in our dependencies, so we just updated those. And then that became a different tier of problem: bumping the version through the dependency tree and staying up to date. So it was a consequence, not a direct cost. — Right, and we'll come back to that one in this round, because updating dependencies is a nightmare.
So I'm just curious: when we're gathering that data, does it make sense to only focus on things already decided to be deprecated, or to actually get an overview of the usage of all the different APIs? Because if you're already starting to gather data, and you realize that something you might be eyeing for deprecation actually has way bigger usage than you expected, it might be worth putting the effort into solving it a different way than deprecation, rather than first marking it deprecated and then realizing: oops, that's actually a massively used feature. — Right. And on Ruben's earlier question about when an API is broken: I think an API is broken the moment we runtime-deprecate it. At the point where we actually change the runtime to produce a deprecation warning, it's already broken; you can't use it in CLI applications, because it's unexpected output from your program. That's not what you want. So looking at the numbers at that point is definitely far too late. And gathering data about which features are used and at what frequency could also help us the other way around, for example to find things where we could invest in performance improvements because they're used a lot. — On the context of collecting usage information: there's also the question of what's actually running in production versus what's merely present in the code base. I scanned our code bases and tried to get some information out of them, and it led to false positives: an API appears X number of times, but it's actually never called in production.
Yeah, but the tracking we would want to implement is actually at the call sites, counting the number of times something is called. — And using nyc; is that what you want to do with nyc? — Right, but test coverage is not about production numbers; you could hit something in tests that's very different from your real application. That's a very hard question: how do we solve this? That's a question to everyone in the room. — If you had to wait for that data, wouldn't that mean you would delay the deprecation even further, because you'd add one more cycle to your release cycle just to collect the data? And even then you'd only catch the people who upgrade early; the people who hold out and are still not able to upgrade, for example, wouldn't be represented well. — Sure, but if you want to start collecting data, you have to start at some point, otherwise you will never get there; that objection stays perpetually true until you implement it, and then five years down the line you'd always have that data source. — What I'm saying is that if you want to collect data, you have to collect it for all APIs from the start, and not only the ones you already plan to deprecate. — Yes. And I believe we have a lot of APIs we could track. For example, we have a lot of APIs that are officially private but effectively public: every underscored property in Node core, and that's a lot. Do you consider those to be APIs? People monkey-patch them like crazy. Officially it's not supported; effectively, we'd break them. Like JavaScript built-ins: if people monkey-patch everything, that's sort of to be expected. Is that a problem?
It's not really the monkey-patching that's the real problem, it's just the plain usage of it, like streams' _readableState and things like that. The fact that you can monkey-patch something is fine; when we eliminate the ability to monkey-patch something, we usually consider that semver-major. — I'm sure this was already discussed, but have you considered just looking at the public open-source code and doing static analysis of that, to see what it contains? — We do some of that already; Gzemnid covers a lot of it, and the limit is how far static analysis can take us. — Okay, but then my question is: is it worth looking into doing the telemetry stuff at all, or is the static analysis good enough? — We need more data. — One point I want to add to this discussion: I was in a position where the Node upgrade was handled centrally by different teams. Say, when they move from 6 to 8, they do a lot of analysis work, and they keep missing things scattered across the codebase. Say, for example, we have around a hundred microservices, all in Node. None of the individual teams can upgrade themselves; the main team needs to do the upgrade work. Compare that with Java: you use a deprecated API, you get a deprecation warning, but the moment you update Java to the next version, since Java is compile-time checked, it crashes immediately. Couldn't we get all that data, like the removed APIs in a particular version, when Node starts or when Node is being installed? At installation time we could say: in this particular version, all of this is going to break. — Yes, that's what I was saying earlier.
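A toy version of the kind of purely lexical scan that Gzemnid-style tooling runs over ecosystem source dumps. The pattern list here is illustrative only, and, as noted earlier in the session, every hit still needs manual review because transpiled code and same-named userland properties produce false positives:

```javascript
'use strict';

// Illustrative patterns only; a real scan would cover the full list of
// deprecated core APIs.
const DEPRECATED_PATTERNS = [
  { name: 'new Buffer()',             re: /\bnew\s+Buffer\s*\(/g },
  { name: 'OutgoingMessage._headers', re: /\._headers\b/g },
];

// Count textual matches of deprecated-looking API usage in a source blob.
// Purely lexical: it cannot tell whether the match is really the Node
// core API, or whether the code path ever runs in production.
function scanSource(source) {
  const hits = [];
  for (const { name, re } of DEPRECATED_PATTERNS) {
    const count = (source.match(re) || []).length;
    if (count > 0) hits.push({ name, count });
  }
  return hits;
}
```

Running this over an ecosystem dump gives the raw "used in the wild" counts discussed above; the manual filtering step is where the real cost lies.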
How do you scan code and actually know what's in the path of execution versus merely present in the code? — But if that data were available, maybe the main team could reach the developers of each microservice, and then the developers take care of scanning their own code base and confirming they've moved off. I don't know, but that could work. — I think part of the problem is that we worry too much about the usage data, when we actually need a process for following through, because we're worried about breaking the ecosystem, but that creates a self-perpetuating problem: nobody removes the code, the deprecated thing never goes away, and so it just feeds itself. Part of the fix is exposing the data so that the end consumer understands which things are going away and has better visibility into it. It also gives us some insight, obviously, but I don't think we should be making our decisions purely based on that data. We should use the data as a tool, and we need to have a process and stick to it, even if that means breaking the ecosystem. Otherwise it's just going to get worse. — So are you suggesting that basically all deprecations should follow the cycle from documentation all the way through to removal? — I'm saying all deprecations should, yes. — Okay, I definitely don't agree with that. I don't see what's wrong with keeping things deprecated indefinitely. Why not? — Because when you do that, it creates unpredictability.
It's hard for the consumer to understand which of these deprecations are serious and which ones are just labels. If you have a clear process, and everybody knows that in two versions that thing is going to be killed, that gives you the incentive to keep your code updated, and to understand: okay, I have to do this now, because otherwise I won't be able to keep up. Keeping things deprecated forever also creates debt, because teams just don't act on it, and then six release lines later it has piled up. — But the deprecated label, even if you never remove the feature, still sends a clear signal: we will not fix bugs here, we will not add your feature requests, we will not do anything, and if you use it, you're totally on your own. — People are starting to raise their hands, and there's a queue of things now, so please just raise your hand and I'll call on you. Go ahead, you were next. — Right. What you said is very valid, but I think it's missing part of the story, and that is that when it comes to the deprecation process, there are two sides with responsibilities. For example, if we really want a strong deprecation process, which means once something is deprecated it is removed in the next major, 100%, then we cannot deprecate anything from the public API unless there is a replacement that you can actually use in every single release line we support in LTS. Like in Node 8 right now: there would have to be an alternative available ahead of the deprecation, otherwise we would not be able to deprecate it.
If we got to the point where every single thing that gets deprecated has a replacement everywhere, I think we could actually remove things. But I know that for me at least, the background for one deprecation was that it was not possible to use the replacement in all the release lines immediately, so I couldn't fix my code and it was just broken overnight. So for a very clear deprecation process, we have to start on the Node side, and then we can put pressure on the ecosystem to actually follow up; that's part of the process, I guess. — Okay, and I agree in this case. And then there's also the point: are deprecations useful at all? Should we sometimes just say, okay, let's break the API? Is that what we want to do instead for some APIs? Because if people don't follow up on deprecations, if it's just noise on the terminal that annoys people, is it really worth it? This is just a question, I'm not saying it's right or wrong, but it's something I want feedback on as well, because having a strict policy makes things difficult depending on the API. Sometimes it's a rarely used API, and deprecating it probably doesn't hurt the ecosystem so badly. But then we have other APIs that are used so often that removing them would always break things. Why would it always break? Because a lot of modules are not maintained, yet they are still used in production, which is a big problem, and probably something we should discuss at some point: how can we overcome that? It's not only about updating the Node version itself; it's also about how we make sure Node.js modules stay up to date and don't use these old APIs. Am I right? Alright, Jeremiah. — My notion is that if you backport the replacement APIs far enough back, then you can deprecate and remove the old API and people can switch to the new one. I would just like to
remind everyone that sys still exists. Have we forgotten about that? — Can you repeat it? What still exists? — sys. — Oh yeah, it does exist. Is it even still used? — I have no clue. — Yes, there are modules that still use it. — Alright. Thomas, two things. First, the deprecation warning is useful: I found the _headers issue because of the deprecation warning, and now it's on my radar, and I actually opened an issue in our repo so we make sure we figure out what to do about it. That's the only reason it's on the radar. We technically also have tests, so if it was removed we would probably have discovered that those tests failed, but it's nicer this way. That was the first thing. The second thing is all the modules that already exist out there. One thing that would be really cool is something similar to Greenkeeper, where it would open an issue saying: you're using this API in your code, and it is going to be deprecated soon, or we're considering deprecating it, depending on the level you configure for that. That would be really, really awesome if we considered building it as part of the project. — I will work on that. — That's awesome, I like it; let's put that in the notes. For finding the deprecated usage, we could probably just build on something like Babel, use the existing ecosystem tooling. — On your point: you found the _headers usage because of the deprecation warning, and now you're updating it. The problem is that when I encountered my first deprecation warning as a user, I thought: I should fix that, because it's going to be removed. But by never actually removing stuff, you train users the opposite way: what if I just mute those? It's not going to do anything; the stuff is going to stay around. So why would you have the motivation to fix anything? In other projects, if I see something is deprecated, I know I need to get my stuff together and actually fix it, because it will disappear at one point, and then I
can't blame anyone because I've been warned but if Node keeps on setting precedents of we mark something as deprecated but we keep it because we can't break the ecosystem like two big world changes one is I'm going to start timing not that anybody is going over but we are running out of time and two I'm going to leave ahead to people who have not yet had a chance to talk I'm reminded from the package maintenance thing I would like only to say that our task is to maintain those modules that are wide high usage but not maintained for example so in our agenda there is also a tool for transpiling for example we build a demo that converts a new buffer to buffer from and open a PR for example so we are very about it Good Eman? Regarding the green keepers I know Nikita has done a lot of manual on Bridgework and for the buffer I constructed the presentation and if it goes like that I wouldn't leave that to a tool which is frustrating you have to interact a lot with the maintainers themselves it would be good for an often solution and regarding transpilation I mean for those cases I would prefer to just not duplicate or only documentate and keep the alias or whatever because if you can transpile you can also write an alias or a wrapper or something and not have anybody break code at all that's ideal for me Ruben 90 seconds go ahead direct follow up on that one so I personally think documentation deprecations are normally completely useless because we already like last week on a conference the main problem with Node.js is docs people don't look at them and if we document that it's no one will change anything they would even implement sometimes the documents part so if we did not document an API before and now have it in the docs where we say hey this API is deprecated then some people might actually use it instead of not having used it before because they looked at the docs so it's like the opposite of what we actually want to achieve and thus I really I think we should not 
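As a concrete illustration of the migration that the demo above automates, here is the before/after in plain code (a minimal sketch; DEP0005 is the deprecation code Node assigns to the Buffer() constructor):

```javascript
// Deprecated (DEP0005): the Buffer() constructor is ambiguous, because
// new Buffer(arg) behaves differently for strings, arrays, and numbers.
const risky = new Buffer('hello', 'utf8');

// Replacements: explicit, unambiguous factory methods.
const fromString = Buffer.from('hello', 'utf8'); // copies the string bytes
const zeroed = Buffer.alloc(8);                  // zero-filled allocation
const fast = Buffer.allocUnsafe(8);              // uninitialized, faster
```

The tricky case for an automated tool is new Buffer(size), which must become Buffer.alloc(size) rather than Buffer.from, since the constructor form could hand back uninitialized memory.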
Most of the time they are not useful, and they can even do the opposite of what we intend. (You have not talked yet, go ahead.) We actually have the numbers for docs; you can check that. One related idea: when an API has been deprecated, collapse that section of the docs page so people have to deliberately expand it, just to make them notice, before they decide that this is what they want to use.

Okay, we're entering the lightning round; we have five minutes, so one minute apiece: Thomas, Anatoly, James, and anybody else, raise your hand because I didn't see it. Skip Anatoly, skip James.

On the doc deprecations: I think we could easily start adding doc notes for deprecations in every case; we have some way of doing it. Going further, what if, after deprecating something, you actually remove it from the docs? You can still have the code in Node, but remove it from the docs. This idea was good, yeah.

Do we have any kind of machine-readable JSON that Anatoly can consume about deprecation information? Because right now, if you want to see what is being deprecated, you probably need to read deprecations.md, which is not really something a tool can consume. There are the JSON docs; it might be in there somehow, but last time I checked those are not really something you can use; there are a lot of weird tags in there. Right. Jan and Jeremiah. Actually, directly on the JSON question: are we working with DefinitelyTyped, which maintains the Node type definitions, to add deprecations there? Then if somebody is using deprecated code, their editor would show those warnings. I don't believe we are. The Node types do live in DefinitelyTyped, so anybody can just take a look at that, but that's a good idea; worth putting in the notes.

Is there a problem with just always warning, by the way? Not unconditionally; obviously you have the flags and so on, but when we deprecate something, we add the runtime warning by default. That's basically Jan's suggestion, just not behind the pending flag.

I guess this comes down to types of deprecations. We have different types, and that's the problem. Sometimes we have human-facing ones, consumed for example in the REPL: you're using the REPL, you are the actual end user, and something you use there is deprecated, so you get the notification right away; that's exactly the user who should receive it. But then there is the Buffer one, and who actually receives that? It could reach anyone, including through a dependency deep down the tree where you have no influence on the maintainer, you can't change it, and for whatever reason, say company policy, you're bound to using that module. Then we have APIs used by modules versus APIs used directly by applications, with different frequencies of usage. So for all these cases I think we should stop thinking about deprecations in generic terms and instead decide, per API, which ones should go to runtime deprecation and which should be handled some other way. While deprecating something we have to determine what type it is and by whom it is used, and once we have identified that, we can pick the right way to deal with it. That would be my recommendation.

James is very eager to say something, Joey, you have something to say, and I have an important lunch announcement, so we're going to go: James, Joey, lunch announcement. I'll make it really quick. On the whole Buffer thing: one maintainer alone has something like over 800 modules, most of which use buffers in some way. It took us months to go through and update some of these modules, and we still haven't got through all of them, so we have to be very careful about that burden. The thing with Buffer is that it was a footgun, but it's totally possible to use it in a safe way, and in most cases it was used safely.
But Joey, you had something? The important lunch announcement is as follows: lunch will be in the restaurant area downstairs, so not where we had the breakfast, but where, if you are staying in the hotel, you go in and sign your room number. That's where lunch will be. So when you come out of here, make a left to go down the stairs. Can we just follow you? Sure, yes.

Time's up, but this sounds really ripe for some hallway track, because I have a million things I want to say. Thank you, I hope you all found it valuable. I would love to continue the general discussion, and I would also like to get more feedback from the companies here. Netflix, for example, didn't really say anything, and IBM too; a lot of companies could just tell us how deprecations play out for them in practice. I would be willing to drive that as a strategic initiative.

We actually do track some statistics that are anonymous; that would also be interesting. You already collect a lot of data in this case. We could ask users whether they want to opt in, and since we have a UI, we could show them which APIs they are using. What do you think about that idea? I think it would be a good way for us to give back to the community; I will have to think about how to sell it internally, but it makes total sense, because for Kibana it's still unknown.

Hang on, that's a different thing. What you're asking about is: when users install the APM agents in their applications, are we looking into whether we can collect this through the APM agents? We don't currently do that. But if, for example, Node core added a flag to create a file with all the data, then you could just send it to Node once a day or so, anonymously, when users opt into that with an option in the configuration. That's overhead for a runtime program. It is, and that's why it would have to be opt-in; there are performance implications. What about declaring a time window in which it is tracked? You could say, I want this tracking for one day, and then it is deactivated automatically. If you're building that into, say, my APM agents anyway, you might as well go direct: if this is something the user needs to set up for the agent, they might as well just set it up for their Node process. Yes, and that is mainly the problem. It would be something on top: the feature itself would be implemented in core, and the question is how people actually opt into it. What if the agent asks them to do this as part of coverage? Yeah, I think nyc is a better approach for that.

So what do you think about the time-based idea? Because in that case, even someone in a position like yours, who is not able to sell always-on tracking, could have it run for a single day and be done. Imagine I run a team at a big company and they have to deploy this: I get nothing from it other than being nice, and there is a risk that I will be paged in the middle of the night because of a performance issue. It sounds like a hard sell. The coverage angle sounds very normal, though; I think nyc is the better route overall. I'm wondering if we could do it silently, sampled randomly. And think about this from the Codecov side: we use Codecov, and currently we log in and see a nice graph, with colors, of where we have coverage; it's super nice. That kind of coverage is also about which APIs we use, I guess, so it would actually be
nice, just for me as a user, to have the same information. So it actually makes sense for a company like Codecov to build that in, because as a user I would actually want it. The question is how to get the actual implementation; that's the problem, because someone has to build the Node API for it, which is basically just logging usage. I imagine you could even do it with something like node debug and C++-style statements.

But think of how npm is constantly tweeting download numbers: these are the top 10 most downloaded packages, this many people are using this Node version and that Node version, data that is not directly accessible to outsiders but that they have access to. They aggregate a lot of data that is normally not accessible and then tweet about it. Codecov could potentially do the same: they are collecting all this data anyway, so they could aggregate it and send it to the Node.js Foundation, the OpenJS Foundation, anonymously. It doesn't have to say which project it comes from, just that 10% of the users running on their system are using this API. Depending on their privacy policy, I'm not even sure they would need anything extra, because they already have that access; they could probably just run a query on their system and see all this information. I'm not sure exactly who Codecov are; I assume they are not Node.js-only people, they can probably do Ruby and so on. But if you could get Codecov, or similar companies, and Codecov is well used within the Node ecosystem, that would be an interesting discussion. And then there are companies like NodeSource and so on.

I wonder how much of this is actually needed for Node versus for the consumer, given that Codecov already presents that information to its users. I think it's more useful for users than for Node, because I still don't see us acting on this data; we already have plenty of data about Buffer usage and the number of people switching over. That's correct, but there are a couple of places where it would be great. For example, we have so many underscore-prefixed APIs; now that we have private fields, we could start moving things over to actually reduce the burden on us as maintainers of thinking about these things, and in that case, if you see something is not being used in the wild, you are able to just change it. That's what it would look like.

Somebody said this is just going to continue to be a problem going forward and will only grow. I don't think so, because in the future we're going to make sure to use symbols or private fields, since the underscore convention was not good enough. Sorry, but that's not the perspective I mean; I'm saying there are always going to be deprecations, always going to be things that change, things that we decide were a bad idea at some point. There are two types of deprecated APIs: the genuinely confusing ones, and all the underscore stuff that was never supposed to be used but is used anyway; if we stop adding those, there won't be more of them, so we're not doing anything there.

If you look at what frontend frameworks are doing, they have this problem times 100, because they break stuff a lot more often, and some of the worst offenders, like Angular, have pretty decent tooling that will actually migrate your project from API A to API B. If we created, for example, a Babel transform, that would be awesome, but if someone is using TypeScript, or anything else that won't let Babel parse the syntax, then it becomes a problem. I guess Babel can parse TypeScript, but there is always the next thing that is not directly parsable by the tool. The other thing frontend frameworks have is compatibility packages, like React DOM and its compat packages.
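The Babel-transform idea above can be sketched at its simplest as a source-to-source rewrite. This naive string-based version is only an illustration: a real codemod (a Babel plugin or jscodeshift script) would operate on the AST, check argument types, and distinguish new Buffer(size), which should become Buffer.alloc(size), from the string and array cases:

```javascript
// Naive sketch of a `new Buffer(...)` -> `Buffer.from(...)` codemod.
// Real tools parse the code instead of using regexes, so they can inspect
// arguments and skip unrelated identifiers that merely look similar.
function migrateBufferCalls(source) {
  return source.replace(/\bnew\s+Buffer\s*\(/g, 'Buffer.from(');
}

console.log(migrateBufferCalls("const b = new Buffer('hi', 'utf8');"));
// -> const b = Buffer.from('hi', 'utf8');
```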
And then the problem is dependencies. If you ever want to deprecate, say, process.nextTick, you can say this API is not developed anymore, but then you have a million packages that use nextTick, and you want to keep it available to those packages while keeping it away from your own code. (We're still recording, by the way; should I switch that off? We can just leave it.) I actually think it's a great idea to have packages that bring this functionality to the people who need it. The problem is when it's something you don't control. Bluebird uses process.nextTick, and there are a million different packages depending on different versions of it, like Bluebird 3.5, and even though we've been trying to get people to use it less, they've been using it more. Last year we added a big warning that says: hey, don't use Bluebird if you can use native promises. In response to that, the download numbers went up from about 7 million to 12 million; it didn't work at all. And a bunch of people are using, say, Bluebird 3.4, which is two years old, and that uses nextTick too. So we need some way to make such an API available to dependencies without making it available to users directly. (So it was not recorded? It's just a recording; I'm not sure how it started, maybe it was just a client. I think someone else is actually...)