So the agenda for this session: first I'd like to set up the context of why I think we should be starting with a different approach to diagnostics, then I would like to show a proposal for how we could do that, and then there are some open questions where I hope we can get alignment and feedback and see if we can put them together. During the context setup I will share some pain points I believe we are currently facing. Please don't take this as criticism of Node.js or of the diagnostics work; I really thank you for the hard work that we are all doing. It's more about the focus, where we are putting our effort.

Basically, I think one huge mistake today is that we don't have use cases. Everyone has in their mind what it means to diagnose a Node.js process: I can have a memory leak, my process can crash for different reasons. But we are prioritizing those use cases which our own company, or we ourselves, are facing. We don't really have a collection, with priorities, of what actual Node.js users are facing, and we don't really have alignment and priority on what we are planning to do to make things easier for those users. It also means that not only do we not really know those use cases, we also don't know how users are trying to handle them.

I think we can all agree that a memory leak, for example, is one of the use cases that probably affects many, many Node.js users. And today you can diagnose it in many, many different ways. You can use the Chrome heap snapshot. You can use the sampling heap profiler. You can use gcore to get a core dump and write scripts against it. There are npm libraries that try to detect memory leaks. There are many, many ways, and I'm pretty sure I have only listed some of them. Beyond learning what our users are doing today, it would also be nice to come up with an ideal user journey: what would be the really powerful single tool that we can use in both production and development environments, which works on both Windows and OS X, and probably has a UI or an API, or it can be in another form; something that would satisfy all the use cases.

Also, today we are really focusing on specific tools. We have more than 25 tools recognized by the diagnostics working group, and we keep planning to put more on top of this. For one use case we often have more than one tool, and this kind of led to this situation: there is no real recommended tool per use case. Some of you mentioned we have way too many of them, so we are kind of stretched thin when we are trying to make those tools and their testing better. And we are not providing any long-term support on them, so today any of the Node.js diagnostic tools can basically break in the next version. I mean, obviously we would rather fix them than break them, but technically there is nothing today saying that we shouldn't break this or that tool. And as I already mentioned, some of the tools only work in some of the environments, or are better or worse for different use cases. Take the Node.js platform tracing tooling, for example: most of them are powerful tools for slightly different things, and some of them only work on certain platforms.

So I believe we have some fundamental problems with diagnostics, and I hope that by being more focused, and by capturing the use cases and user journeys, we can be more deliberate about the tools we are building.
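As an illustration of the heap-snapshot route mentioned above, here is a minimal sketch using Node's built-in v8 module; the output path is just an example, and this is only one of the many possible journeys, not a recommendation:

```js
// Minimal sketch: writing a Chrome-DevTools-compatible heap snapshot
// from inside the process, using the built-in v8 module
// (v8.writeHeapSnapshot is available since Node 11.13).
const v8 = require('v8');

// The path is illustrative; any writable location works.
const file = `/tmp/heap-${process.pid}-${Date.now()}.heapsnapshot`;
v8.writeHeapSnapshot(file);
console.log(`heap snapshot written to ${file}`);
```

The resulting file can then be loaded into the Memory tab of Chrome DevTools, which is exactly the kind of multi-step journey an ideal single tool would collapse.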
So I would like to make a kind of proposal. One thing, and I think this is the zeroth step, is collecting and agreeing on what the set of use cases actually is: just having a list of what a user is trying to do when we are talking about diagnosing a Node.js process. And then identifying which are the really important ones, the ones we have the energy to go after first. I'm pretty sure there are many use cases; we were talking about embedded systems today, for example, and I'm 100% sure they have different use cases than workers and APIs. But in the first round we should probably go after the ones that affect the most users. I think it would also be beneficial to learn how these users are trying to go after those use cases today, and to create the ideal user journeys, to really be able to identify the gap between what we have today and what we want to achieve in the future. That gives a natural way to derive the actual action items and the tools.

And the final step, which I believe is something we are quite far from today, is to think about which use cases we can support across Node.js releases. On purpose, I'm not talking about tools here. I strongly believe that we should stop thinking in terms of "this specific tool has to be supported for all the Node.js releases" and instead think in terms of, for example, "we want to have a way to inspect native frames across all the Node.js releases." By that I mean that I believe we should shift the focus from tooling to the use cases and user journeys, and basically start to document the best practices, start to see what users are trying to do and what tools we have today. So here I would like to give some opportunity: is there someone who has a pro or contra opinion about starting to think in use cases and user journeys?

So, I started to work on a document where I started to collect the use cases, the ones in my head, so I'm sure the list is incomplete. You can find this document either in the proposal that we created in the diagnostics working group, or I will share this presentation later. I really hope that we can later move it to the Node.js diagnostics repo and use it as a base. It has two bigger sections: one collects the use cases, the other collects the current tools that are available. I'm pretty sure I missed a bunch of them, so it is really just what came to mind.

Let's look at the structure of one of the use cases, like the memory leak detection I mentioned. All of the use cases follow the same structure. There is a symptoms section: what you, as a user, may be observing when it's suspicious that you are facing this issue. Some of them have a side-effects section, because it's sometimes not that obvious why you are seeing those symptoms; that can give you a picture, as a user, of what you are facing. I believe this section can be really useful, for example when triaging a bug report, in helping users figure out what they are actually facing. Then there is the debugging section, and this is the section which is quite incomplete. It has two parts: the ideal user journey and the current user journey. The ideal user journey is greenfield; some of it is probably not even possible today. And then there is a little table about which of the tools recognized by the diagnostics working group can be used for this use case, and what the existing gaps are.
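To make that structure concrete, here is a hypothetical sketch of what one use-case entry could look like; all of the example content is invented:

```markdown
## Use case: memory leak detection

### Symptoms
- Heap/RSS grows steadily over time
- Process is eventually killed by the OOM killer

### Side effects
- Growing GC pauses, rising tail latencies

### Debugging

#### Ideal user journey
1. One supported command captures the needed data in production.
2. One supported viewer points at the leaking allocation site.

#### Current user journey
1. Pick one of several tools, depending on platform and environment.
2. Capture the data, transfer it, and interpret it manually.

| Tool                   | Usable for this case? | Gaps                        |
| ---------------------- | --------------------- | --------------------------- |
| Chrome heap snapshot   | yes                   | heavyweight in production   |
| sampling heap profiler | yes                   | less detail than a snapshot |
```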
And so we have the same structure for all the use cases, and at the end of the document we have the available tools. Each tool has a little explanation of what it is, where you can find it, and what the pros and cons are when you are using it. So that's the structure I started to think about. I already asked some folks for their opinion on the structure, thank you for the feedback, but obviously I would be happy to get feedback from more people and to extend those use cases together. And that's when we could really start to think about prioritizing. First, though, I think it's better to just capture as much information as we can. Is there any question, comment, or objection on collecting this information? No. Okay.

Then this is the more collaborative part, where I hope we can have some good conversations. There are some open questions if we want to become more focused. The first one is which environments to support. Node.js can be used in so many environments, and it's not obvious in which one we can have the biggest impact. I looked at the survey, and it looks like most people use Node.js for APIs. So the proposal is basically to start with those use cases. And one question is how we can collect more information, and how frequently. Is the annual survey enough? Does it already have questions at this level of detail?

So my question was: does the annual survey already have such questions? If I remember correctly, the annual survey question is basically just which environment you use.

I'm just wondering if there's any kind of diagnostics slash analytics feedback that's built into the runtime, around, like, automated gathering. Is there one? I've never heard of one. Maybe not, and it's not like it was ever seriously considered, obviously, but I'm just curious if that's a thing. Has it ever been discussed or considered?

I don't think I've ever seen an in-depth discussion, but it has come up a few times that we could somehow have a call-home mechanism. But that very quickly becomes a conversation where people are concerned, and I don't think a lot of people would turn that on.

I think even earlier this week, though, people were talking about it. Yeah, that was something that came up yesterday when we talked about the implications of gathering usage numbers. We had some ideas for how we could actually do some kind of opt-in telemetry, like integrating it into testing frameworks and code coverage frameworks: basically have it opt-in at the Node level, but opt-out at those levels. That would, for example, work very well when you're just running code in CI.

And just on that, after the conversation yesterday I did open a PR that adds an internal-only telemetry function that emits to the trace event log, using a new Node category that is off by default. Obviously, if you're tracing additional stuff there's a lot more noise in there, but all it does is essentially emit a trace event so that we can count how many times something is being used. It's a start in that direction.

Yeah. I was curious whether the Node project is at a critical point where, by not having this ability to collect this information, we're shooting ourselves in the foot, even though it's something that needs to be done really responsibly, you know, data collection. I mean, I'm, like, hashtag stop-tracking myself.
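For concreteness, here is a rough sketch of that trace-event-based counting idea, using only the public trace_events API; the PR described above adds an internal-only function, and the category name here is illustrative:

```js
// Rough sketch of counting usage via the trace event log, using only
// the public trace_events API (the actual PR uses an internal-only
// function and a dedicated category that is off by default).
const trace_events = require('trace_events');

// Nothing is recorded unless a category is explicitly enabled,
// mirroring the "off by default" behaviour mentioned above.
const tracing = trace_events.createTracing({ categories: ['node.perf'] });
tracing.enable();

// ... exercise the feature under observation ...

tracing.disable();
// The events land in node_trace.1.log (JSON) in the working directory;
// counting occurrences of a given event name there gives a rough
// "how often is this used" number.
```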
So maybe the conversation is actually worth revisiting. And at the very least: what's the MVP? What would that look like? And, you know, yeah. Do you know of any runtimes or languages that do that? That's sort of my first question: has somebody else done this successfully?

So I know of tools that call back home, but it hasn't been done in runtimes, and it has also been controversial. It would be great to get the data, and I know other languages have talked about it as well. It's never happened, because if you have confidential data, you just don't want anything going out.

Yeah, and that's why I said it's about an MVP: what is the minimal piece of information that would not be an issue, right? Nothing that would reveal a pattern about what you're doing; it's really just metrics on where you're running it, maybe how often, or something. And to me, thinking at the macro level: this is an open source project used at such a scale that even partial data is still large. I think there's a case to be made here to the users of Node: if you really love this project, we really want this data, it's going to be used responsibly, and there are no patterns, no inference that can be made here, other than to improve the infrastructure and help us prioritize our backlogs.

I mean, I think it's actually not a bad thing, but I don't know that it's actually critical to this particular discussion, right? A survey or something might actually get what's needed. And in fact, for the best practices, I think it's best to just choose what we think the best ones are and move forward, rather than worrying about whether we got the absolute best one.

Yeah, I agree with both points: we should start to collect the data, and for the telemetry effort that was just started, maybe we should make sure the analytics requirements are also included in it. And then the question really is: is there anyone who thinks we should block this effort on the survey, or can we run with basically what we see and what data we have from the previous survey?

Like, would it be interesting to just put out the list and have everybody here show which answers they're most interested in? I would say that's not binding, but it just gives another sense. Okay, yeah. Who is using Node for APIs, like an API basically written recently? Okay, that's quite a few. Who is using Node for rendering, like rendering websites as part of it? Who is using it more for worker kinds of processes, and by worker I mean consuming from a message broker, so not real-time? A similar amount. How about front-end tooling? Looks like APIs are the most, and all of the rest is pretty much the same amount, I would say. What do you think?

The other thing, too, is: are these really that different? If you have the best practice for the first one, is there something different about using it for an API? That's a fair question. I think it's more about the environment. For example, front-end tooling runs mostly on CI systems or the developer's machine, while API and worker processes run in server environments. And looking at it from another angle:
Usually the worker kinds of processes are not as latency-sensitive as APIs, so it's probably more acceptable there to do more disruptive debugging, maybe even stopping the process, because, sure, you can just go back to the queue afterwards. But that's a very broad question. Okay, I'm not expecting that we will make a decision here, but it sounds like maybe we don't have to block the effort on this, and we can refine which use cases matter based on real-world data when we have it.

I'm just wondering, I don't know if you have a plan in your agenda to discuss the issue of breaking changes and support across versions for diagnostic tools; just curious. And then also potentially some deprecation or pruning of tools, or consolidation of tools, since you said there are 25-plus or whatever.

So I have an open question around that, which is about having long-term support, but not on specific tools; rather on a use case, so that we always provide a way for you to, I don't know, debug a memory leak. I will talk a little bit about that when we get there. Okay, then I'm moving forward, and we arrive at the topic.

So we had a lot of conversations over the last couple of diagnostics summits, and then we started to work on the support tiers. I think we identified four tiers. One of them is that every commit on master should pass; the next one is every LTS release. I don't remember the tiers completely, but you get the idea: having support tiers. And we started to go through tools, deciding what we expect from a specific tool, say, I don't know, the Chrome inspector, and which tier it should be in. This is ongoing work. But now I've been thinking about it, and maybe, instead of tooling; I mean, it's fine if the tooling changes. It's not preferred, because companies may be tied to features of specific tools, so definitely not preferred. But maybe it's a better approach if we say that we will always give you a way to deal with your diagnostic use case. Maybe it changes between two releases; maybe it will be a different tool.

Let me give some historical reasons why I started leaning into that. Netflix was heavily using core dumps back when it was feasible to inspect them to see frames, variables, and so on. And there was lots of pushback from the community on this tooling, because it's not really what everyone uses, so it's clearly not the highest priority of the people who are working on this technology. And we put a lot of thought into how we could replace our core-dump-based processes, so as not to have these hard requirements in our environment. And basically, if you start to log more data, if you start to put extra properties on the error, and now that in Node 12 we have the new diagnostic report that can be written out on an uncaught exception, there are actually other ways to deal with the use case. We are not there yet, and it probably won't be one tool in the future, but maybe with multiple solutions we can have something that covers it.

So that's the tension: it would be great to have one tool and support it across releases, but if that's not possible, we can at least make the commitment that you can always deal with a specific use case in Node, and we will always give you documentation, a best practice, a recommended tool for it, even if that changes between releases.
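As a sketch of that "log more data, put extra properties on the error" direction: everything below is invented example code, and the report flags are the experimental ones from Node 12:

```js
// Sketch: attaching the context you would otherwise dig out of a core
// dump directly onto the error. All names here are illustrative.
const queue = ['job-1', 'job-2'];

function processJob(job) {
  try {
    throw new Error('downstream timeout'); // stand-in for a real failure
  } catch (err) {
    err.job = job;                 // extra properties on the error
    err.queueDepth = queue.length; // state captured at failure time
    throw err;
  }
}

// Run as: node --experimental-report --report-uncaught-exception app.js
// (Node 12), and the uncaught exception below additionally produces a
// diagnostic report next to the enriched error.
processJob(queue[0]);
```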
So the open question to the group is basically: who believes we should still stick to tools? Please raise your hand if you believe that we should still focus on tools instead of use cases when we are talking about long-term support. I don't see hands. Then let me twist the question a little, to have some conversation around it: who believes that we shouldn't focus long-term support on use cases?

Maybe the question should be: do we have sufficient use cases to cover the tools? When you ask that question, it seems like a yes to me, because I'm imagining that there's a use case behind my favorite tools, so that you won't break my favorite tool.

I can only answer that this is exactly why I believe we should start to collaborate on that document, make it more official, collect the use cases, and identify the tools. I don't really have a better answer today, but I agree with you: until we have the use cases specified, we probably can't ensure that those tools will be supported, because we don't want to promise support for something and then break it. At least that's my opinion.

I'm not so fussy one way or the other, but I think we have to have testing in place that covers whatever it is we support. That's the only way you'll recognize soon enough that something is broken, to be able to make sure it's fixed by the time a release goes out. So whether it's specific tools or specific use cases, you need to make sure it's tested early and often enough to get what you're hoping for, which is that the next release that goes out works.

I also agree with that. And I think the really hard question will be: when the tests start to break, should we fix the tool, or should we find an alternative? That will not always be an easy question. Yeah, I think, like you said, in most cases it's hopefully easier to fix the tool, but if you can't, switching to something else seems reasonable; it seems right to keep it flexible enough to say, well, that path isn't working out. Yeah, absolutely agree.

So how about this: our test coverage is not always good, either. We have actually seen an issue recently where we do have the test, for Linux, for example, and for many, many crazy reasons it's still broken; nobody noticed it was broken for a while. So can we have some kind of process to check that the use case is actually usable by a human being? Maybe a manual process of some kind? Just to be sure, because a passing test doesn't always mean the user will be able to do it.

Yeah, I mean, what you're describing is this really interesting paradigm for me around testing diagnostics. I think there's a whole infrastructure that needs to be set up, and it's a bit of a different mental model than what we're used to. We think of conformance tests, or unit tests, or integration tests; this is a very different kind of test. So, Mike, go ahead.

Another thing, just another way to consider it: the diagnostic tools are built on top of the project. One of the ways we could approach this is testing not just the use case, but testing the contracts.
We're not necessarily testing the tools, but we can verify that if the trace event log format changes, or the inspector protocol, if any part of that surface changes, those are things that we can build tests for much more specifically. And case in point: if we land one change to the trace event format output that forces a change in a tool, we would not expect Node's CI to notice whether that tool is still working or not. But if the output changes, if the trace event output changes, that's something we would catch by testing those data formats and contracts. Yeah, sure. So I have a question for you on that: are we contracting on the inputs and the outputs? I think we're only contracting on the outputs.

So I think what James said leads back to the original conversation, that support should be based on the use case instead of tooling, because, as you mentioned, Node core can't really make guarantees about something it doesn't necessarily even own. Yeah, so for me it leads back to making sure that there is always a way to do, in future versions as well, what the user was doing in previous versions.

Right, and just extending what you said: the project can say, we need to have data exported which allows you to address the use case. So the tool may be broken because somebody made a change, changing some field, which is acceptable, and the tooling just hasn't caught up yet. That might still be a case where you say, well, there's still a possibility to do it. But you can try to say: we're not going to break the thing that generates the data. If it's not generating the data at all, that's actually a problem; whether you consume it yourself or somebody gives you something that does is a separate question. Basically I'm saying that I think it's not good if you have a gap in diagnostics for any amount of time. So I would definitely want there to always be a way; it doesn't have to be the same tool, but there has to be a way.

Yeah, so this is a totally newbie question, but why are diagnostics not already first class? Because it seems like you're asking for permission for them to be first class, and for me, as a user, oh my God, of course they should be first class. How could you not think about support for diagnostics? Just curious why.

Yes, that's a very good question. Do we have a good answer for it? I don't think there's a good answer other than that it just hasn't been. Some diagnostics have been heavily used by some folks, but maybe not broadly enough by the community. And I'll come back to the testing part: it's not that there was an intent not to keep things working, but unless you have the testing in place, it's very easy not to realize you broke something, and sometimes even when you do have the testing in place, right? So for me it's all about getting as much testing in place as you can, because then people see when things no longer work.

Early on, some of the diagnostic tools were just not part of core. Google opened up a couple of diagnostic tools, for example, so we just used a lot of external tools, and we never started the conversation about whether maybe this is something that core must own. And if core doesn't own the tools, then it's not something it can maintain. I think I can hand this question off.
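Returning to the contract-testing idea from the top of this exchange, here is a hypothetical sketch of what such a test could look like, asserting only on the trace event output format; the category, file name, and field list are assumptions for illustration:

```js
// Hypothetical contract test: instead of testing an external tool,
// assert that the trace event log format the tool consumes stays stable.
const { execFileSync } = require('child_process');
const fs = require('fs');

// Produce a trace event log with a trivial workload.
execFileSync(process.execPath, [
  '--trace-event-categories', 'node.perf',
  '-e', 'setTimeout(() => {}, 10)',
]);

// By default Node writes node_trace.1.log in the working directory.
const log = JSON.parse(fs.readFileSync('node_trace.1.log', 'utf8'));

// The contract: an array of trace events, each carrying the fields
// downstream tools rely on.
for (const event of log.traceEvents) {
  for (const field of ['name', 'ph', 'pid', 'ts']) {
    if (!(field in event)) throw new Error(`missing field: ${field}`);
  }
}
console.log('trace event output contract holds');
```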
So, two points on that. One: we do have external projects generating those artifacts, to take that one thing. And going back to the point about supported diagnostics: part of the reason that the trace events API is still experimental is that we do not have use cases that we can point to, to describe how it is supposed to work. We have some trace events that we are emitting, but there's not a solid enough set of use cases around how it's supposed to continue working.

So I guess this leads back to the beginning: we need to document what the use cases are, agree on what we support, what we expect, and how they are used; provide the best practices; and then we can start to think about what diagnostics-first-class means for Node and its support. So it looks like this specific question has a dependency, and I would recommend resolving that first.

The rest of the slides are basically about drawing some unicorn future: having these always-supported use cases between releases, always having best practices, working towards the ideal journeys, and minimizing the tooling gaps. And there are other questions related to this initiative; we can pick some of them, because we still have like 15 minutes. One of them is: how do we influence those ideal user journeys? As we discussed, not every diagnostic tool is in the Node repo or maintained by us, so how can we influence something, and have the contract testing, for example, when it's owned by the community? Is there any idea or comment on this?

So, two points here. One is that this is such an interesting paradigm, where the open source community is developing tools that are extremely critical to the ecosystem, and I don't know what the approach or the process would be for reaching out to them to see if they'd be willing to make those part of core; I imagine we're going to get a bunch of different answers, too. And then the other question I have is around the numbers. You said that people don't complain enough, right? And that makes it seem that diagnostics is less important than the first-class things we're talking about. But the reality is that there's only a sub-sub-subgroup of Node users that are ever going to use diagnostic tools. So if you're used to hearing a lot of complaints, you can't treat these the same: they don't have the same weight, they're never going to have the same user base, and so ultimately the threshold should be much lower than it is when it comes to diagnostics. You have to look through a different lens. It's not a tool that everyone is going to use, but it's very important that people can use it, and it's important for us.

Yes, that's a very good point. Onboarding more enterprise companies onto Node.js would, I believe, require more focus on diagnostics; at least that's what we hear when we are reaching out to other enterprises. That's a sharp opinion; I have another one on enterprises. The other reason the noise factor might be low is that a lot of these enterprise companies are closed source, and so there isn't a culture of open source or inner source, you know what I mean? So that's another thing to really keep in mind. I could be
wrong, of course. Yeah, and there are definitely companies that, for example, are just not upgrading Node: they pick a version where the diagnostics kind of work. Yeah, I agree with that.

So one question around this, which summarizes it: should we aim to move some tools under the Node.js organization's repos and have more support from the foundation, or is that not necessary and we can work around it? I think, similar to what we do with community modules, we have these tools that are out there; we can go to those projects, which have their own test suites they basically self-test against, and whenever we are making changes we can run those test suites, so at the very least we get a heads-up when we break something. Something like that is probably the best we can do at this point.

So it sounds like you are advocating for contract kinds of tests rather than owning those projects. That's a fair point: if we have good diagnostic APIs and they provide the necessary data, then the community can build on top of that. I'm just reconciling that with tools like 0x. So it's kind of a balance: there's a wider community of tools, and you don't want to get in the way of that; but if there's no community tool and it's really critical to Node, then maybe we should start one as a project. We shouldn't be forced to, though, and moving a few things under the project itself doesn't make the testing easier.

How can we make that decision? I don't know that you're going to be able to say, here are the rules. I think it's more like: if somebody says, well, this one I think is critical, and they can make a case, and we see that it's not just competing, because you wouldn't want to snuff out competition; if there's only one project, or zero projects, that do that, it's an easier case to say it needs to stay alive, it needs more support, it needs more development, it's critical. So it sounds like: when there is one project and it's under-resourced, maybe the foundation can lift it up, because it's still an important piece. Okay, that could be a reasonable way forward.

That was about tooling and use cases, and just to wrap up this whole session, one way to move forward would be this. First of all, we should collect the use cases, for which I have no better idea than to ask you to please start putting your thoughts into the document, and let's see which ones matter, or maybe even plus-one them; we need to find some way to prioritize the use cases. You know what, no plus-ones for now; let's just collect what we have and prioritize later. Then, after the prioritization, we should have deep dives. We've done that before in the diagnostics working group: we have the usual bi-weekly calls where we go through the in-progress action items, but sometimes we do deep dives where we pick one specific topic, and the people interested in it show up and discuss it. So we could have a deep dive per use case, discussing what the ideal world for that use case looks like and what we are missing today. That could be one way forward: one deep dive every two weeks. It would probably be a longer effort, but I think even if we make progress on just one or two use cases, that would be a good start. And with that, I have this unicorn slide, where I imagine that everything is close to the ideal and always supported, everything is accessible, and diagnostics usage increases. I
really hope we can reach this one day. Thank you for the attention. We have ten minutes left, since I finished early; we can either give the time back as a break, or if someone has a question, please raise your hand. Or maybe there is no break; I'm sorry, there is no break, so sorry.