In part two, we talked about the various tools that you can use to profile your code in Visual Studio. So what happens when you need to use those tools when you have an application that is out in production or is remote? Find out in part three of our series on profiling in Visual Studio Toolbox. ["Dance of the Beast"] Hi, everyone. Welcome to Visual Studio Toolbox. I'm your host, Leslie Richardson, and today I'm joined by profiling PM Esteban Herrera. Welcome, Esteban. Thanks, Leslie. Great to be here. Great. So last time I was speaking with Sagar in part two about how to choose which profiling tools you should be using and when. So in part three, what are we gonna be talking about today? Great. So yeah, I'm here to talk about how to run those tools without Visual Studio, specifically how to use those tools to profile in production. That's cool. So you don't even need the IDE open to use these tools, really? To collect the data, you don't. It's kind of a two-part process. The first part is collecting the data, and for that, you don't necessarily need Visual Studio, but to actually open and analyze the data, that's where Visual Studio will come in. Awesome. So this probably has something to do with profiling in production, right? Exactly. Yeah, profiling in production is something that will allow you to get much more realistic data. You're actually profiling exactly what your users are experiencing, as opposed to profiling on your dev machine or in your staging environment. And it allows you to see issues that don't always show up in those environments. So this sounds really important, since I bet a lot of people don't bother to use the profiling tools until a user who's been using their live app tells them that there's a performance issue, right? Exactly. That's something that we hear quite often, that profiling is something that people don't tend to think about until something's on fire. Cool. 
So up to this point, we've kind of been talking about how you can use these tools in a local environment, but once your app is out in the world and in production, what is different about that profiling experience? Sure. So there are a few things that are unique about production environments, as opposed to your dev environment. In the previous episodes, you saw how to run the tools by starting the application with the tools kind of aimed at your application. But in production, you don't really have the ability to start and stop your process in order to do that, because that means that users are going to see the app start and stop each time. Your overhead when profiling in production is also really important, because you don't want to slow your app way down while you're collecting this data; again, users are going to see that. And one last thing that's unique about production is that you could potentially be running on huge multi-core machines with a ton of volume. And so the traces that you collect on these machines have the potential to be really big. So these are all things to keep in mind when you're collecting profiling data in these scenarios. Great. So yeah, it sounds like those are some pretty unique challenges that you probably wouldn't even think twice about if you're just profiling locally. So with that, how do you even start to profile when you have an app in production? Sure. So we're going to go over three ways to profile in production and really dig deep on one of them. The first way is to run the tools from VS, like Sagar explained in the previous episode. But if you have Visual Studio installed on the same machine that is in production, I have some bigger questions for you, because that's not a very realistic scenario. 
But it is possible. If you happen to have Visual Studio next to that application, in that same Diagnostic Tools window that comes up when you hit Alt-F2, there's a dropdown that lets you target which application you're going to profile, and you can actually attach to a running process. So in that scenario, that's how you would use Visual Studio. So once you attach to the process, are you able to use any of those performance profilers as usual, or are there any limitations now that you have a process that's not running locally? There are a few limitations, and the Diagnostic Tools window will let you know: when you select the running process, the tools that are available to run will be grayed out or not, depending on what the process is and what tools are available. But our basic tools that should kind of give you a picture of what's really happening, or at least what the next questions you should be asking are, those will be able to run. Sweet. Yeah, and so I'm going to jump into the second, slightly more realistic way of profiling in production. And while I do that, I'm going to share my screen with you. And so this way of profiling involves using something called vsdiagnostics.exe, and this is a command line tool that's available to install on any machine through the download link that will be in the description of this video. You download what we call the Remote Tools for Visual Studio, and this will be included in there. And what this tool will allow you to do is run the same tools that come with Visual Studio in the Diagnostic Tools window, just from the command line. You can collect that data, output it to a file, and then open it in Visual Studio on any machine that has Visual Studio 2019. Where do I have to go in order to download the remote tools? It's available through the Visual Studio website, and the link will be down below in the description. Yeah, so once you download it, running the tool is actually fairly simple. It's just a set of commands. 
You start by running the executable, and you have to tell it which process to attach to. So you say vsdiagnostics start, and you name the diagnostic session; in this case, we'll just number it one. And then you tell it what process to attach to, and for that you just need a process ID. I have a process already running. It's the same application that we've been using throughout these episodes to demo some of our tools. Let me grab the process ID real quick. It's 29848. And then I have to load what we call the agent configuration. This is kind of where some of the magic happens. This is a file that describes what collection agents, which is essentially synonymous with tools in this case, we're going to be running, and then allows you to tweak some of the settings for those agents. So in this case, I'm going to run the CPU agent, which is synonymous with the CPU Usage tool that you see in the Diagnostic Tools window. So can this work for both managed and native applications? Yes, it can. There's a table that, again, will be in the description, but there's a table in our docs that describes exactly what all of our diagnostics tools can run against. And most of the tools can run against both native and managed projects. Okay, so it looks like I accidentally deleted my line, but... As one does. I happen to have it copied. I just need to update this process ID. And as soon as I do that, 29848, I will just hit enter and I will start collecting data. So now I can go ahead and drive my application, or in production, let my users drive my application for me. And when I'm ready to stop collecting, I just tell the tool to stop that session, which we named one, and then I tell it what the output file should be. Again, I will go ahead and... Will it automatically stop? Could you tell it to trace the code for five minutes, like you do by default with the Performance Profiler, or is this a case where you need to stop it manually? 
This is a case where you need to stop it manually. And this is something, like I mentioned earlier, where you need to be really careful, because this file can grow to be very, very large if you forget about it. Oh gosh, yeah. So don't go brew some coffee during this time for too long. Just the same way as you want to be careful with anything in production, hopefully by the time that you're profiling something in production, you have somewhat of an idea of exactly what the question you're asking is. And if you don't, then I would suggest starting with the CPU Usage tool first and trying to see if you can spot any problem areas, or what the activities are that are happening in your application when your issue arises, and really try to narrow that down so that you're not creating some huge overhead in your application in production. So yeah, to stop collection, you just tell it to stop. This is the diagnostic session one that I named earlier. And then you just tell it where to output that .diagsession file to, I hit enter, and the collection stops. So at this point, I can go to that output directory and open that file, and I will get the full Visual Studio experience that I would get as if I had collected it through the UI. So using this method via the terminal, it's good for when you don't have Visual Studio right on hand, right? Exactly, yeah. So this is kind of the scenario that we have heard of and seen happen: you have access to a production machine, you don't have Visual Studio on it, but you have the ability to download these remote tools. So with those tools, you have access to this executable; you can run it, output the file, and then through FTP, or whatever way you wanna transfer that file, you would move that to a machine that has Visual Studio and do your investigation there. That's great. Like you said earlier, it's breaking that collection part apart from the analysis part. Sweet. 
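Putting the flow together, a command line session like the one Esteban demos might look roughly like the sketch below. The start/stop syntax follows the documented VSDiagnostics usage, but the session number, process ID, config file, and output path are all placeholders:

```shell
# Start diagnostic session 1: attach to an already-running process by PID
# and load an agent configuration describing which collection agents to run.
# (PID 29848 and the config path are placeholders for this sketch.)
VSDiagnostics.exe start 1 /attach:29848 /loadConfig:AgentConfigs\CpuUsageBase.json

# ... let users drive the application while data is collected ...

# Stop session 1 and write everything collected so far to a .diagsession
# file, which can then be copied to a machine with Visual Studio and opened.
VSDiagnostics.exe stop 1 /output:C:\Traces\MyApp.diagsession
```

Because collection only stops when you issue the stop command, the output path is worth deciding up front so the trace doesn't end up forgotten and enormous.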
So, like back in the day when offices were still kind of relevant, this would be nice if you went home and then you noticed there was a problem, but you still have like a work setup in your house that you can use this method on, right? Exactly. And something else that we hear a lot is that the people that are supporting an application or trying to debug it in production are not the same people that are necessarily developing the application. So sometimes you might have one person collecting the trace, kind of working through an issue, and then passing that off to a developer that's gonna look at that data and try to diagnose what's going on. Nice. That's pretty convenient. So Leslie, I wanted to touch on those agent configuration files before we move on and just show you inside one of them. They're fairly simple files. We're looking at one right now. It's just a JSON file that tells our tool which collection agents we're gonna be running. In this case, we ran the CPU agent. And then it allows you to configure some things about that, like the sample rate. And these are the same configuration options that you see when you look at our tools through the UI available in Visual Studio. That's great. And we have samples of all those configuration files? Yeah, we have samples of those. This is a file called CPU usage high, meaning we're sampling 4,000 times a second. Our usual is 1,000 times a second. But we have a bunch of example agent configurations. And then for some of our advanced users, we actually have the ability to create your own collection agents. And... What? Yeah, that is... Nice. That's something for a much deeper dive. Maybe even an episode in the future. But... That's really cool. This is how you would kind of let our command line tool know about your collection agent. What's a circumstance where a user would want to make a custom agent? 
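To make the idea concrete, here is a deliberately simplified sketch of what such a file expresses: one named collection agent plus a tunable sample rate. The property names below are invented purely for illustration; the actual sample configurations shipped with the remote tools use a more involved schema:

```shell
# Write an illustrative agent configuration. NOTE: these property names are
# made up to illustrate the concept; they are not the real VSDiagnostics schema.
cat > cpu-usage-high.json <<'EOF'
{
  "Agent": "CpuUsage",
  "SamplesPerSecond": 4000
}
EOF

# Sanity-check that the file is well-formed JSON before handing it to a tool.
python3 -m json.tool cpu-usage-high.json
```

The point is simply that the file names the agents to run and tweaks their settings, mirroring the options you would otherwise pick in the Visual Studio UI.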
Is it like if they're in an enterprise setting and there are specific things that they need to diagnose with their code, then they can make their own? Is that the best scenario? Exactly. Anything that your environment can kind of emit as data, we can listen to. And we can listen to a combination of things. Like maybe you're interested in collecting both CPU and memory data at the same time, or you want to do some transformation to it automatically; you can create your own agent here. It's really pretty versatile. Can you expand upon, for example, the CPU agent, like take that base agent and then build on top of it? Potentially, yeah. Nice. You start digging into some pretty deep internal stuff there. But it's definitely built in a way that we can kind of plug and play with whatever; if you want to write your own analysis piece or your own collection piece, this is built in a way that allows you to do that. So if users want to learn more about agents, since it seems like a whole different rabbit hole, is there a doc for that, I'm assuming? There is a doc for that. It's fairly young, and it's kind of a live document at this point. But if you reach out to me or Sagar, we can try to answer as many of your questions as we can. And then when they get too complex for us, well, we know who the right people to ask are. But we can connect you and get you on your way for sure. That's really cool. Yeah. So once you're done collecting and you've output a file, you can move that file anywhere that has Visual Studio, open it with Visual Studio, and you'll see the same analysis piece that you would see as if you had collected with the UI. And I'm gonna show you that right now. I'm gonna switch over to Visual Studio. So now I'm in Visual Studio; I navigated to the directory where I output that .diagsession file to. 
And if I click and open it with Visual Studio, after a little bit of processing, you'll see our tools start to build these pretty graphs and show you useful information about your diagnostic session. Awesome. There we go. Here's a graph. In this case, because I didn't drive our application too much, and, like Sagar mentioned about Just My Code in our previous episode, I didn't tweak those settings here, this is not particularly interesting. But I'm just demonstrating the fact that you can collect data in this way and you'll get a .diagsession file that opens in Visual Studio. That's really nifty. Yeah. The other cool thing is, depending on what tools you were running at the time, Visual Studio will be clever and will show you the correct visualizations over that data. So for example, if I was running the .NET allocation tool, it would show me that page right now. Gotcha. Yeah. Cool. Similarly, in the terminal, if you were to specify multiple agents, then it would show multiple windows if you tried to open up that trace in Visual Studio. Exactly. And this brings up a good point. In Visual Studio, in the Diagnostic Tools window, when you select certain tools, you'll notice that some of them get grayed out, and that's kind of our way of protecting the user from running a collection session that has a ton of overhead, because this is potentially gigs of data that you're collecting in like a 30 second time span. And so it's important to kind of know what questions you're asking. This command line tool does not have that same protection, right? You could potentially run all the collection agents at the same time and generate a ton of data and a ton of overhead. So it's kind of a sword that cuts both ways. It allows you to collect some pretty interesting profiling sessions, but you should be careful that you're not running too many collection agents. 
So at the beginning, I mentioned I'd be talking about three different ways of collecting this data in production. And for the last way, there's a great episode by our peer, Sourabh Shirhatti, on a show called On .NET. Again, that will be linked in the description, and he does a deep dive into a tool called dotnet-trace. Yeah, this is the way to collect trace data for any .NET Core project, anywhere that .NET Core runs. So this is our cross-platform way of collecting data. Great. Yeah, it's a very similar experience to the vsdiagnostics tool. You download the tool either through the command line or through the NuGet gallery. You point it at a process through the process ID. In this case, you tell it what event providers to listen to. So it's a slightly different mental model. In the first case, you're naming collection agents that are gathering the data. In this case, you're telling it what event providers to listen to, and it gathers their data and keeps it. Are those gonna be the same as the agents, like CPU usage or memory or the allocation tool, that sort of thing? Not quite. We have some sample files, and I think Sourabh mentions them in his episode, but there are some sample provider configurations and flags that you provide the tool that will enable some of our tools to open those files. So it's not quite one-to-one: a CPU collection agent will translate to one or two or maybe several event providers. But once you have those providers enabled, you will be able to light up whatever tool in Visual Studio. Okay, so this is really good for, like you said, the cross-platform scenarios. Exactly, anywhere that .NET Core can run and you have access to a command line on that machine. You can install this, run it, output a file (in this case, I believe it's called a .nettrace file) and then move it to a machine that has Visual Studio, open it, and get all that same great UI that allows you to dig further into your code. 
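The dotnet-trace flow described above can be sketched like this; the PID is a placeholder, and Microsoft-DotNETCore-SampleProfiler is just one example of an event provider you might enable:

```shell
# Install the cross-platform trace tool as a global .NET tool.
dotnet tool install --global dotnet-trace

# Attach to a running .NET Core process by PID and collect events from the
# specified provider. (PID 29848 is a placeholder for this sketch.)
dotnet-trace collect --process-id 29848 --providers Microsoft-DotNETCore-SampleProfiler

# Stopping collection writes a .nettrace file (trace.nettrace by default),
# which can be moved to a machine with Visual Studio and opened there.
```

As with vsdiagnostics.exe, the collection step needs no IDE at all; only the analysis step does.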
Plus, who doesn't love a great cross promotion on top of that, show-wise. Exactly. That's a great episode. I watched all of it in preparation for this. Nice. Shout-out to Sourabh. Shout-out, exactly. Hopefully with the combination of the dotnet-trace tool for your .NET Core projects and vsdiagnostics.exe for any of your other projects that are running on Windows, you have access to enough tools to kind of answer those initial questions about your investigation. Yeah. I didn't realize there were so many entry points when it comes to profiling your code that's already been shipped in the world. So that's really nice to have, since you can feel kind of restrained whenever you have to try to diagnose code that's already available to the public without having to shut it down first. Exactly. And we're always working to make this a richer experience. I know there's a lot of work, especially in the dotnet-trace tool, that's happening right now in order to turn on more and more capabilities, but this is hopefully a very rich experience for our users. And we'd love to hear any other ways that we can make this better. And on the profiling team, what are some ways that you're currently trying to make those areas better? There's a few. Performance is always on our list of things to do, because if we can shave a few milliseconds or seconds here and there when showing your data or collecting it, that can translate to minutes and hours of time if you're doing that every day. For each individual tool, we're trying to polish that experience and make it easier to tie your data to actual code. So not only do we wanna help you ask questions and collect data to answer them, we want to go one step further and show you where in your code you can start to look at making changes in order to address some of those issues. 
So in a lot of places in our tools in Visual Studio, we're adding the ability to right click and go to code, and we'll hopefully take you right to the function or the line of code that is responsible for that activity. It's cool, so it's like you have the hot path and then you have the hotter path. Exactly, yeah, it's all about shaving those seconds and trying to shave away all the superfluous data and just show you exactly what is going to help you solve your problem. Well, I look forward to seeing those changes, as I'm sure a lot of people who experience these issues on a regular basis do as well. Yeah, and I'm excited to hopefully come back on and share a whole batch of improvements sometime in the near future. Yeah, definitely. So speaking of next time, what are we gonna talk about in the profiling world in part four? Well, Leslie, now that we kinda covered a little bit of introduction and how to run the tools both from within Visual Studio and outside, we're gonna start going down each individual tool and doing a really deep dive on how to really become power users of each tool available to us and help do those deeper investigations. Sounds like a lot of fun, especially because in the last part, we talked about a ton of tools in a short-ish time frame, so it can be good to narrow them down and do a focused episode on each. Exactly, yeah, this is a very broad topic, and so we kinda got the groundwork out of the way in these first few episodes and we're excited to kinda do a deeper dive on each tool. All right, looking forward to it. So thank you so much, Esteban, for coming on and talking about profiling. Seriously, it's a tool that a lot of people should definitely check out if you haven't already. It's extremely useful. 
Yeah, it was a pleasure to be here, and like you said, I would highly encourage people to mess around with these tools before something's actually on fire, because it can be kinda stressful trying to learn all these things in the moment. In the moment, yeah. So even if this is not something that you think about every day, using these tools might actually surface some easy, low-hanging fruit to make your app more performant, and if nothing else, you'll be familiar with the tools for that day that I'm sure will never happen, because we all write perfect code. Of course. But you'll be ready if, by some miracle, a bug makes it into your code and you need to use these tools. I leave all of my applications in release mode by default because I know I'm never gonna mess up my code. I don't need to debug. Same thing. Yeah, all right. Yeah, well, thanks again, and tune in next time for part four, and until then, happy coding.