In part one, we talked about profiling: what it is and when and why you should use it. Now it's time to pick the right tool in Visual Studio to do so. How do we do that? Find out in part two of this jam-packed episode of Visual Studio Toolbox. Hi, everyone. Welcome to Visual Studio Toolbox. I'm your host, Leslie Richardson, and I'm joined once again by profiling PM Sagar Shetty for part two of our series on profiling. Welcome, Sagar. Thanks for having me, Leslie. Good to be back. Great. Last time, in part one, we talked in general terms about what profiling even is, when you should use it, and why you should use it, but at this point I still don't know how to use the tools. What are some of the tools available to us in Visual Studio that we can use to fulfill our profiling dreams? Well, you're in luck, Leslie, because this episode is all about tools and when to use them. To kick things off, within Visual Studio, at a high level, there are two sets of tools that you can use for profiling. The first one is the Performance Profiler. Basically, the Performance Profiler is a suite of tools, you can think of it as a standalone set of tools, that gives you the opportunity to do some really in-depth performance analysis, and you can use multiple tools in conjunction. How exactly do you use these tools, and where do you go to find them? We touched on this a little bit in the last episode, and I'm actually going to screen share and go into Visual Studio here. So I'm in Visual Studio, and to get to the Performance Profiler, there are a couple of different ways. The keyboard shortcut is Alt+F2. You can also go to Debug > Performance Profiler, which will also show you the keyboard shortcut. And that takes us to this landing page. Like I was saying before, it's essentially a suite of performance tools that lets you really drive your performance investigation, whatever the nature of that investigation is, right?
So I thought we'd just go through here and talk about all the tools at a high level and when you might use each one. And I'm excited to announce that we have some newer tools up and running. Let's start at the top with the .NET Async tool. So Leslie, I know you're really familiar with this; we joked last time about asynchronous code and digging into asynchronous tasks. We have a relatively new tool here, the .NET Async tool, and basically what it allows you to do is really investigate the nature of your asynchronous code and your asynchronous tasks. The tool shows you your different asynchronous tasks: the start time, the stop time, the duration, how the tasks overlap, and it gives you more insight into how your asynchronous code is functioning. So if you're doing a lot with async, this is definitely a tool to check out, and like I said, it's relatively new. That sounds great, since as is, whenever you're trying to diagnose async code, it's not the easiest. Yeah. And really something we wanted to do here was increase that visualization, because like you said, Leslie, it's not necessarily easy to diagnose or track down where the issues are happening. So again, we're bringing more visualization to the Diagnostics Hub, and this is something we didn't have before, so we're really excited about that. The next tool is the .NET Counters tool. This is actually a brand new tool shipping in 16.7, so the version of VS I have up right now is actually a preview build, 16.7 Preview 3. And I'll go ahead and plug this: I'd highly recommend customers start testing. This is an external preview build, so customers have access to it, and you can go ahead and install it right now and start playing around with the .NET Counters tool.
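To make the async discussion concrete, here is a minimal sketch (not from the episode; the task names and delays are invented for illustration) of the kind of overlapping asynchronous workload the .NET Async tool visualizes, with each task's start, stop, and duration:

```csharp
using System;
using System.Diagnostics;
using System.Threading.Tasks;

class AsyncDemo
{
    // A hypothetical workload: three overlapping tasks with different durations.
    // Running something like this under the .NET Async tool would show each
    // task's start time, stop time, duration, and how the tasks overlap.
    public static async Task<long> TimedDelayAsync(string name, int ms)
    {
        var sw = Stopwatch.StartNew();
        await Task.Delay(ms);   // stands in for real async work (I/O, HTTP, etc.)
        sw.Stop();
        Console.WriteLine($"{name} ran for ~{sw.ElapsedMilliseconds} ms");
        return sw.ElapsedMilliseconds;
    }

    static async Task Main()
    {
        // Kick off all three at once so their lifetimes overlap.
        var a = TimedDelayAsync("task A", 100);
        var b = TimedDelayAsync("task B", 200);
        var c = TimedDelayAsync("task C", 50);
        await Task.WhenAll(a, b, c);
    }
}
```

In the tool's timeline, task B would appear as the longest bar, with all three overlapping from the start.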
But basically, this tool helps visualize and support .NET counters. And what are .NET counters? They're a way to look at a set of metrics that are the initial starting point for a performance investigation: metrics like exceptions per second, information around garbage collection, CPU utilization. Up until this point, the primary way to visualize and consume these metrics was through the command line. So what we've done is bring some of that experience into Visual Studio to give you deeper visualizations and insights; we also have a table that gives you more information about these metrics. So we're really excited about the .NET Counters tool. It should be coming to everyone in GA in the next month or so, in early August. So yeah, that's the .NET performance counters tool. Moving along, the next tool is the CPU Usage tool, an absolute staple in the Performance Profiler, right? We talked in the first episode about how CPU investigations, that bucket of CPU usage and utilization issues, is a common scenario that comes up. The CPU Usage tool is really the driving force and the go-to tool for CPU investigations. Basically, this is all about seeing how CPU time is being spent during your program's execution: finding the functions or modules that are taking up a lot of your CPU time and taxing your CPU the most. So that's a very, very common tool, and we're going to talk about it more, and how to optimize it, a little later on. Awesome. Yeah, because that's not easy to track down sometimes. Yeah, and there's definitely a lot you can tweak with the settings there; we'll definitely go into that. The next tool, also sort of newish but it's been around for a little bit, is the Database tool.
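For context, the command-line experience mentioned above is the dotnet-counters global tool, and besides the built-in runtime counters, apps can publish their own. Here is a hedged sketch (not from the episode; the source and counter names are invented) using the real `System.Diagnostics.Tracing.EventCounter` API, whose values counter viewers can display alongside metrics like GC and exceptions per second:

```csharp
using System.Diagnostics.Tracing;

// A hypothetical event source publishing one custom counter. Counter viewers
// such as dotnet-counters, or the .NET Counters tool in VS, can display it
// alongside the built-in runtime counters (GC, exceptions/sec, CPU, ...).
[EventSource(Name = "Demo-RequestSource")]
sealed class RequestEventSource : EventSource
{
    public static readonly RequestEventSource Log = new RequestEventSource();
    private readonly EventCounter _requestTime;

    private RequestEventSource()
    {
        _requestTime = new EventCounter("request-time-ms", this)
        {
            DisplayName = "Request processing time (ms)"
        };
    }

    // Call once per request; the counter aggregates mean/min/max per interval.
    public void ReportRequest(double elapsedMs) => _requestTime.WriteMetric(elapsedMs);
}

class Program
{
    static void Main()
    {
        RequestEventSource.Log.ReportRequest(12.5);   // hypothetical measurement
        System.Console.WriteLine("reported");
    }
}
```

With the app running, something like `dotnet-counters monitor -p <pid> Demo-RequestSource` would show the counter updating from the command line.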
So the Database tool is essentially about looking at all the queries your application is making, and your applications in many cases are making quite a lot. It gives you that sense of: what are all the queries my app is making, and how long are they taking? And this is something really good to point out here. Going back to VS, you'll notice there are checkboxes next to a lot of these tools, and something I want to stress is that you can use multiple of these tools in conjunction. For example, using the Database tool and the CPU Usage tool together might be interesting, because the Database tool shows the queries your application is making, and one might be taking a long time; then the CPU Usage tool gives you more insight into expensive operations and which functions I might need to change to optimize that query, if that makes sense. So I definitely want to stress, this is not just one tool at a time; you have the checkboxes here, and many of these tools can be used in conjunction. That's definitely good to keep in mind. So what's the catch? Is there a performance hit in the time it takes to diagnose all this, the more boxes you check? Yeah, you got it. There definitely is a little bit of that. And to be fair, we try to safeguard our customers a little bit. As an example, I checked the Memory Usage tool's box here, and you'll notice a lot of these other boxes gray out. We'll touch on this in the next episode when we interact with the command line: you can technically run all these tools in conjunction, but within VS we try to safeguard our customers a little bit and say that, for certain tools that are really, really expensive, we run them in isolation so you don't have a degraded experience.
But yes, even with some of these other ones, I'd still recommend that when you can run multiple, go for it. If you're running into deep performance issues, sure, maybe tone it back a little, but it's great that you can run a lot of these tools in conjunction. And as we continue to develop and update the Performance Profiler, we're going to try to make a more and more seamless experience, where playing with multiple tools at once really gives you those deep insights and connections between tools. Really cool, because I think for a lot of people, diagnosing your code is already a tedious task. So if you can speed it up and make it more efficient by having multiple tools going at the same time, then why not? Absolutely, and especially when you're starting off your investigation, you don't necessarily know exactly where the problem lies: is it a memory issue, is it CPU related, is it something database, async, right? So being able to fire off a lot of these tools at once helps you get going, like you suggest. Continuing on, since I believe we just talked about the Database tool: the Events Viewer is also a relatively new tool. Essentially, this looks at ETW and .NET trace based events, things like module loading, thread start, and system configuration, a ton of different events and their corresponding properties. So this allows you to do all sorts of things, like looking at logging messages and exceptions and things of that nature. So we're really getting some more events into the Diagnostics Hub, which weren't necessarily there before.
And then lastly, under the available tools section, for now, and just to point this out: available tools are just what's relevant for the particular app I have up. We have other tools that in this situation aren't applicable, but they are tools in the Diagnostics Hub. So for any app you have, we're essentially showing you the relevant tools you can use in that particular situation. Just wanted to make that delineation. But the last available tool in this situation is the Memory Usage tool, another staple in the Performance Profiler. This is actually one of two memory-related tools we have in the Performance Profiler; I'll talk about the other one after. The Memory Usage tool is really good for a lot of different scenarios. One of its defining characteristics is that it can be used in .NET and .NET Core managed scenarios, and it can also be used in native scenarios, so it works for quite a variety of app types, including mixed-mode apps that mix both native and managed code. It's really good for tracking down memory leaks, code pathways that are allocating a lot of memory, et cetera. And segueing into the other memory profiler, that's the .NET Object Allocation Tracking tool. A bit of a mouthful, but I'd definitely recommend you check it out, especially if you're working on .NET Core applications, because we've actually been updating this tool quite a bit. It got quite an overhaul in 16.5: we added some killer icons and fixed up the backtrace a bit so it's easier to read. And the key with this tool, again, is really figuring out which code pathways are allocating a lot of memory, and in this case, which .NET objects are taking up a lot of memory.
It's really geared towards you as the user figuring out the actionable and tangible areas of your code that you can actually impact in order to fix your memory problems. And with this tool, you also get some nice insights into garbage collection, so that's definitely something to think about. So would you say the Memory Usage tool would be good for the C++ scenarios, which most people are familiar with, for hunting down memory leaks? Yeah, I really think of that one as more of the native tool, but for managed scenarios, exactly: I'd definitely recommend checking out the .NET Object Allocation tool, especially with some of the updates we've been doing. Cool. And rounding out the suite here, we've got the Application Timeline tool and Instrumentation. Application Timeline is really about XAML applications and improving performance there, things like improving the time spent rendering frames and servicing network requests for your different XAML apps. Not one of our more commonly used tools, but it's definitely there, and it's really for tweaking XAML applications in particular. And then lastly, Instrumentation. This is essentially injecting bits of code to better understand your code and certain tasks. We have two main data collection methods within the Performance Profiler. One is sampling, which is just taking data points at periodic intervals and aggregating them together. By definition, with a sampling rate there's some granularity, and you lose some of the data in between points; you can certainly increase the sampling rate. Instrumentation, on the other hand, essentially injects code to get a very, very detailed set of data at a very fine-grained timing interval. So essentially you're injecting code to better understand various operations, like reading and writing to disk, et cetera.
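The sampling-versus-instrumentation trade-off described above can be seen in miniature with hand-written timing code, which is essentially what an instrumenting profiler automates. A sketch for illustration only (not from the episode; the file path and function name are invented):

```csharp
using System;
using System.Diagnostics;
using System.IO;

class InstrumentationSketch
{
    // Hand-rolled instrumentation: wrap the operation in timing code.
    // An instrumenting profiler injects probes like this for you at call
    // boundaries, which is why it is precise but comparatively expensive,
    // whereas a sampler only sees whatever is on the stack when a sample fires.
    public static long TimedWriteAndRead(string path)
    {
        var sw = Stopwatch.StartNew();
        File.WriteAllText(path, "hello");     // the disk operation being measured
        var text = File.ReadAllText(path);
        sw.Stop();
        Console.WriteLine($"disk round trip took {sw.ElapsedMilliseconds} ms ({text.Length} chars)");
        return sw.ElapsedMilliseconds;
    }

    static void Main()
    {
        var tmp = Path.Combine(Path.GetTempPath(), "instr-demo.txt");
        TimedWriteAndRead(tmp);
        File.Delete(tmp);   // clean up the temp file
    }
}
```

A fast operation like this might complete entirely between two samples and never show up in a sampled trace, but instrumentation captures it exactly.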
Again, not one of our more commonly used tools, but it is there. So that's the Performance Profiler: a suite of non-debugger-integrated profiling tools that shows you the performance that's closest to the end-user experience. And that's our first set. We do have a second set, though. That's a lot of tools that I'm sure a lot of people were unaware of. So is all of this Enterprise-exclusive, or can anyone use it? Yeah, this is available to everyone. Community, Professional, Enterprise, it doesn't matter; it's available in all SKUs. Sweet. So the second set of tools is the Diagnostic Tools window. And I think you'll really appreciate this, Leslie, because the key thing here is that this is integrated with the debugger, right? With the Performance Profiler, we were looking at more of a standalone set of tools. With the Diagnostic Tools window, we're going to look at a few really cool things, where it plays nicely with the debugger itself. So I'm just going to close out of this diag session and go back to Visual Studio. We've got this app in the background; we'll talk about it a little more in a second. I'm going to launch this app and start debugging. There are a few different ways to do that: you can hit this green start arrow here, you can hit F5, or you can go to Debug > Start Debugging. I'm just going to hit F5 out of habit. And let me actually stop debugging here, because this brings up a great point. Basically, what this message is saying is that we're in a release build, so what I want to do is switch back over to a debug build. And this is actually really important to bring up. To use the debugger in the most efficient and optimized sense, you want to be in debug mode. It turns out that's great for the debugger, and we're going to do that here.
But it actually has a bit of an opportunity cost, in the sense that the debugger has certain operations that degrade the accuracy of the performance data; it doesn't necessarily show you the most accurate numbers. That's not to say these tools aren't relevant; we'll talk more later about when you'd use each tool. But just keep that in mind. For now, let's continue, and we'll revisit this point; we just want the best debugging experience. So this app we have going here is a web application that goes over a ton of different ASP.NET scenarios. Looking at the app, we've got retrieving and returning a JSON response, using an HTTP client to retrieve a JSON response, some async operations, et cetera. Going back to VS for a second, what I want to bring your attention to is the Diagnostic Tools window. So this is kind of... That window that a lot of people like to close. Yeah, exactly. And if you do close it and you want to bring it back up, I'd bring your attention to the search bar: all you have to do is type in "diagnostic tools window" and you can bring it back up. As a debugging gal, don't sleep on this window. It's a really cool window. There are a lot of great things in there, especially the Events tab, personally. Yeah, you heard it here, folks: don't sleep on the window. So there are four tabs here, right? We'll come back to the graphs in a second, but basically we've got Summary, which shows an overview of the different events that might be going on. We have the Memory Usage pane, where we can take memory snapshots. We have CPU Usage, where we can record a profile. So let's walk through some of the other main tabs. Events: we don't have a whole lot going on right now; we just started our app, so we've got some basic output.
I'm going to run a few of those scenarios we saw in the web app in a second, so we'll get some more interesting output here. Memory Usage: we haven't done anything yet, so I'm actually just going to go ahead and take a snapshot to get a bit of a baseline. This will come back into play a little later. We get some basic information on the time elapsed, the objects we have allocated, and how much memory they're taking up to this point. And then CPU Usage: we need to set some breakpoints, which I'm going to do in a second. Once we do that, we'll be able to get more information on the cost of specific operations between breakpoints, while our application is in a break state. But for now, we haven't really done a whole lot with the app, so I'm just going to go back to the app, hit a few buttons, do my thing, and then we'll get some more information. I'll make a few of these JSON calls here; again, this is just a web app that goes over a few basic scenarios. I'm going to spam that a few times and make a few more requests, and this is just going to populate some more interesting data for our app. Pokemon resources. Yeah, exactly. So we'll do that a few times; we should have at least a little bit of data to work with. Now, going back to VS, if we go to Events, hopefully... yeah, we see a lot more, right? We see that a controller is invoked, we have the big JSON content scenario, we have an HTTP request, we have a GET. So we have more events now, as we can see, after I actually exercised my app a little more. This gives you some basic information on what's happening, and it's also reflected in the Summary. Memory Usage: we took that snapshot before we did anything with the app. Let me take another snapshot here. And what we'd expect is... yeah, right? We have way more objects.
They're taking up way more memory, and we have this difference here. And what's really cool is that if you click on some of these, like this one for example, it's a diff report, and it opens up this table here, and now you're getting a little more insight into what's going on under the hood. What's cool about this particular report is that it's what we call a diff report, or comparison report. You have the different object types that are allocating memory over here. Actually, let me start with the standard report; that might be a little easier to grasp. At this particular moment in time, for this snapshot, you have the counts for those particular object types, the size in bytes they're taking up, and the inclusive size, meaning everything within that category. With the diff report, you get those same columns, but you're also getting the comparison against the first snapshot. So we can see exactly how much difference there was between snapshots one and two, from when we had just started our app to when we had exercised it a bit more, and exactly how much more memory we allocated. I want to go ahead and plug here, and I'll probably plug this a few times: we've got docs that go into these reports a lot more, and we'll definitely be linking them below. They give you a much more thorough explanation of a lot of these columns if you want to go back later and play around. But that, at a high level, is what we're doing with these memory reports. And the last tab I wanted to talk about is the CPU Usage tab, right? And something we alluded to before, Leslie, is that this window sits within the debugger, right?
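The before-and-after idea behind those snapshot diff reports can be sketched in plain code. This is an illustration only, not how the profiler actually measures things: `GC.GetTotalMemory` gives a rough total heap size, whereas the real diff report records per-type counts and sizes.

```csharp
using System;
using System.Collections.Generic;

class SnapshotDiffSketch
{
    static void Main()
    {
        // "Snapshot 1": rough managed heap size before exercising the app.
        long before = GC.GetTotalMemory(forceFullCollection: true);

        // Simulate the app doing work and allocating objects.
        var objects = new List<byte[]>();
        for (int i = 0; i < 1000; i++)
            objects.Add(new byte[1024]);   // roughly 1 MB of live allocations

        // "Snapshot 2": heap size after the work.
        long after = GC.GetTotalMemory(forceFullCollection: false);

        // The "diff" column: how much more memory is allocated now.
        Console.WriteLine($"diff: ~{(after - before) / 1024} KB across {objects.Count} objects");
    }
}
```

The snapshot table in the Diagnostic Tools window does this comparison per object type, which is what makes it actionable: you see which types grew, not just that the heap grew.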
And we know the debugger has tons of different tools, and perhaps nothing is more fundamental than the breakpoint, right? So what I want to show here is how we play with both worlds, the debugging world and the profiling world. To do that, I'm going to go into my code a little bit, to this JSON output controller file, and set some breakpoints. On the web app, when I was clicking those buttons, it comes back to these particular tasks, and what I want this time is that when I click one of those buttons, it triggers one of these breakpoints. So I've set a few here around this task, and now I'm going to go back to my web app. Going to the top here... and I found the scenario we want to click. I'm going to click on this, and it hits the first breakpoint. Now I'll continue and run to my second breakpoint, and now we see the CPU usage loading up. Let me maximize this so everyone can see. Good old pie charts. Good old pie charts. In this case, this is looking at the top five categories, and since we're looking at CPU usage, that means the categories taking up a lot of the CPU's bandwidth, basically, a lot of the CPU utilization. And within this view, and you get a similar view within the Performance Profiler as well, these tools are pretty similar between the two, what we have at a high level is some functions of interest that we should be looking at, things taking up a lot of the CPU, like this service taking up 50% of the CPU, and also some hot paths. Hot paths are essentially areas of code you should go back and really look at, because they're the areas that, in this case, are taking up a lot of the CPU usage. So you have a few different looks here.
You have the Functions view, which shows you specific functions to look at, and then the hot paths, which show how to drill down into the code. So having this tool is cool enough, but on top of that, Visual Studio will help you out further by pointing you to the critical areas you might want to check out. Absolutely, and that's really important, right? Because at the end of the day, we want to make it as easy as possible for customers to figure out not just what the problem is, but where it is and how they can fix it. That's exactly what we're seeing here. And like I said, again, we'll plug the docs and link them; there are some more detailed reports these can go into that show you a ton of other scenarios too. But yeah, that's the Diagnostic Tools window. As we saw there, it plays with the debugger as well: the breakpoints were directly triggering profiling information here, so the two worlds come together. Yeah. So one question I had: in the editor, when you placed the two breakpoints and ran with F5 and everything, there's some inline information that tells you how many milliseconds passed between the time of the first breakpoint and the second breakpoint. I think that's Enterprise-exclusive, but if you click on it, does it lead you to the same window, just as another entry point you can use? So let me just make sure I understand, Leslie. We're back in the code here, looking at this particular task, and you're asking about this PerfTip: if I click on this, is this Enterprise-only? Yeah, I believe this is Enterprise-only. Yep. Cool. But if you do have Enterprise, it seems like a good way to segue into that Diagnostic Tools window immediately, right? Oh, absolutely. Yeah, absolutely.
And one more thing I wanted to bring up about the Diagnostic Tools window, going back to the swim lanes a little bit. By swim lanes, I'm referring to these graphs up top. One of the critical aspects of the swim lanes is that you can time filter: you can select a time range. Here I already have a time range selected, and I can modify it to whatever I want. The important thing is that when I select a new slice of time, it filters my data across all tools to that particular time range. So if you're trying to do some analysis across multiple tools, memory usage, CPU usage, et cetera, you can get the time slice for exactly what you want, and then go back and investigate the data across all tools for that particular time. I just wanted to make sure I threw that in there as well. So what's the order of operations on that? Can I select a segment of that swim lane and then go into the memory usage tool and run that, and it will automatically filter? Yeah, you have some flexibility here. If you've collected the data, you can always go back and just time filter after that. So that's a high-level overview of the tools available within VS. We talked about the Performance Profiler being more of a suite of standalone tools, and the Diagnostic Tools window being much more integrated with the debugger. So for me, since I find myself debugging more often than not, I'd probably be looking at the Diagnostic Tools window more. You kind of already touched on it a little, but what's the circumstance where I'd want to pivot and use the Performance Profiler instead? Absolutely. One of the things we touched on earlier is that when we went to the debugger and used the Diagnostic Tools window, we switched from the release build to the debug build.
And coming back to that point, I would highly recommend that, when possible, customers use a release build and the Performance Profiler, and the reason is that the data is the most precise and accurate. Release builds provide optimizations like inlining function calls, pruning unused code paths, and storing variables in ways that the debugger can't. The debugger has its own unique operations, in terms of how it handles exceptions and loads modules, that are critical to how it functions, and unfortunately, because of that, it changes the performance times itself. So when possible, try to use release builds and the Performance Profiler, because that essentially gives you the most precise data, closest to what your customers are actually going to end up seeing. Furthermore, for certain scenarios, like external performance problems, file I/O or network issues, you're not going to see much difference between the two tools, so either is fine; but again, if you can help it, probably use the Performance Profiler. For CPU-intensive calls, sometimes there can be a significant difference in performance between the two, so definitely check whether the issues exist in release builds first. Again, because that's going to be closest to what your customers are seeing, that means the Performance Profiler as well. I guess the real case for the Diagnostic Tools window, though, is when you've got some sort of problem that comes up within debug builds in particular, and you're trying to replicate that bug; that's when you'd really want to use the Diagnostic Tools window. Cool, that makes sense. Yeah. So with all of these tools, I'm sure there have to be some drawbacks, or just some issues users might experience when trying to profile their code, right?
So are there any current workarounds, or ways to optimize this experience to be the best it can be? Absolutely. Yeah, I want to go over a few optimizations here. Again, plugging the docs, and by the way, all the docs will be in the description below this video: we've got a doc dedicated to literally just this concept of optimizing profiling settings. And I want to touch on a few high-level points and a few tips. So let me go back to VS for a second, stop debugging, and walk you through some of the different tips and things you might want to consider when profiling, to optimize the performance of our tools. The first thing I want to do is go to the Performance Profiler. Again, in this case, I'm just going to hit Alt+F2, and we've got this up. And let me just say that we do the best we can to provide the smartest defaults possible for our customers, to really help them get going. I say that to say: the defaults may be good for you in many situations, so be cautious when you tweak these, because you may not necessarily need to. Only do this if you have to, but these are just some ideas if our tools aren't acting the way you want, or if performance is a real issue; you can have some of this extra configurability if you'd like. So I'm in Alt+F2. First, a few of our tools, like CPU Usage and the .NET Object Allocation tool as well, allow you to adjust sampling rates. So how do you do that? I'm on the summary page, and I click on this gear icon; I'll just show it with the CPU Usage tool. If I click this, I get this properties page, right? And essentially, this adjusts the sampling rate: how much data am I capturing per second? Like I said, we try to do the best we can with reasonable defaults; I personally haven't had to mess with this much.
Of course, I'm not doing investigations as complex as our customers', but if you do need to change the sampling rate, it is here, and this is one area I might start with. We have low, we have high, and we also have a custom option. For now, I'm just going to leave it on the default, but again, I just wanted to call that out: changing the sampling frequency might be a place to start. So can you change the sampling rate via the Diagnostic Tools window? Yeah, it brings you back to the exact same Profile Properties page, and you can do that as well. Yep. Sweet. So we talked about adjusting the sampling rate there. The next thing I want to talk about is keeping your trace duration short. There's not a whole lot to show here, but I'll run through a very quick demonstration. When I check the CPU Usage tool, I can start the collection in the profiler. The key thing to note is that while this is collecting, and my app is going to load up in a second here, it's collecting a ton of data; I just went ahead and stopped it. Basically, I would recommend you keep your trace duration under five minutes. We recommend that to customers because the profiler is pulling in so much data that if you keep the trace going for a long time, it's going to take forever to bring up the results. That one was literally about a second, so obviously it loaded very quickly; it doesn't need to be that short, of course, but my point is, just try to keep the trace duration fairly short. And then the next thing I want to talk about is a concept we call "just my code" internally. At a high level, it's really the concept of user code versus external code, and many tools within the profiler have this concept of user code versus external code.
And essentially user code is anything that's built locally, or rather built by an open project or open workspace that you have up, and external code is everything else. Why is this important? At the end of the day, you know, Leslie, something we've been talking about is that we want to create actionable insights for customers, right? Things that they can actually impact and have control over. Examples of external code might be third-party dependencies or libraries, things that you're importing in and don't have as much fine control over, right? So if there's a performance issue with something third party, maybe that's important to call out, but at the end of the day, you don't have control over optimizing it so much. So with showing or not showing external code, or showing just my code, what that allows us to do is collapse all that external code, and I'll show this here in a second, into a single external code frame. And that drastically reduces the amount of data processing that's needed, and it just makes our tools work a lot better. So how would I actually enable that setting? There's a few different ways. Again, we're going to link the docs, which show you as well. But in the case of the CPU Usage tool, you'll see this Show External Code checkbox here. I had it unchecked. So because Show External Code is unchecked, when I come over to my functions, everything is nicely collapsed into this one frame. Had it been checked, there would be a lot more here and it would have taken a lot longer to load. For other tools, like the .NET Object Allocation tool, we have a button that literally says Show Just My Code that does the same thing as unchecking the Show External Code box. But just wanted to call that out.
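The collapsing behavior itself is simple to picture. Here's a toy sketch (Python purely for illustration; the frame names, the `MyApp.` prefix, and the `[External Code]` label are invented stand-ins, and this is not the profiler's actual implementation): runs of consecutive non-user frames in a call stack fold into a single placeholder row.

```python
def collapse_external(frames, user_prefixes=("MyApp.",)):
    """Collapse each run of consecutive non-user frames in a call stack
    into a single '[External Code]' frame, keeping user frames intact."""
    collapsed = []
    for frame in frames:
        if frame.startswith(user_prefixes):
            collapsed.append(frame)                     # user code: keep
        elif not collapsed or collapsed[-1] != "[External Code]":
            collapsed.append("[External Code]")         # start a collapsed run
    return collapsed

stack = [
    "System.Threading.ThreadHelper.ThreadStart",
    "System.Linq.Enumerable.ToList",
    "MyApp.Program.Main",
    "MyApp.OrderService.Process",
    "Newtonsoft.Json.JsonConvert.SerializeObject",
]
print(collapse_external(stack))
```

A five-frame stack shrinks to four rows here; on a real trace with deep framework stacks the reduction is far larger, which is where the data-processing savings come from.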
And then the last setting that I'll talk about is symbol settings. I've been plugging the docs a lot, so why don't I just go ahead and show the docs real quick, just for this one bit? And this is going to be one of the docs that we link here. So this is literally the optimizing profiling settings doc, and it talks at a high level about the different points we already covered. But I want to focus on this last bit, which is the symbol settings section. You can access this right within Visual Studio through Debug > Options > Symbols; if you go through that pathway in the menu, you'll get to this page. Essentially, symbols can give you information that helps you investigate exception issues or allocation issues, but they have a significant impact on how long it takes to generate results for the tools. What's happening in this particular menu is that we have the symbol file (.pdb) locations. PDB files are essentially where the symbols are stored, and symbols are very expensive to load. So what we try to do is cache them as much as possible, so you're not having to constantly reload them. However, over time, symbols can really slow things down. If you're struggling with symbol loading, you might want to consider turning off some of these symbol servers. In this particular picture, these symbol servers are off, but you might also want to empty the symbol cache. For the profiler, the way it's designed right now, regardless of your symbol loading preference here, it always has to load all symbols, which can be pretty taxing. But sometimes you want to just rely on local symbols and not have to load all this stuff.
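The caching idea behind the symbol cache can be sketched in a few lines. This is a conceptual model only (Python for illustration; the module names and the one-counter cost model are invented, and real PDB resolution is far more involved): only the first request for a module pays the expensive fetch, and every later request hits the cache.

```python
class SymbolCache:
    """Toy model of a symbol (.pdb) cache: the first request for a module
    pays the expensive 'server fetch' cost; later requests are cached."""

    def __init__(self):
        self._cache = {}
        self.loads = 0   # how many expensive loads actually happened

    def resolve(self, module):
        if module not in self._cache:
            self.loads += 1                       # simulate a slow server fetch
            self._cache[module] = f"{module}.pdb"
        return self._cache[module]

cache = SymbolCache()
for module in ["MyApp", "System.Core", "MyApp", "MyApp"]:
    cache.resolve(module)
print(cache.loads)  # 2: only the first hit per module was expensive
```

This is also why emptying the cache is a deliberate tradeoff: it can fix a stale or bloated cache, but the next session pays the full load cost again.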
So again, maybe this is not the first place to look for optimizing settings, but it's something to consider; check out our documentation, plugging it here again. But yeah, that's the last one. So we technically talked about two different tools, one being the performance profiler and the other being the Diagnostic Tools window, but each of those had many tools within those larger windows, right? So what is the biggest takeaway you'd want viewers to get from this very broad tour of those two major tools that we're gonna dive into in later parts? Well, I hope, to your point, that they feel like we have a pretty comprehensive tool set, right? We have a lot of options for you, and hopefully you feel that regardless of the nature of your performance investigation, we have some tool for you somewhere within the Diagnostic Tools window or the performance profiler. And if you feel like we don't, we'd definitely love to hear your feedback and work on that, right? We're constantly trying to make sure we have a very comprehensive experience, so we'd absolutely love to hear that feedback too. And not only do we have a bunch of tools, you can use them in tandem, right? We saw that in the performance profiler, whether it's checking multiple boxes and running a few tools side by side, and we saw the Diagnostic Tools window bridging the gap between the debugging world and the profiling world, right? Using multiple of those tools together, like the CPU Usage and Memory Usage tools in that window with time filtering, or going from breakpoints to the CPU Usage tool. So essentially, yeah, to recap the big takeaways: hopefully you feel like we have a lot of tools that work well for you, that you can use them in tandem, that we have adequate documentation, and that you can optimize the settings to your needs, of course.
That's great. So from there, as a mini teaser, what are we gonna talk about in our next part? Yeah, absolutely. So this episode was all about tools, right? And a little bit of tips here and there with settings. What we wanna talk about next, though, is how you interact with these tools outside of VS, and also taking this to the next level: how do we profile in production, right? I think that's definitely a question that comes up a lot, and it's very, very important for obvious reasons. A lot of the developers we work with, you know, work in production environments, and they wanna figure out how to use our tools not just within the context of VS itself, but outside as well. So our great colleague Esteban is gonna be working on that next episode, and we're excited for that. But yeah, we'll be getting to that next. Should be fun. So thanks for joining me once again, Sagar. Great to be here, Leslie, thanks for having me. Yeah, and hopefully everybody has been enjoying this series so far and is ready to take their profiling to the next level. So join us next time for part three. Until then, happy coding.