Hi, welcome to Visual Studio Toolbox. I'm your host, Robert Green, and joining me for part two of our discussion on building high-quality Visual Studio extensions is Omer Raviv of CodeValue, authors of the OzCode extension. In part one of this discussion, we talked about building more testable and more stable extensions, and in this part, we're going to talk about making them more performant and more memory efficient. So again, continuing our discussion on best practices for building extensions. But so far, a lot of what you've been talking about is just good practices for building high-quality code in the first place. Absolutely. Whether or not you're building an extension. Absolutely. It's just that in Visual Studio extensions, the bar is set a lot higher. Right. And particularly with what we're going to be talking about today, performance and memory efficiency, there are different constraints in Visual Studio extensions. Visual Studio is a 32-bit process, and we're sharing that process with a lot of other people. We're sharing it with Roslyn. We're sharing it with other Visual Studio extensions. Everybody goes to that .NET garbage collector and says, gimme, gimme, gimme, I need some more. And if you don't treat that garbage collector nicely, it will eventually create a very bad experience for the end user, where you slow down, you freeze. And that's the worst thing a Visual Studio extension author has to deal with, right? You do all this work, you create a really great Visual Studio extension, and people have such a low bar nowadays for things being slow, because they're used to their phone just going like this, that if your Visual Studio extension lags just a tiny bit, the user's itchy trigger finger on that uninstall button is really fast. So today, we're going to talk about making our extensions performant and memory efficient.
And you'll notice that in Visual Studio 2017, if your extension slows down particular things, like the initialization of Visual Studio, or when the user types into the editor and you get the buffer change event, or a tool window opens up, if you are too slow in doing any of those things, Visual Studio will actually tell the user that the extension is running too slowly. And you don't want that. So there are two ways to deal with that. First of all, you want to delay your extension load as much as possible. OzCode is a good example of this. OzCode is only helpful when you have a C# project in your solution, and it's only helpful when you're debugging. So up until the point where you've actually started debugging a C# project, we won't load. The way we do that is with UI context rules, which are attributes just like these ones right here, which we apply on the Visual Studio package to tell it what's what. The second thing you want to do is load your package in the background using the async package pattern. What that means is that for the initializer of our Visual Studio extension, instead of Initialize, we have InitializeAsync. We derive from AsyncPackage, and we have async code here that basically returns a task that's completed once our Visual Studio extension is up and running. So how do you manage this? Delay-loading your extension means that when I start Visual Studio, you haven't loaded yet, because I may or may not be using you. Yeah. That's probably a good thing. But then when I need you, you're not loaded yet, so all the delay in loading occurs right when I need you most. How do you manage that? How do you balance that? If you just started debugging right now, your finger went for that F5 button, it'll probably take more than a few seconds until you actually use the features of the product. And a UI context rule can be very fine-grained. It could be when the solution is loaded initially.
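The combination of delayed loading and the async package pattern described above can be sketched roughly like this, assuming the Visual Studio 2017 (15.x) SDK. The GUIDs, the UI context identifier, and the WarmUpCaches helper are hypothetical placeholders for illustration, not OzCode's actual code:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;
using Microsoft.VisualStudio.Shell;
using Task = System.Threading.Tasks.Task;

[PackageRegistration(UseManagedResourcesOnly = true, AllowsBackgroundLoading = true)]
[Guid("11111111-2222-3333-4444-555555555555")] // placeholder package GUID
// Delay loading until a (hypothetical) UI context rule is satisfied,
// and load in the background rather than on the UI thread.
[ProvideAutoLoad("22222222-3333-4444-5555-666666666666", PackageAutoLoadFlags.BackgroundLoad)]
public sealed class MyExtensionPackage : AsyncPackage
{
    protected override async Task InitializeAsync(
        CancellationToken cancellationToken,
        IProgress<ServiceProgressData> progress)
    {
        // Do expensive setup off the UI thread first...
        await Task.Run(() => WarmUpCaches(), cancellationToken);

        // ...then switch to the UI thread only for the parts that need it.
        await JoinableTaskFactory.SwitchToMainThreadAsync(cancellationToken);
    }

    private void WarmUpCaches()
    {
        // Expensive, UI-thread-free initialization goes here.
    }
}
```

The key design point is that the returned task completes once the extension is up and running, so Visual Studio can keep the UI responsive while initialization proceeds in the background.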
It could be when you start debugging, it could be when a specific project type has been loaded, et cetera. So you have very fine-grained control. You can write very elaborate UI context rules to know exactly when you want to load, and you want to aim for just before the user might want to start using you. And the Initialize method of your package should take no longer than a very few moments. But even if it takes just a second or two, if it's running on the UI thread and you're not using the async package pattern, Visual Studio will tell the user that you're slowing them down. And the most unfortunate thing is that it might not even be your fault, because if you're using the old pattern of initializing, and you're asking Visual Studio for the different MEF components you need in order to initialize your package, then because of things like UI thread re-entrancy, you might trigger the whole tree of MEF components loading at that particular moment, just because the order in which packages are loaded into Visual Studio is not deterministic and you might just end up being the first one. So when you ask for dependencies, Visual Studio has to say, oh, you want that? Okay, I'm going to do that, hold on. And that all adds up to the user seeing that message about your extension. Okay, so that was dealing with loading our extensions. The other thing is, how do we make sure that the way the extension operates is fast enough? For that, we have another open source library that the OzCode team has published, which is called Visual Studio Exploration Tests. This is a really good way to make sure that your Visual Studio extension is performing well. Basically, I think there are two sources of uncertainty when you're writing a Visual Studio extension. One source of uncertainty is what other Visual Studio extensions are installed.
That might cause performance issues, and it will also very likely cause crashes you didn't expect, because you personally are not using the same extensions your users are. And when you get crashes, you'll want to look up the episode we did before, which has some advice on how to deal with that. The other source of uncertainty and fear and doubt in a Visual Studio extension is that you never know what the code of the people who are going to be using your extension actually looks like, right? People have a lot of really crazy code that does a lot of really weird and interesting stuff with a lot of very weird and interesting platforms. And no matter how much emphasis you put on testing, you could not possibly bake that infinite number of possibilities into your performance tests to make sure that your extension performs adequately in all possible scenarios. So luckily, there's actually a website on the internet, you might have heard about it, it's called GitHub, and it has a lot of C# code on it. And what we've done with the open source library we've released, the Visual Studio exploration tests, which I will show you right now, is we've released a library that knows how to download open source C# projects off of GitHub and then run some tests on them, to check their performance or, actually, whatever it is you want to do with them. Wait, say that again? For example, let's go here, we have a repository. Right here, I'm just creating this with some code. You can also create a JSON file and just deserialize it, which might be a bit more convenient. So this is an example of an open source project that's on GitHub: ILSpy, which is a really neat WPF-based IL decompiler, just like Reflector. So we're giving the Visual Studio integration tests framework the clone URL for that GitHub repository.
We're giving it a specific commit hash, because we want the tests to run against the same exact version of the source code each time we run them. And we give it the path to the particular SLN file. So this is bringing down code that I'm going to test. Your extension on. My extension with, because what this is doing is giving me a broad range of other people's code, code that people might be running. And so it gives me a way of testing the extension in a variety of environments, the environments being projects that people might actually be using in real life. Absolutely. Got it, okay. Perfect. And you chose these because they cover a breadth of things people are doing. Yeah, I mean, in this example I just showed you ILSpy, but on the OzCode team, we try to take the biggest variety we can. So we have WPF apps and WinForms apps and console apps and .NET Core and .NET Standard 2.0 and all these different types of technologies, so that we cover as many scenarios as we possibly can. All right, cool. Okay, and what would you want to do with this? So there are several things you could do. The first thing is performance testing. I'm going to show you a sample of how we do performance testing in OzCode. OzCode being a Visual Studio extension that has to do with debugging, one of the things we're most concerned about is F10 performance. We want to make sure that when you hit F10, we don't lag or slow you down in any way. And we're even more crazy about that than you'd think, because if you hit F10 really fast, then OzCode needs to know this is a fast step, so we're going to cancel whatever we're doing and just let you get on with your life. And if you're stepping through slower, then we need to make sure that we do get all those visualizations up, but we don't create any slowdown or make you feel like your UI thread is stalling. So what we have here is, we're again using the host type attribute.
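The repository description the demo walks through — clone URL, pinned commit, solution path — might look something like the following sketch. The type and property names here are assumptions for illustration, not the library's actual API (which isn't shown on screen); the commit hash is a placeholder:

```csharp
// Hypothetical description of one GitHub repository for the test
// framework to clone and build. Pinning a commit hash keeps every
// test run against the exact same source.
var repository = new TestRepository
{
    CloneUrl = "https://github.com/icsharpcode/ILSpy.git",
    CommitHash = "0123456789abcdef0123456789abcdef01234567", // placeholder
    SolutionPath = @"ILSpy.sln" // relative to the repository root
};
```

As mentioned in the interview, the same data could equally live in a JSON file that gets deserialized, which may be more convenient when you maintain a long list of repositories.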
And if you don't know what that is, you probably want to check out the previous video in this series, but basically what it means is that when we run this test, it loads up an instance of Visual Studio and the tests run within that particular instance. And this test property says which Visual Studio hive this is going to use. A Visual Studio hive is basically sort of like a copy of Visual Studio. This is one of the first things you learn as a Visual Studio extension author: when you hit F5 the very first time you build a Visual Studio extension, that extension isn't loaded into the same Visual Studio that you're using to build it, because that would be a really- It's a test instance. Yeah, exactly. It's called the Visual Studio Experimental Instance. Experimental Instance. Exactly. It's sort of like a shallow copy of your Visual Studio. It has its own configuration, it has its own set of installed extensions, and so on and so forth. Now, what we're doing here is we want to run this performance test in different environments. So first off, just to get a baseline, we're running it on a hive that has no OzCode installed, because we want to know what we're comparing ourselves against, right? Okay. There's some sort of baseline. And this is a test that actually tests OzCode, so it's run on a Visual Studio hive that has OzCode installed on it. Do you define that hive ahead of time? So yeah. Okay, how do we start up a hive? If I run devenv /RootSuffix NoOzCode, it will create a new Visual Studio instance, and I can give it whatever name I want, basically. I can call it OzCodeInstalled. I can call it OzCodeWithReSharper, or some other extension that I want to make sure I'm performing well with. And then the new instance of Visual Studio comes up, and I load whatever extensions I want onto it, whatever configurations.
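For reference, creating and launching hives from the command line uses the real `devenv /RootSuffix` switch; the hive names below are just the examples from the conversation, and you are free to pick your own:

```
REM Launch Visual Studio under a named hive; the hive (its own
REM settings store and extension set) is created on first use.
devenv /RootSuffix NoOzCode
devenv /RootSuffix OzCodeInstalled
devenv /RootSuffix OzCodeWithReSharper

REM The default debug hive used when you F5 an extension project:
devenv /RootSuffix Exp
```

Once a hive is up, you install whichever extensions you want into it, and the test framework can then target that hive by name.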
You do that part manually. Yes. Got it. Though you can automate that as well, and we might contribute some more things to show you how to do that too. But now I'm saying run the test, and I've got these different test aspects on it. So what's a test aspect? A test aspect is basically just something you can implement to do whatever you want. So your test doesn't necessarily have to do with debugging; you can use the debugger or not. You can do something when the test starts, when the test finishes, et cetera, et cetera. You can have a test that steps over code, but this is an OzCode thing, you can just ignore that. So for OzCode, what we do is we just step over a bunch of code. And since in our case this is a performance test, we have an aspect that we wrote, which is not part of the open source general solution, that records how much time it took to step over each line of code and makes sure it's not above the baseline, right? Above how long it would take to step over that line of code if you didn't have OzCode installed. If we do find out we have a performance problem, we have another aspect here, which is the dotTrace aspect. I'm personally using the dotTrace .NET performance profiler, but you could use this with any other performance profiler, and we're actually accepting pull requests for other profilers as well. The idea is that you can run your performance test with the .NET profiler attached, so that if your build server is running your performance tests and figures out, hey, there's a regression, something's going too slow, you can actually have that profiler snapshot as an artifact from the build server. That's what we do on our team, and you can immediately look into the performance profiler and see what the problem is. So that's how we roll.
Whenever somebody commits a new check-in to the OzCode repository, we have a VM on Azure that loads up and runs all of our performance tests, and if something's going too slow, we immediately get a profiler snapshot and we can see exactly where the problem was. The other thing you can do with this idea of Visual Studio exploration tests, tests that grab open source code off of GitHub and test against it, is Roslyn-based stuff. For example, in OzCode we have a lot of Roslyn refactorings and code fixes, stuff that manipulates code with Roslyn. For example, we have the new LINQ debugging functionality in OzCode, which needs to take a LINQ query and do, basically, open-brain surgery, a bunch of syntax rewriting, on it, so that we can give you really good visualizations on top of that LINQ query. But we couldn't possibly foresee all the different crazy ways people might write LINQ queries. So what you can do is run a test for all your rewriters against all the different source code that you have, because one of the nice properties of a refactoring, if you're doing a refactoring with Roslyn, or a code fix, is that you have a basic requirement: the code should compile both before and after you apply your refactoring. And that's pretty hard to get right; it's really hard to consider all the different edge cases in all of people's code. So another thing that we do with this framework is we take all of our Roslyn refactorings and code fixes, we apply them wherever they could possibly be applicable in the open source code that we download off of GitHub, and we run them on each and every one and make sure that the code still compiles after applying those code fixes. Okay, cool. So we've talked about making extensions performant, but now let's talk about the real nitty-gritty, which is: how do we make our extensions memory efficient? That's extremely, extremely important.
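The "compiles before and after the rewrite" check described above can be sketched with the real Microsoft.CodeAnalysis.CSharp (Roslyn) API. `ApplyMyRewriter` below is a hypothetical stand-in for whichever refactoring or code fix is under test, and assembling `references` for a real project is more involved than shown here:

```csharp
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

static class RewriteCheck
{
    // Returns true if the syntax tree compiles with no errors against
    // the given metadata references.
    public static bool CompilesCleanly(SyntaxTree tree, MetadataReference[] references)
    {
        var compilation = CSharpCompilation.Create(
            "RewriteCheck",
            new[] { tree },
            references,
            new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

        return !compilation.GetDiagnostics()
            .Any(d => d.Severity == DiagnosticSeverity.Error);
    }
}

// Usage sketch: parse, verify, rewrite, verify again.
// var original  = CSharpSyntaxTree.ParseText(sourceText);
// Assert.True(RewriteCheck.CompilesCleanly(original, references));
// var rewritten = ApplyMyRewriter(original); // hypothetical rewriter under test
// Assert.True(RewriteCheck.CompilesCleanly(rewritten, references));
```

Running this pair of checks over every applicable site in the downloaded GitHub projects is what turns "the code should still compile" from a hope into a regression test.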
Again, we have a lot of different extensions on that same garbage collector, all having a good time, but all creating a lot of work for that garbage collector. So my first and probably most important tip about dealing with memory in Visual Studio extensions is that you want to keep your Diagnostic Tools window open. So whenever I- So I'll add a beginner tip to that, which is: understand what's in there. Absolutely, so let's talk about that. I remember the first time I saw that window, it came up and I said, oh my gosh, there's a lot of interesting stuff in there, it's in my way, close it. You know, whenever I do a conference talk about the Visual Studio debugger, and I've been going around the world, I just came back from Germany where I talked a lot about the Visual Studio debugger, I start by saying, have you guys seen this window, this Diagnostic Tools window, the first time you started debugging in Visual Studio 2015? Everybody in the room says yes. How many of you understand what's in it? Crickets. How many of you kept it open after the first time you saw it? Have you ever seen it again since? Nobody in the room responds. And that's really- Which is painful for the product team to hear, but that is reality. And it's painful for me to hear also, because in my opinion, this is one of the best things that has been added to Visual Studio, extremely, extremely powerful. You just need to know the power it gives you. And as a Visual Studio extension author, this is a must-have. You cannot, or you should definitely not, attempt to write a Visual Studio extension if you're not aware of the two most important patterns in memory problems, which are memory leaks and GC pressure. So this is a picture of the process, let's zoom in on the process memory part of the Diagnostic Tools window.
So there are two major problems that we might have in a Visual Studio extension, and it's really easy to accidentally put yourself in a situation where you're having these issues. One is a memory leak. A memory leak is pretty simple to understand: the memory usage of our process goes up and up and up and up, and we get occasional GCs, so the yellow here is GC time. The GC has to come into the picture progressively more and more as the memory usage goes up, and it tries to solve the problem, but it can't, because we are keeping hold of references to objects that we don't need anymore. The other, equally important problem in working with Visual Studio extensions is that we often create way, way too much GC pressure. What GC pressure means is that the memory usage of our application stays more or less the same, right? We don't have a memory leak. We are letting go of those objects, but we're just doing too much work. We're creating too many allocations. Our code is too allocaty. I really love using that word. In a code review, I'll sometimes say, that code's a bit too allocaty for my taste. It just creates too many objects, and if we're running inside of a tight loop, we can't afford to keep the GC that busy. A good ballpark number is that if you look at an execution of a .NET application, and the percentage of time spent in GC is more than 10 percent, you should say, hold on a minute, something's fishy, I need to go and look at this. Where do you see that? So you don't actually see the number itself here, but you can train your eyes so that when you see this pattern, too much yellow, you know there's a problem. You can look up the actual number in the Perfmon tool that comes with Windows if you want. Okay. So, dealing with GC pressure. Let's talk about both issues separately. How do we deal with memory leaks?
How do we deal with GC pressure inside of our Visual Studio extensions? Let's start with GC pressure, because GC pressure is the one we're most prone to cause in Visual Studio extensions, since we're dealing with syntax trees, we're dealing with code; there's usually a ton of strings that represent method names and namespaces and classes and all that stuff. So, first tip: always be measuring. How much are we allocating? What is that percentage of time spent in GC? Open up Perfmon and look at it. Modern .NET memory tools will tell you if you have a problem where you're keeping many instances of the same string in memory, in which case you can use a string intern pool. A string intern pool is basically the idea that since strings are immutable in .NET, if we have many instances of the same string, we don't need a million copies of the word System or mscorlib inside of our .NET application or our Visual Studio. The other thing you can do is use a Visual Studio extension that will actually highlight code that is too allocaty. There are different patterns of using C# that cause all these subtle allocations you might not realize are happening. For example, whenever we use a lambda expression and we capture a local variable, the compiler actually has to capture that variable, and behind the scenes it generates something that allocates an instance of a class. So if you install the Roslyn heap allocation analyzer, it puts a little green marker wherever there's an allocation in your code, and you can get a better feel for whether your code is too allocaty or not. That code is bad? That code isn't bad per se, but it's allocating, because you're using the variable word inside of this lambda expression, and that means that every single time we go around this foreach loop, we're allocating something on the garbage collector heap. We are creating memory pressure.
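The captured-variable pattern being discussed looks roughly like the following. This is an illustrative example, not the code from the demo; `words`, `items`, and `results` are made up for the sketch:

```csharp
using System.Collections.Generic;
using System.Linq;

var words = new List<string> { "System", "mscorlib" };
var items = new List<string> { "System.String", "System.Int32" };
var results = new List<List<string>>();

foreach (var word in words)
{
    // 'word' is captured by the lambda below. Because 'word' is a
    // fresh local on each iteration, the compiler generates a hidden
    // closure class and allocates one instance of it per iteration --
    // exactly the kind of subtle allocation the analyzer flags.
    results.Add(items.Where(item => item.Contains(word)).ToList());
}
```

In a loop that runs a handful of times this is harmless; in a tight loop over an entire solution's worth of code, those hidden closure allocations add up to real GC pressure.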
So if this is a regular piece of code that just runs every 20 minutes for one second, it's probably nothing to worry about. But if this is a tight loop that's going over all of the code in the user's solution, for example, you want to be aware of these things. Okay. So before we move on, what's the potential fix for that? The potential fix is just: don't use a lambda expression here. You don't need to. That's the easiest one. Okay. So let's talk about memory leaks. The problem with memory leaks is that they're a lot like GC pressure: they sort of sneak up on you, they build up and build up, and eventually they just kill you. But you don't notice there's a problem until it's really too late, in a sense, because as a Visual Studio extension author you're usually writing your extension and opening and closing Visual Studio 20 times a day. But your users are not. Your users probably have the same instance of Visual Studio open day in, day out. They probably leave it open when they go home, they come back the next day, and they keep working in it, hoping that we haven't crashed it, which leads back to the first episode of our series here. So it's extremely important that we have tests for memory leaks, right? Because memory leaks can creep in in very subtle ways without us even realizing, and we can introduce them very easily. They're really hard to detect. So the best way to detect them is by having our continuous integration pipeline tell us when we've introduced a new memory leak. There are two different ways we can do that. A great example of the first one is up on the VsVim GitHub repository. Again, VsVim is one of the best resources you have at your disposal as a Visual Studio extension author. It has an example of how to do pretty much everything you'd want to do in a Visual Studio extension. It's a great reference. One of the things they do there is test for memory leaks using unit tests.
What do we do here? We create a weak reference to an object that we want to make sure isn't leaked. We run whatever test we have, we run the garbage collector, and then we make sure that object is no longer alive once we've finished whatever it is we're doing. So I'd call that the poor man's approach. It's not really the poor man's approach, it's the version that doesn't cost any money. There's another option that usually does cost money, because it's usually an API that your .NET performance profiler will give you. So if you're using Redgate ANTS Profiler or JetBrains dotTrace, or if you're using the Visual Studio profiler, most of these profilers actually also come with an SDK. What that SDK lets you do is start a profiling session from within your code. So you can start the test, and this is especially useful when you're combining it with the OzCode Visual Studio integration test library that we've seen in the previous episode, where what we do is we create a test that goes through an entire user journey of using some feature, and then we go back to some idle state. Every Visual Studio extension has an idle state. That might mean that Visual Studio is open, but all of the text editors are closed. For OzCode, because we're a debugging extension, the idle state means that the user has stopped the debugger. At that point, we know that OzCode shouldn't have any excess objects still in memory. Then you can do a memory assertion. A memory assertion can be something like: make sure there are currently no instances of a given type anywhere in memory, or there are no more than five, or there are no more instances than in the last snapshot I took. This basically bakes protection against memory leaks into your testing infrastructure.
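The weak-reference version of the leak test can be sketched like this, in the style of the VsVim tests (shown here with xUnit). `CreateSubject` and `ExerciseSubject` are hypothetical stand-ins for the object under test and the scenario you run against it:

```csharp
using System;
using System.Runtime.CompilerServices;
using Xunit;

public class MemoryLeakTests
{
    [Fact]
    public void Subject_IsCollectable_AfterScenario()
    {
        WeakReference weakRef = RunScenario();

        // Force a full collection; if anything still roots the object,
        // the weak reference will report it as alive.
        GC.Collect();
        GC.WaitForPendingFinalizers();
        GC.Collect();

        Assert.False(weakRef.IsAlive, "object was leaked after the scenario finished");
    }

    // Keep the strong reference confined to a separate, non-inlined
    // method so the JIT cannot keep it alive on the test method's frame.
    [MethodImpl(MethodImplOptions.NoInlining)]
    private WeakReference RunScenario()
    {
        var subject = CreateSubject();   // hypothetical factory
        ExerciseSubject(subject);        // hypothetical scenario steps
        return new WeakReference(subject);
    }

    private object CreateSubject() => new object();
    private void ExerciseSubject(object subject) { /* drive the feature */ }
}
```

The non-inlined helper is the subtle part: without it, the JIT may legitimately keep the local `subject` alive for the duration of the test method, producing a false leak report.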
Another very cool thing you get out of that is that if we did accidentally introduce a memory leak, you can have your continuous integration server create a snapshot as an artifact of that test. So that's a great experience. Your team will thank you for it, and your team will also appreciate how important it is to have these tests when they see that every single time they introduce a new memory leak, they get a notification from the build server that the build failed, and they can just click on a file that opens up the memory snapshot in the memory profiler and see exactly how and where they introduced a memory leak into the extension. That's it. We're finished. The last thing I want to say, just to sum up, is that we talked about a lot of different patterns, and somehow everything led back to the concept of testability and having a good, true continuous integration and continuous delivery pipeline for your Visual Studio extension. I want to finish off with this quote from Martin Fowler, which I absolutely love. He defines this idea of continuous delivery as something where a business sponsor could request that the current development version of the software be deployed to production at a moment's notice, and nobody would bat an eyelid, let alone panic. Let alone burn the building, I'd say. We're not quite there yet. I don't just deploy a new version of OzCode willy-nilly without thinking about it, but I do feel that I have a lot of trust nowadays that if I do that, our tests, our performance tests, our memory leak tests, give me the assurance that the software is working correctly, is performant, is stable, is memory efficient, and all that. As an extension builder, you would be okay getting to a point where you're delivering and updating on a fairly regular cadence, like Visual Studio, and what do we have, three-week sprints or something?
It seems like every three weeks there's a new version. There's 15.3, then there's 3.1, 3.2, 3.3, I don't know what we're up to now, I think we're up to 3.5 as we're filming this. Pretty soon there'll be a four. So every few weeks, which I think is a great thing, because what's being delivered is either new features or bug fixes, both of which I'm kind of happy with. So as an extension developer, are you kind of moving towards that same cadence? Absolutely, and as an extension author, you actually get a really great thing for free. If you're deploying your extension as a VSIX file and not as an MSI, Visual Studio now actually works kind of like Google Chrome, or like apps do on your iPhone or Android phone, in that extensions update themselves without you even noticing. Most people aren't even aware of this, but if you're using VSIX deployment, and most extensions are deployed as VSIX files, if you close Visual Studio and then come back tomorrow and open it again, you might actually have a newer version of the extension installed, because Visual Studio took care of that for you behind the scenes. So absolutely. Cool. All right. So, two episodes on writing high-quality extensions, lots of great tips and tricks, not only for extension builders, but even for folks who aren't. Absolutely. So we got a lot out of that, enjoyed it, and we will see you next time on Visual Studio Toolbox.