Good morning and welcome to another episode of the Visual Studio remote office hours. I'm your host, Mads Kristensen, and today I'm very excited because we're doing a very geeky deep dive into the inner workings of Visual Studio. And this is just something for anyone that's been using Visual Studio for a long time; you might have been wondering, well, how does Visual Studio actually work? What happens behind the scenes, under the hood? And we're going to find out today, because we are in the presence of the great Andrew Arnold, who is quite the institution on the Visual Studio engineering team. Andrew, welcome aboard. I'll let you introduce yourself. Oh, thanks for that very articulate introduction. Yes, my name is Andrew Arnold. I'm on the Visual Studio platform team, so I do less of what you can see as you're using Visual Studio and a lot more of what supports other people who develop what you can see. And I've been with Visual Studio for 12 years. And before that, I was on the .NET Compact Framework team for a couple of years, so I've always been behind the scenes. I do business logic better than I do UI. All right, yeah, I think there's a lot of people that can recognize that business logic over UI. So let's just get straight to it. Some years ago, I was made aware of this thing that I don't quite understand, and I think a lot of people might find this interesting. Something happens when we start Visual Studio. So just to be clear: when we start Visual Studio, we start an executable called devenv.exe. And it's something about it being a native process that boots the .NET Framework inside of it, or something like that, very special. And I never quite understood how this all worked. And so I was hoping, Andrew, that you could talk about what actually happens when we start devenv. Sure. So I'll even start by picking at that: your assumption is that we start devenv.
A lot of people don't know, because devenv is what they install, but that's what we call a stub. And we actually have several of them. Back a few years ago, we used to have several smaller profiles, I forgot what we called them, different SKUs of Visual Studio, like Web Express, C# Express. These didn't start with devenv.exe; there was VWDExpress. And even now, although we've gotten rid of the Express ones, last I heard anyway, there still are other ones, depending on what you happen to install. And if you actually look, devenv.exe is quite small relative to the size of Visual Studio. So actually, most of the meat of what we do, that stub does a few very specific things that only devenv.exe should have, but mostly it hands off to another file that's in that same directory called msenv.dll. And that's considerably larger. The PDB, we don't install it, but the PDB is hundreds of megabytes. It's quite the code base. In fact, it's so large that when we are working in it, we don't want to compile and link the whole thing every single time. So we actually compile it into half a dozen libs, and then we link the libs together to just cut down on the incremental build time. So now you asked about the process: we parse the command line, obviously, and that's all just regular C++ code. And a lot of that has been around for over a decade. I mean, Visual Studio or its predecessors have been around for a long, long time. Now the CLR, you may have heard, if you've done any kind of managed-native mixed mode processes, the CLR has a feature called It Just Works, IJW, that's actually the name, where you can load a native module that has managed code in it and it'll just magically load the CLR. And we probably have some of those too, but the way we load the CLR is interesting. So in case I haven't made this clear: Visual Studio is not a managed application. It's a native application.
And the difference, the significant difference, is if you compile your own WPF application, you can open that up in ILSpy or .NET Reflector and actually look at all the code, and it's all C# code or VB code or whatever you came from. But when you launch it, there's a little header in that EXE file that tells Windows, this is a .NET Framework application. And so it goes and finds the .NET Framework and lets it basically grok the whole image that was your application, and the JIT runs and turns it into machine code, and then it loads it. But the key point being, the .NET Framework controls the process from the very get-go. With Visual Studio, like any other native application, there's no .NET Framework initially. The process loads, it does whatever it wants in C++. And then there's a few ways that a native process can host, as we call it, the runtime. And it could be the .NET Framework, it could be .NET Core. Both of these runtimes allow hosting inside of a native application, which is incredibly useful when you have an existing native application and you wanna actually start leveraging some of the abilities that the .NET Framework or .NET Core bring to you. So anybody can do this; they're properly supported and actually surprisingly simple hosting APIs, so that you can get your first managed DLL loaded. At that point, it's all very C-style interop. Managed has its very, very rich APIs with classes and everything. And in the native code, we have classes, but at the interop layer, it's very much like Win32 APIs. The hosting API does not assume that you're even in C++; any C program can access its APIs. And so you call these hosting APIs and you can say, hey, load this managed DLL. And that first call will cause the runtime to load into your process and set up everything. And it'll load that one DLL and hand you back a handle, a native pointer thing, to whatever the CLR is using to track that assembly.
And then you can say, hey, I want you to execute this static method defined on this fully qualified type. And you can just execute that. Now, that static method could return anything. It could return an integer so that your native code can then go and do whatever it is gonna do with it. Or it could be that it returns something rich, like an object, in which case you're going to get a CLR handle to that managed object. And so we have lots of this kind of code in our native layer. And I'm explaining it this way first because this is probably gonna sound more familiar and more intuitive, but there's a whole different way of hosting the CLR, and that's actually what Visual Studio uses. If you're familiar with COM, there are COM-activatable objects, where these objects are actually registered with the system or in the process, so that when you ask COM to activate this thing, COM has an idea how to activate it. Do I load the DLL? What's the entry point to that DLL? Where in the world can I find this type? COM has all sorts of rules for discovering all of these things. And it turns out, I mean, COM was designed to allow any sort of language to expose COM objects. And if you've been a Visual Studio extensibility developer, you know that Visual Studio is all about COM. We're also all about .NET, but at the root of it was COM. Before there was .NET, Visual Studio, or its predecessors, existed. And we used COM all over the place because it allowed us to componentize different pieces and take in extensions and activate them. And it was very, very flexible. Well, the way we host the CLR in Visual Studio, we first call a couple of the hosting APIs to say, hey, when and if you ever load the CLR, we want you to let us know. And by the way, we want to tweak things; the hosting APIs offer you ways to take special control.
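The flat, C-style hosting call described here, strings in, an integer out, a status code as the return value, has roughly this shape. The real API is the Windows-only CLR hosting interface (ICLRRuntimeHost::ExecuteInDefaultAppDomain); the sketch below is a portable mimic of that calling convention, with invented type and method names, not the actual Microsoft API.

```cpp
#include <cassert>
#include <map>
#include <string>

// Portable mimic of the flat shape of ExecuteInDefaultAppDomain: fully
// qualified strings in, an integer out parameter, an HRESULT-like status
// code as the return value.  No classes cross the boundary.
using HResult = long;
constexpr HResult kOk = 0;         // stands in for S_OK
constexpr HResult kNotFound = -1;  // stand-in failure code

// Pretend "managed" static methods, keyed by "Type.Method" (names invented).
static int ReturnFortyTwo(const std::string&) { return 42; }

static std::map<std::string, int (*)(const std::string&)> g_methods = {
    {"VsShell.Bootstrapper.Start", &ReturnFortyTwo},
};

HResult ExecuteStaticMethod(const char* typeName, const char* methodName,
                            const char* argument, int* returnValue) {
    auto it = g_methods.find(std::string(typeName) + "." + methodName);
    if (it == g_methods.end()) return kNotFound;
    *returnValue = it->second(argument ? argument : "");
    return kOk;
}
```

The point of the flat signature is exactly what Andrew says: a plain C program can call it, and all the richness of the managed side stays on the other side of the boundary.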
So we control the AppDomain manager, which, generally speaking, a managed app can do through the config file, but a native app that's hosting the CLR has to do programmatically. So we say, hey, when and if you load the CLR, just let us know, because we want to customize some things. But we don't actually say, load the CLR. Then as Visual Studio goes, at its very, very core, we have what we call packages, which is basically a co-creatable COM object. So Andrew, just before we get there, kind of to summarize here, I think a couple of things were super interesting. First of all, Visual Studio, which is maybe the biggest WPF application in the world, is actually native when you peel away the layers, which is kind of an interesting thing. But you mentioned Win32 and COM, so I assume that Visual Studio has these very strong dependencies on those technologies. And is that the main reason Visual Studio is Windows only? Like, we can't split Win32 and COM out of Visual Studio very easily at all, or maybe at all. So moving Visual Studio off of Windows isn't something that I've given a great deal of thought to, but yes, Microsoft only supports COM on Windows. .NET Core supports COM, but only on Windows. I don't know if Wine on Linux would give you some sort of COM abilities, but yes, Visual Studio does have a lot of deep Windows ties; we call a lot of Win32 APIs. Okay. Good question. So basically, in the process of VS startup, we have a list of packages that have to be loaded in a particular order because they're part of our bootstrapping. And one of the packages, it's not the first one, but somewhere in that list, is a co-creatable COM object that happens to be implemented in managed code. And I say it happens to be, but we know which one it is. But by just asking COM to co-create this COM object, COM looks at the registry and says, oh, to activate this, I need to go load mscoree.dll and ask it to activate it.
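That activation path, a registration lookup that, as a side effect, pulls the runtime into the process on first use, can be simulated in portable C++. This is only a sketch of the pattern; the real mechanism is Windows COM activation through the registry and mscoree.dll, and the CLSID below is invented.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>

// Simulation of registry-driven activation: the caller only names a class
// ID; the "registry" knows how to create it, and if the class is managed,
// the stub loads the runtime lazily, the first time it is needed.
static bool g_runtimeLoaded = false;  // stands in for "is the CLR in-proc?"

struct Registration {
    bool isManaged;                     // does activation require the runtime?
    std::function<void*()> factory;     // how to create an instance
};

static std::map<std::string, Registration> g_comRegistry = {
    // Hypothetical CLSID for the first managed bootstrap package.
    {"{BOOT-PKG}", {true, [] { return static_cast<void*>(new int(1)); }}},
};

void* CoCreateSim(const std::string& clsid) {
    auto it = g_comRegistry.find(clsid);
    if (it == g_comRegistry.end()) return nullptr;
    if (it->second.isManaged && !g_runtimeLoaded) {
        g_runtimeLoaded = true;  // the runtime spins up as a side effect
    }
    return it->second.factory();
}
```

The caller never says "load the CLR"; it just asks for an object, and the runtime arrives because the registration demanded it, which is the behavior the transcript describes.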
Well, that DLL is the stub that activates the .NET Framework in your process. So the .NET Framework then loads, it calls back our little callback that we asked for, and then .NET will create that first package that is in managed code. And where I said before that with those other hosting APIs, all you can do is very, very primitive calls to a static function inside of a .NET type, this is COM. So now we're not restricted to static functions; we have an interface. We define all of our COM interfaces in IDL, the Interface Definition Language, I think is what it stands for. But we co-create it, we cast to this IDL-based COM interface, and then we can call whatever we want on it, and IVsPackage is the interface. And now we've got .NET running, and it turns out that at that point, that .NET Framework DLL that was loaded can depend on anything else; it can go and activate other things. Basically, .NET is now a full-fledged part of your process, and it can do whatever it wants, without the restriction that you might assume, like, well, it's native, so .NET can't do anything unless native code says so. No, no, it's just .NET; you can do whatever you want in it. The native layer has a little bit of say in overriding certain things, but even that's fairly restricted in what we have influence over. Okay. All right, so that's interesting. So Visual Studio doesn't on its own load the .NET runtime; basically just by loading that package it happens automatically, like Windows makes that magic work, basically. Do I understand that right? Yep. Okay. If you have multiple instances of Visual Studio open, maybe even multiple versions, Visual Studio 2017 and 2019, and you have them both open, do they share anything? Like, do they both have to instantiate the .NET Framework within their individual processes, or is there some sort of sharing of something across?
So a long time ago, back in, I'm not sure what year, I think Dev15, so Visual Studio 2017, I think was the first one that got away from the GAC. Is that right? Yeah, I think it was. So before Visual Studio 2017, a good chunk of our managed code was shared by virtue of Visual Studio just always installing almost everything in the GAC. And that is not something we encourage customers to do, and we don't even do it ourselves now. At the time we did it because, as you mentioned, we are a very, very large WPF application, and there were some performance optimizations that the .NET Framework offered but were only accessible if you were in the GAC, is my understanding of it. And when I say performance, I don't necessarily mean speed. When we look at performance in Visual Studio, we're looking at speed, we're looking at memory pressure. And it turned out that, at least back in the day, with certain settings, and I don't know the very core details, but if you NGen'd a DLL, which we were doing a lot, unless it was also in the GAC, the CLR would load both images into memory. Now for most processes, this isn't a big deal, but Visual Studio is a 32-bit process, and we have a lot of binaries. We don't want to fill most of that 32-bit process space with a bunch of not only DLLs, but duplicate DLLs, because we want that memory available for users, for the data, for the projects and the solutions that they open, and for the analysis and other value-adds that we have. So we would do things to share: by installing in the GAC, we would share DLLs on disk. But that was very painful, and we really wanted to be better examples to our customers as well. And mostly, I think, we wanted to improve the acquisition story of Visual Studio. Those of you who've been around long enough remember, Visual Studio used to take a very long time to install, and a very long time to update.
And now we've got this wonderful installer that will install Visual Studio in minutes, and update in a minute or less. It's really, really great, and all of that was possible only because we got away from sharing almost all of our files. We still have a few MSIs that we have to install. SDKs tend to be among them; like if we have to upgrade the .NET Framework or a .NET SDK on your box, that's a global install. So Visual Studio will share that with other copies and versions of Visual Studio, but almost everything is in that one Visual Studio directory. So if you load Visual Studio twice, whether or not it's the same version, I mean, if it's a different version or a different installation, you're gonna read from disk at different locations. But once it's in memory, I believe if it's the same place on disk, Windows as a kernel will do optimizations to make sure it only loaded the DLL once, and it maps it into both processes. But as far as the .NET Framework is concerned, anything that it had to JIT, it JITs in both processes individually, regardless of whether they were the same file on disk. We try to minimize our JIT cost. But yeah, does that answer your question? Yeah, it does, it does. So that's interesting. I remember when we got out of the GAC, I think, and correct me if I'm wrong, that was the same time we got out of the Windows registry. It used to be that Visual Studio had so many registry keys that it would actually make searching through the registry slower, I think, because it was just a massive amount of stuff we put into the registry. All the settings, all user settings and global settings and everything was stored in there. And we moved away from that around the same time, and we did something else. What actually happened? So yeah, I can't remember the timeline exactly, but it was at least that timeframe. For a while, Visual Studio used to be a singleton; you could not install two of them within a major version, if I recall correctly.
And so we had a very well-known key in the registry. And then we allowed side-by-side installs. And before we left the registry, we got clever and allowed each installation of Visual Studio to have its own key in the registry. And it was some bizarre random set of like 12 characters or something that we would add to the registry key, which made it hard to know which installation went to which key in the registry. So I think that predated, we were still in the registry, and that predated getting Visual Studio out of the global space. But yes, as part of allowing you to install multiple minor versions side-by-side, or even multiple of the same version side-by-side, we needed to get out of the registry. And we very much wanted to; as you say, we were a humongous bloat in the registry. So yeah, now we are in a private registry, I forgot what they call it, some private registry bin file that's in local app data someplace. And it's a few megabytes large. I don't remember how large it is, but it's still in a directory with that same weird hashy-looking sequence of characters on it. But there's this really, really arcane technology that I don't recommend anybody use. But in Visual Studio, it was critical to our success as a humongous, diverse application, because we had been in the registry so long, we had untold amounts of code, including customer extensions that would run inside of our process and expect the registry to be there, and for Visual Studio's and the extensions' settings to be in the registry. We couldn't just leave the registry without breaking everything. And we try really, really, really hard, even across major versions, to keep as much code of our own, and as many extensions, just working. So we use what's called Detours. It's a technology that will allow you to redirect Win32 API calls to your code.
And so literally, if you'd written a native library or a managed one that directly or indirectly calls Win32 registry APIs and you ran it in your own process, it would access the registry. But if that same DLL, not even a recompile, ran inside the devenv.exe process, depending on which key it was trying to access, it would not actually access the registry. It would be diverted to our private bin file, or any number of other things that we might redirect to now that we've moved out of the registry. Yeah, okay, that's cool. So that's why all my extensions didn't break. Because, you know, you mentioned the IVsPackage interface. Most extensions use a Package class that they derive from as the entry point for their extension. And off of that base class, we actually have something like a UserRegistryRoot property, which returns a registry key. So the whole registry thing is so baked into the API of Visual Studio that everyone, or many people, just use it. And so breaking it was a non-starter, I take it. Yeah. So I take it that you knew that you could detour before you went ahead and did more investigation into this. So I personally didn't, but yes, we've actually used the detouring technology for a long time in Visual Studio. This was applying it to the registry. So it was a trick we already had in the bag. Yeah, okay. So in the beginning, you were talking about how we have a lot of native code in Visual Studio. And you know, once you get into the Visual Studio APIs, if you write extensions, you'll notice that there are some concepts that seem a little bit native. For me as a .NET Framework developer, it seems a little non-.NET, let's say. And I think that's because some of it is just old, right? And there's nothing wrong with it. It's been working for maybe literally two, three decades.
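The registry redirection described above boils down to this: calls that would have gone to the real Win32 API get rerouted, unmodified, to private storage, and the calling code never knows. Here is a portable C++ sketch of that idea using a swappable function pointer; the real Detours library rewrites machine code at the original function's entry point, which is far more involved than this, and the key names below are illustrative.

```cpp
#include <cassert>
#include <map>
#include <string>

// Stand-ins for the system registry and the private hive file that
// Visual Studio redirects to (privateregistry-style storage).
static std::map<std::string, std::string> g_privateHive;
static std::map<std::string, std::string> g_systemRegistry = {
    {"HKLM\\Software\\Example", "system-value"},
};

// The "real" Win32-style registry read.
std::string RealRegRead(const std::string& key) {
    auto it = g_systemRegistry.find(key);
    return it == g_systemRegistry.end() ? "" : it->second;
}

// The detoured version: Visual Studio keys divert to the private hive,
// everything else falls through to the real implementation.
std::string RedirectedRegRead(const std::string& key) {
    if (key.rfind("HKCU\\Software\\VisualStudio", 0) == 0) {
        return g_privateHive[key];
    }
    return RealRegRead(key);
}

// Callers keep using this pointer, unaware it can be swapped underneath them.
std::string (*RegRead)(const std::string&) = &RealRegRead;

void InstallDetour() { RegRead = &RedirectedRegRead; }
```

This captures why old extensions kept working: the calling DLL is byte-for-byte the same, only the destination of the call changed.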
And so how old is, what is the oldest part of Visual Studio that you know of? That's a good question. There are people who have been in the org longer and would be able to give you a better answer to this, but Visual Studio .NET, or Visual Studio 2002, which is the first version of Visual Studio that coincided, I think, with .NET 1.0, was not the first version of Visual Studio or its predecessors. And I'm sorry if I get anything inaccurate; this was a long time ago, even though I was a Visual Studio, or whatever it was called, customer for much longer than I worked at Microsoft. But I know at the time there were VB6 and the Visual C++ IDE, those two merged, and Visual Web Developer, I think is what we called it way back in the day, you know, the classic ASP one. Wasn't there an InterDev? Yes, Visual InterDev, that's what it was called, way a long time ago. So all of those IDEs had, this is back in the 90s, right? Now we're in the mid-90s, something like that. Yeah, yeah. And so some of them had language support that we wanted. One of them at least provided a pretty good basis for a shell that would encompass everything. And so we merged all these small, very focused products that were focused on one language or one particular workload, as we call them nowadays. We found a good basis for a shell, and then we just started bringing things in. And that's probably when we came up with packages, I'm guessing, or maybe the package concept came from the base shell that we took from one of the other applications that we already had. But yeah, it was a very, very long time ago. So that code still exists. Like we still have a bunch of that from back in, you know, InterDev times that still lives in Visual Studio today. Probably, yeah. I mean, Visual Studio is way too big of a product for us to rewrite it for rewriting's sake.
Most people at Visual Studio would love it if Visual Studio was a pure managed application. And that might be a question that our customers have: why is Visual Studio mixed mode? Why don't you just make everything managed? And you commented on the APIs and how the age shows. Most of us would love developing an all-managed application. So it's not a bad idea, it's just so big. And we want to deliver added value to our customers with every release, and our releases are happening so frequently now. We just don't have time to rewrite native code that works. So we will continue to be a mixed mode application for the foreseeable future. And as you say, some of these APIs that make up part of our VS SDK are based on interop assemblies that come from old COM interfaces that still do the job. And in some cases, I mean, if they're just obsolete and they don't do the job very well anymore, then we'll deprecate them and offer new ones. But sometimes, even if they're doing the job but they're particularly arcane, we will offer managed wrappers that make things much easier to use. But you'll deal with IVsSolution, and that's a native interface that's been around for a very, very long time. And so that's why you get these HRESULTs from a lot of our methods. A lot of people will call these int-returning methods and not check the result, and have no idea that, unlike a managed method that would throw at you, it's not necessarily gonna throw at you. It's gonna return an integer. And this is where you get into the weirdness of the mixed mode application: as long as it is native code that's implementing that COM interface, you can't tell. That's the whole point of COM. You don't know what's on either side of this interface. You're not supposed to have to know. And that has, by the way, empowered us to replace; over time we have replaced a bunch of native code with managed equivalents.
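The HRESULT point is worth seeing concretely: a COM-style method reports success or failure through its integer return value, and ignoring it silently loses the failure. A minimal portable sketch of the convention, these definitions mirror what winerror.h does, and the method itself is an invented example, not a real VS SDK API:

```cpp
#include <cassert>
#include <cstdint>

// HRESULT is just a signed 32-bit integer whose sign bit signals failure.
using HRESULT = std::int32_t;
constexpr HRESULT S_OK   = 0;
constexpr HRESULT E_FAIL = static_cast<HRESULT>(0x80004005);  // negative
constexpr bool SUCCEEDED(HRESULT hr) { return hr >= 0; }

// A COM-style method: the status comes back as the HRESULT, the real
// result comes back through an out parameter.  Nothing throws.
HRESULT GetSolutionCount(bool solutionLoaded, int* count) {
    if (count == nullptr) return E_FAIL;  // failure is just a return value
    *count = solutionLoaded ? 1 : 0;
    return S_OK;
}
```

Callers who write `GetSolutionCount(true, &count);` without checking `SUCCEEDED(...)` have exactly the bug Andrew describes: the failure is reported, but nobody is listening.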
The error list used to be native, and it didn't scale very well. But in our age of Roslyn analyzers and very, very large customer solutions, you can have hundreds of thousands of entries in that error list. The native code never would have handled that. So that was an opportunity where we could fix the native code, or we could start from scratch in managed code and make it uber, uber scalable. And that's what we did. So... And a lot of it has to do with scalability and things like, we can't easily make things asynchronous, right? The error list is now async; it didn't used to be async. We also did a brand new implementation of Find in Files, which I think was also one of those super old native things that we simply couldn't maintain. We couldn't add new features to it because it was basically too old, I think. And so that's when we take the opportunity to say, okay, we have enough features we wanna add so that we can add customer value; now is a good time to rewrite into managed code. Yep. Dev10, so Visual Studio 2010, that's actually when I joined the team, early, early in the Dev10 cycle. We were actually scoped to, I think, rewrite almost the whole application in managed code. And that's when we learned, okay, let's not do that. But we did rewrite some very significant pieces; that's when the editor became managed. But we decided to have even the managed implementation honor the old native interfaces so that we wouldn't break extensions. And that was, in retrospect, about as much work or more as it would have taken for us to fix Visual Studio and delete the old native interfaces. And we would have unhinged ourselves and been able to get even better performance. But we really didn't wanna break extensions either.
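The swap described here, replacing a native implementation with a managed rewrite behind the same interface, works because callers only compile against the contract, never the implementation. A portable C++ sketch of that idea; the names are invented, and in the real product the stable contracts are COM interfaces such as IVsPackage and IVsSolution:

```cpp
#include <cassert>
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// The stable contract callers compile against.  As long as this interface
// is honored, callers cannot tell which implementation is behind it.
struct IErrorList {
    virtual ~IErrorList() = default;
    virtual void AddEntry(const std::string& message) = 0;
    virtual std::size_t Count() const = 0;
};

// Stand-in for the original (old native) error list.
class LegacyErrorList : public IErrorList {
    std::vector<std::string> entries_;
public:
    void AddEntry(const std::string& m) override { entries_.push_back(m); }
    std::size_t Count() const override { return entries_.size(); }
};

// Stand-in for the scalable rewrite: same contract, different internals
// (here it only counts, to keep the example tiny).
class ScalableErrorList : public IErrorList {
    std::size_t count_ = 0;
public:
    void AddEntry(const std::string&) override { ++count_; }
    std::size_t Count() const override { return count_; }
};

// The factory is the only place that knows which implementation exists.
std::unique_ptr<IErrorList> CreateErrorList(bool useRewrite) {
    if (useRewrite) return std::make_unique<ScalableErrorList>();
    return std::make_unique<LegacyErrorList>();
}
```

This is the trade-off the transcript describes: honoring the old interface in the rewrite cost real effort, but every existing caller, including third-party extensions, kept working without a recompile.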
And of course the editor is a very, very popular focal point for extensions, because that's just where developers live and breathe all day, every day. Yeah, right. So maybe let's talk about Visual Studio 2010, because that was a big change for us, right? We went from whatever native UI mechanism we had at the time to WPF. So WPF didn't exist back in the 90s and early 2000s when sort of modern-day Visual Studio started. But then in 2010, we did that whole overhaul; we did the editor, it's also now all of a sudden WPF-based, I think, right? But there's a whole story to why did we stop? Why didn't we do a full transition from native to managed at the time? Like, I remember there was some talk about, I wasn't there, so I don't know, but I heard sort of rumors that there were efforts, like the editor was one area, there were a bunch of areas, and at some point it was decided, let's stick with the editor and maybe the UI layer, and that was kind of it. Is that just a rumor, or do you know something? So that's fair. We did change the main window to WPF, toolbars are WPF, many of our dialogs are WPF, and of course the editor is. But for the same reason, you can't just take all of your Win32 or WPF applications and suddenly they're Windows XAML apps on the Store. I mean, it's expensive, and if you're selling a product, especially, and if you're not, you're doing it for free and so it's even harder, right? But most of us are developing software as part of our business, and you gotta put the value where the customers will actually see that value.
And so there's a lot of UI in Visual Studio. And now with Windows Store apps, you've got these XAML islands, as they call it, so you can actually bring your old, whether it's GDI32 or WinForms or WPF, whatever it is, now we can mix and match different UI technologies, which is great, and that meets customers where they are. A decade ago, when we were writing Dev10, we wrote a similar technology that just allowed us to weld together GDI32, MFC, WPF, and WinForms. We needed to be able to say, well, we can't change this whole dialog, but there's a new page in it. Like the Tools, Options dialog, right? That might still be a native dialog, I don't know, but certainly a lot of the pages in that Tools, Options dialog have WPF controls in them. So internally we call it Gel. It's just a collection of patterns and interfaces and the supporting code to allow WPF to be hosted inside of an arbitrary other window, whether that's WinForms or GDI32, I suppose, or MFC, whatever the windowing technology is. And we can go either direction: WPF can host the old thing, the old thing can host WPF. And this is a pattern, sort of like Windows has come on a similar journey, right? You get mostly this new modern UI, but if you drill in deep enough in some of the settings, suddenly you find yourself in the old Windows 95-era control panel. In Visual Studio, depending on which UI you're in, you'll say, hey, this is a gray dialog and this is a black dialog, it doesn't follow my theme, or it does. You can kind of see which era a dialog in Visual Studio came from.
And once in a while, again, if we're rewriting a dialog, we might, you know, if we're using WPF, we might theme it, and then, oh, this looks fresh and new, and we like it when things look consistent and fresh and new. But again, at the end of the day, a little bit of polish on the UI that makes it look consistent is, generally speaking, not as valued by our customers as adding features and fixing stability and fixing performance. Right, so we wouldn't necessarily go in just to add theming to a certain type of dialog that maybe doesn't have that much usage or is more of an edge-case type of scenario or something like that. But once we get to work on it anyway, we will then do the work there. So Eric has a comment. He says, it is extremely impressive that you're able to deliver many new features given the legacy core code base. Yeah, I'll agree with him. It is, and I'm not an engineer, so I don't know what it is, but it sounds extremely impressive to me. Yeah. Good job. I mean, the legacy code base shows through a little bit. It gives us some anxiety. We have a method in native code that opens the solution. So File, Open, Project: when you load a solution, that's still native code that loads it. I mean, granted, it calls a lot of managed code to get the work done, but there is a method that's, I think, 5,000 lines long. One method. And it's native code. It's a beast, as you might guess. No one dares touch it, except, I need to add a feature, so I'm gonna add it and make it 6,000 lines long. But I'd rather that than refactor it into smaller methods and follow all best practices, right? Right. So it is sometimes painful to make Visual Studio do what it needs to do, and sometimes that holds us back. But I agree, and I appreciate the recognition there. It is impressive what we can do, considering how much legacy code is there. It's kind of funny. You mentioned the 5,000-line method.
I don't, I think everyone, every team in the world has got a method like that. I've never worked in a company anywhere where we didn't have one. I think we had, well, this was a class. We had a class that was 16,000 lines with only static methods in it. Like, this is before Microsoft, old days, but I think it's a very common thing. So now people, you know that Microsoft, we do it too. It's not just you out there. Andrew, in 2010, I think something else happened. I might be wrong. It could be a different version, but we changed the whole menu system, right? The way you can customize the menus changed, or maybe we're talking even longer. No, I think you're right. It was based on Office 97 or something. Like there was, if you could go to a toolbar and you can do right-click and say customize, you're going to get a dialogue that I think comes from some version of Office, right? I don't know where it came from, and what you're describing is the more recent behavior. Yeah, with Dev 10, we lost some things. We rewrote a lot of it in WPF, but some of the, I remember when 2010 lost it because I felt like this is a sad loss. Now, I haven't heard anybody complain and I honestly don't miss it as much as I thought I would. It used to be in 2008, I think it was ALT. You could ALT drag menu commands and toolbar buttons. You could rearrange your toolbar, all you wanted just by ALT dragging. You didn't even have to go to a customized dialogue. Oh man, I want that, I want that now. Yeah, it was so slick. But as you can imagine, that took a lot of, that took an opportunity that probably doesn't come up very often for some developer or some team to say, hey, let's polish this otherwise very refined UI or this UI toolkit. Let's make toolbar, and I suspect it came from MFC because I think any MFC app could maybe do this. But someone decided, let's invest in this and really, really give this cool, nice shiny feature. 
And when we moved to WPF, now, we gained a lot of things from moving to WPF. A whole lot of things that have even shown dividends lately, with, like, high-DPI monitors. That wasn't such a thing back in 2010, but WPF put us in a great position to have great support for high DPI, which we never would have had on the old system. But yeah, we didn't recreate that particular customization experience of changing the toolbars and the menus. We still give it to you through that dialog, and maybe, as you were saying, maybe we got that from Office, but it's not what it used to be. Okay, right. Okay, so yeah, the high DPI is kind of funny, because we added support for high DPI in Visual Studio 2019, and it was problematic, right? Because, as you were saying, there were islands in Visual Studio that were not based on WPF. If you were WPF, the different teams owning the different dialogs didn't have to do any work, but some had WinForms UI and some had native UI. And then you had all the third-party extensions out there. Some of them also had non-WPF UI, and all of a sudden things started not working. And it was a bit of a mess to put out the right guidance. We put out a NuGet package that people could use when they hosted their UI inside Visual Studio. Actually, they could use it with any app, I believe, not just Visual Studio extensions. But that was a big ordeal, moving everyone over to that system. So I think that says a little bit about another thing we are faced with that maybe not that many other people are: if we change something in one area, it affects hundreds of engineers on the team, who now have to adapt to this thing that we added over here somewhere. And is that the reason why sometimes there are things that we just don't do, because they're too risky or they have too big of an impact? Like, how do we balance that versus regular feature development from a priority perspective?
That's the million-dollar question right there. I say million-dollar, probably a multi-million-dollar question. So we've got product planning. I love how we do it in this org. Product planning is a bi-directional process where marketing talks to upper management and they say, hey, the next version of Visual Studio really should do this, based on what customer demands are, what the market is doing, what our competitors are doing. And then from the grassroots efforts, individual devs and people managers also get to say, hey, the users of our individual feature really want us to fix these bugs or add this one feature. And so at the beginning of each cycle, we kind of blend these two and figure out what we can scope in. And so, as I understood your question, how do we decide what to redo? It's a combination of those discussions, of the high-level and low-level observations, and interviews about what customers want. And sometimes customers will ask the wrong question. There's a name for it, I think. Here's a classic example: customers will say, we want 64-bit Visual Studio, which isn't so much a request as a prescription for a fix. It's a solution to the problem, right? Yes, and often the solution that customers have in their mind is either more expensive, or complicated, or impossible, or not gonna solve the problem as well as they'd like, or it's gonna cause, as you say, issues for neighboring teams, or it's gonna break all your extensions. You didn't think about that. And not to overly criticize 64-bit VS, that is something we've looked at many times, and we've looked at it recently as well. But as we zoom in with customers and say, well, why do you say that? We hear things like, well, my solution's big and I run out of memory.
Okay, well, we can address that concern at least somewhat, if not significantly, by doing other things that won't be as disruptive and that allow more incremental progress to be shown. And so if you launch devenv today and you're watching the process tree with Process Explorer, or you're just looking at Task Manager's process list, you'll notice Visual Studio launches a lot more than just devenv. We've actually got at least half a dozen other processes running, and some of these processes are 64-bit. Roslyn, for example, is a humongous memory hog, and that's no disparagement of Roslyn; they deliver a lot of value for that memory, but if you've got a large solution, it's a lot of memory that Roslyn has to take. But they've moved most of their memory consumption out of process, and often into a 64-bit process. So we're actually breaking the boundaries of 32-bit address space restrictions, and customers are able to load much larger solutions, because we move a lot of this address space out of the process. And you might think, with 64-bit you never have to worry about it, you can avoid all that work. Well, yeah, but it costs you in other ways. 64-bit means a larger pointer size, and so what used to fit in two gigs now requires three gigs. So now your RAM doesn't go as far. And that's not as big a deal now that lots of people have 8- and 16-gigabyte laptops, and a lot of our customers do, but some of our customers are still using netbooks with two cores. So we wanna make sure that we're addressing all of our customers' requirements. And there was some neat innovation a year or so ago that the debugger team came out with. Customers were running out of memory debugging, I don't know if it was Chrome, but some very, very large native application. If you debugged it, as soon as you stepped into it once, VS would bomb. I think it was Gears of War, one of those big AAA games or something like that. Right.
And if I recall, it was because we were loading all the PDBs. And it was simply the loading. Yeah. Now, if we had moved Visual Studio to a 64-bit process, it wouldn't have bombed, but it would have swelled to, who knows, maybe 12 gigabytes of your memory. And if you don't have 12 gigabytes of memory, that means we're paging to disk, and it becomes a slower experience. Every little bit adds up, and you end up with a sluggish VS. But because we're in that 32-bit constraint, the debugger team thought innovatively, saying, hey, what if we rewrote how we read PDBs so that you don't have to load every single PDB into memory all at once? Let's just read the little bits we need and not actually map the whole thing into memory. And it was amazing: within the 32-bit address space, they showed off this incredible improvement. We're doing a lot less work now, so it'll be faster even on slower CPUs, and we won't need as much physical RAM. That's amazing. So I think actually that native debugger, we did end up moving it to 64-bit, I'm pretty sure. But that was orthogonal to, hey, let's just consume less memory if we can, which seems to be maybe the better fix, right? Or the more appropriate fix. Is that what the problem is? When people say, we want 64-bit Visual Studio, they've come up with a solution. Is the problem they have, the one they feel 64-bit is the solution to, about running out of memory? What type of problem would 64-bit solve? From what I have heard, and I haven't engaged with these customers directly very often, but the PMs have, and what I've heard through the grapevine is, yeah, they're usually thinking about memory pressure. Being 64-bit has a few other fringe benefits, like the shape of call stacks: the calling conventions are very fixed, whereas x86 didn't define those things as strictly.
And so it makes certain things easier. But I think most Visual Studio customers aren't even developing extensions for Visual Studio, they're just consuming Visual Studio. And for them, with 64-bit, I don't know of any other really compelling thing they might want besides solving the memory pressure problem. Right, but it's something that keeps coming up. Every time we blog on the Visual Studio blog, regardless of what the topic is, people ask about 64-bit. At conferences, people come up and say, hey, when can we get 64-bit? And I often say, hey, there is already a lot of stuff that is out of proc. It's great to hear that Roslyn is now out of proc and 64-bit as well; that solves the problem for that particular area. So what are some of the next steps? Do we have current plans that you know of for bringing more stuff out of proc? So we have a lot of these processes that you'll notice load next to devenv, and they start with a ServiceHub prefix. We've actually had that for years, and that is our convenient way, because, and this may not occur to you, we make creating a new process, a new exe, so easy in Visual Studio: just File, New Project, create a new console application, you're done. At Microsoft, introducing a new exe actually has a lot of baggage, paperwork. We need to know whether that process crashes, it needs to send telemetry, there are all sorts of things that we have to do. And so, in trying to encourage feature teams to move out of proc, we want to lower that barrier to entry as much as possible. And so we offer Service Hub as, hey, this is a process that is already defined: just move your DLLs over to it, and we'll host you and offer these services so that you can communicate back to Visual Studio. So we make it as easy as we can. And like I said, we've had this for years, many teams have moved over, and we are always encouraging more teams to do it.
And we're looking for ways to make it even easier, not only to move, but to have higher-fidelity communication between processes. And some of this has been documented so that extensions can host themselves in Service Hub, if I recall correctly. If not, that's absolutely something we're looking at doing soon, if we haven't already. So we want that. They can, but I don't think we documented how to do it. So it's possible, but you will have to guess how to do it, I think. Which is not where we want to be. And I know that's gonna improve in the future, because we definitely want more of it. Not only does it reduce memory pressure, it increases reliability. A lot of our customers also use Visual Studio Code, and a lot of them love it. I love Visual Studio and I love Visual Studio Code. Visual Studio Code loads very, very little of an extension in proc. Almost everything runs out of proc, but they've defined the hosting process and the API such that you wouldn't even know it. It just feels natural to develop extensions out of proc, and that's where we'd love to be for Visual Studio. And when those extensions crash, you don't lose any data. Your editor buffer and everything is totally protected, and Visual Studio Code just asks, hey, do you want to restart the extension host process? That's a wonderful reliability experience that we'd love to bring to Visual Studio as well. But it'll take time. Visual Studio Code was new, and they learned a lot of lessons from us and other applications, so they have a great architecture for that. We come with 30 years of experience and bruises, and it's going to take us some time to gradually migrate to a world where more and more code is out of proc, including customer extensions, so that we can have some of those same benefits. But I guess it's fair to say, so Eric had a question here: is out of process a usable path towards 64-bit Visual Studio, and if so, are you investing in that?
So I guess the answer is yes and yes, where it makes sense, right? And even for extensions, we're looking at how to, like you're saying, be able to host them in that out-of-process Service Hub type of process. And that could then potentially also be 64-bit. It's something we're looking into, but it's early, early days. So we don't really have anything concrete to share at this point, but it is definitely something that's going to come in some form or shape. We don't know what it's going to look like exactly. When we know more, we should do another show on that. That would be kind of interesting. Yeah. So today, Andrew, Visual Studio is running on .NET 4.8, or 4.7.2? We require 4.7.2. I think depending on the workloads you selected, we might install .NET Framework 4.8. Most people have 4.8 anyway, because of Windows updates, but we don't require it as a product. Okay. So yeah. So when you write an extension for Visual Studio, it has to target at least 4.7.2, but it can be 4.8 if you want. If your extension targets 4.8, then you'll run on a subset of the installations. I don't know what size that subset is. So you could. But I should caveat this, because there's something I don't understand about it. When you're targeting Visual Studio 2017, only .NET 4.6 was guaranteed to be on the box. I have, and some people have, developed extensions that targeted 2017 but required .NET Framework 4.7.2, which on the surface, I reasoned, is fine, because I'll require 4.7.2, my extension will only work with that, and most customers have it on the box, so that's fine. But apparently there are some very, very subtle issues. I suspect it depends on which APIs you access, or whether the .NET Framework is in a certain mode, and this goes back to our hosting discussion from earlier.
When you host a CLR, or even when you're just running as a .NET Framework app, your app's .exe.config file can tell the .NET Framework which version of the runtime you want it to emulate. So although you have .NET Framework 4.8 on the box, unlike .NET Core, where you can have as many runtimes as you want, the .NET Framework is a singleton on the machine. And so 4.8 has to be able to emulate the behaviors of 4.6, 4.5, 4.0, and it does that at a process level, based on the content of that .exe.config, which says which version of .NET Framework you had in mind. And Visual Studio has that too. When we host it, we say, look, and back in 2017 we said, we have 4.6 in mind. So the .NET Framework is gonna behave like a .NET 4.6 application. So if you load an extension that targeted 4.7.2, it might work, it might not, depending on whether you trip over one of those behaviors that are actually different between the two versions. Right, right. So does that then mean that if Visual Studio switched to .NET Core, or .NET 5 by the time that happens, would that be the answer to all sorts of things, like mismatched .NET Framework versions, easier 64-bit, or better performance? Like, have we even looked into it? What's the state of upgrading to not-even-released-yet versions of .NET? So that's a really interesting question. .NET Core promises a lot of performance improvements. It's exciting to read the .NET Core team's blog, and we can stop saying Core, since .NET 5, as it's branded, drops the Core from the name. But the blog talks about some of the significant performance improvements, and we've heard from customers that they are impactful: memory and speed improvements are there. As far as version differences, .NET Core, I believe, does not support multiple runtime versions in the same process. Although their APIs allow for it, so maybe they're allowing themselves the freedom to add that in the future, but if I recall, their documentation says you can only load one.
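That emulation knob is the `supportedRuntime` element in the host's `.exe.config`. A minimal example of what such a file looks like; the version numbers here are illustrative, chosen to match the 2017-era scenario described above:

```xml
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <startup>
    <!-- version: which CLR to load (v4.0 covers all 4.x installs).
         sku: which .NET Framework version's behaviors the loaded
         runtime should emulate for this process. -->
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.6" />
  </startup>
</configuration>
```

Because the setting is per process, every assembly loaded into that process, including extensions built against a newer framework, runs under the quirks of the version named here.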
So even if, hypothetically, Visual Studio moved to .NET Core, we would still be deciding what version of the runtime everyone in the process would use. So say there was an extension that targeted VS version X, where X was the first one on .NET Core, and let's say that version shipped with .NET 10, okay? Somebody was targeting that version of VS, and then in the next version of VS we loaded that same extension, and that version of VS shipped with .NET 11: they would be running on 11, not 10. So this whole .NET Core side-by-side runtime thing is awesome, but it's a per-process thing. You could run two versions of VS side by side on literally different versions of the runtime, as opposed to .NET Framework's emulation. You could run them on different versions of the runtime, and that's great, great for isolation, great for reliability. But the extension would need to target the lowest version of .NET Standard or .NET Core that it wanted to support for VS, if that answers your question. Yeah, I was about to say, doesn't .NET Standard remove that issue? So you could say, hypothetically, Visual Studio 2025 supports .NET Standard 5, and any new version of Visual Studio after that which also supports .NET Standard 6, 7, 8 will still support .NET Standard 5, right? Because I think that's the way it's worked so far: they're backwards compatible. So does that solve some of this? Yes, and kind of no. Yes, it does solve it, and that is the idea of .NET Standard. However, the .NET team announced a few months ago that .NET Standard 2.1 is the last version of .NET Standard. Just as .NET Core 3.1 is the last version branded as .NET Core, and the next one is .NET 5. And because it's all part of .NET 5, and 6 after that, Mono and the Mono runtime and the .NET Core runtime are being unified: they'll share a BCL, and there will be, I guess, some sort of pluggable or swappable runtime underneath your application based on what platform you're targeting.
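For contrast with .NET Framework's machine-wide install, a .NET Core app pins its runtime per process through a `runtimeconfig.json` generated next to the executable, which is what makes the side-by-side story per-process. A simplified example; the versions are illustrative:

```json
{
  "runtimeOptions": {
    "tfm": "net5.0",
    "framework": {
      "name": "Microsoft.NETCore.App",
      "version": "5.0.0"
    }
  }
}
```

Two applications on the same machine can name different runtime versions here and each gets its own, but everything loaded into one process, host and extensions alike, shares whatever the host's file selects.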
And I don't know all the details of that, but basically everything's going to be branded as .NET, and there's no more .NET Standard, because there's just .NET. So you can target .NET Standard 2.1 if you want today, but then you've already given up .NET Framework. So .NET Standard 2.0 is the last version where you can target both runtimes. So if and when Visual Studio moves to .NET 10 or .NET 5 or whatever it is, that's what people will target. They won't target .NET Standard, because that'll be years old, and it'll be a tiny subset that doesn't matter, because once Visual Studio is on .NET Core, you would just target that version of .NET. You'd be targeting .NET 8, say. There wouldn't be a standard to worry about. So if you have any .NET Standard 2.0 code in your Visual Studio extension today, or if you were writing an extension and you have .NET Standard 2.0, then that's relatively future-proof in this world, isn't it? I mean, obviously you can't write .NET Standard code against the Visual Studio APIs, because they're .NET Framework-only, right? But the underlying business logic or whatever you might have would continue to work into the future. Also if it's 64-bit, right? 32-bit versus 64, like, that doesn't matter at that point, I think. Is that right? So that's a different axis. But wait, we'll talk about the .NET Standard part first. Yes, whether you're running code that's going to run in Visual Studio's process or any other process, to the extent that you can target .NET Standard 2.0, that is a fantastic idea. It's great at future-proofing, and allowing you to run, well, not even just future-proofing, it's today-proofing. If you want to run on Mono, .NET Framework, and .NET Core, .NET Standard 2.0 is the way to go. It's awesome. As far as the 32-bit versus 64-bit switch, both native and managed code have to be written with multi-architecture in mind. Most managed code is multi-architecture automatically.
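The split described above, VS-specific code on .NET Framework with the business logic kept portable, maps to a project file choice. A hypothetical class library that keeps its logic on .NET Standard 2.0 so it runs on .NET Framework today and .NET (Core) later would look roughly like this SDK-style project:

```xml
<!-- Hypothetical layout: the extension's VSIX project references VS SDK
     assemblies and targets .NET Framework, while the business logic
     lives in a separate library like this one. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
</Project>
```

The payoff is the one the transcript names: if the host someday moves runtimes, this library loads unchanged, and only the thin VS-facing layer needs retargeting.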
That's because most managed code doesn't use pointers, and when it does use pointers, even that doesn't tend to be too problematic. And native code, again, most native code can just be recompiled for 64-bit unchanged. It's the edges, though. Both native and managed code can get this wrong. In native code, you might use an int where you should have used a pointer-sized type. In fact, this shows up in our interop assemblies. Back in the day, when we were writing our COM interfaces in IDL, there was syntax we could use to say this represents a pointer, and other syntax to say we want a 32-bit or a 64-bit integer here. And we used all of these. Occasionally, in these IDL files, we were dealing with something like a handle, and internally, say in the project system, someone is filling this value with a pointer. But the IDL defines it as exactly 32 bits. Well, crap. We should have made that interface field be pointer-sized instead of a fixed 32 bits. Because when we recompile that native code as 64-bit, now the project system is broken. It thinks it can cast this handle to a pointer, but it can't, because the pointer is now 64 bits. So you lose data on one side, and on the other side, you get a corrupted value. So that can happen in native code. And on the managed side, too, if you're writing interop with native code, you might have a struct where the headers say, hey, this is a pointer. In a 32-bit process you could have gotten away with just using a 32-bit integer and been fine, but in a 64-bit process, that same managed DLL fails. And so in C#, you can tell it, look, this is specifically a 32-bit image DLL, or 64-bit, or AnyCPU. And AnyCPU is the default. It doesn't mean you're going to succeed on both architectures. You might. You probably will, unless you are doing interop, and then you have to be careful. Okay. Andrew, this is super cool, hearing about all this stuff. We're at the end. We got a comment here from Jonas. He says, this is not a question.
I just want to say thank you for an amazing developer experience. So thank you for that comment, Jonas. That's awesome. And then Eric has a question here as well for you, maybe, Andrew, because this used to be owned by you. He asks, any plans to improve or release the new project system API? I assume he's referring to what we call the Common Project System. I would love to say yes. We started that in Dev10 with the idea that this would be the new project system API, for so many reasons. That hasn't happened yet. I can't say what the plan is for that; I'm afraid I can't answer that question. I'm not on the team anymore, so I'm not even privy to it. And if I were, I'm not sure I could say. Okay. All right. Well, we'll try, right? Thanks for the question there, Eric. Eric is one of our top extenders. Well, thank you, Eric. Yeah, absolutely. So we're at the end here. So, Andrew, thank you so much for joining us. It was great having you on. I hope we can do this again some other time. I assume we're going to get a lot more questions of a very technical nature in the months to come that you can help shed some light on. So thank you for that. My pleasure. And to everyone else watching live, thanks for the questions. And if you're watching this on YouTube, make sure to subscribe to the Visual Studio YouTube channel. In the description below, there should be links to both Andrew's and my own Twitter, so that you can stay in contact and leave a comment. We'll monitor them. So thank you so much for joining. See you next time.