Good morning, everybody, and welcome to this week's episode of the Visual Studio Remote Office Hours. My name is Mads Kristensen, and I am delighted that you are joining us here today, because we have an exciting show for you. We're going to talk about machine learning and artificial intelligence inside Visual Studio, because we actually have some inside Visual Studio. What does it do? How does it help us, and what will it do for us in the future? These are all great questions that we're going to look into.

Now, I will give you a little update on my home office, because I kind of do that every time. Here's my latest little gadget that I bought. This is a USB speaker; you can see there's a cable here. That just sits up on my pegboard here on my workbench. There's a picture of what it looks like this morning, so if you go check out my Twitter, you can see what that looks like. But this is basically like a $10 USB speaker, and it's a heck of a lot better sound, let me tell you, than a laptop. So that's awesome. I'm gonna put that back up there. Slowly getting to a more professional state here in my very unprofessional home office.

So with that out of the way, let's say hello to our two guests. Katie, why don't you start by introducing yourself?

Hi, my name is Katie, and I am one of the program managers for Visual Studio IntelliCode. I'm looking forward to this conversation today.

Awesome. And Mark?

Hey, yeah, hi Mads. You and I have worked together for a long time. I've been a program manager on Visual Studio for like 12 years, something like that now. I don't know, I'm getting too old. And now I'm working with Katie as a program manager on the IntelliCode team; I've been at this for like two and a bit years now, and thoroughly enjoying it. So I'm looking forward to chatting with all the folks about what we can do with AI.

Fantastic. Yeah, I think, Mark,
I think the first time I met you, you were the program manager for the Visual Studio editor. So just the editor itself, which dealt with, you know, syntax highlighting and IntelliSense and all that sort of stuff. So this is quite some years ago, I guess.

For the people that are online, please remember that on the right side of the screen you have a Q&A panel. That means you can ask us any question you like. Anything for Mark and Katie about machine learning and AI inside Visual Studio, but you can ask any question about Visual Studio, and with three people here that work on the Visual Studio team, there's a good chance we're able to answer it. So just keep the questions coming and we'll answer them as we go along.

Okay, let's get into it. So, Mark, I know that you are on the... well, you're both on the IntelliCode team, and I've heard a lot about IntelliCode. I think I know what it is, but every time we talk, I learn that it is more than I think it is. It happens every single time: it's expanded, the machine learning is doing something I didn't even know it could do, and stuff like that. So things are moving quick, it seems like. But what is IntelliCode? Can you give an overview of what it is and how it uses machine learning and AI?

Yeah, glad to do it, Mads. I mean, you know, I was looking back a bit yesterday and realizing that I've been playing this game now for two and a bit years, working on IntelliCode, and we first previewed some stuff way back in 2018 at Build. But we've really kept the same focus all the way along, right? So from the start, we were looking at ways that we could use the power of the machine to delve into your code as data, and the things wrapped around your code as data, and present back to you, based on the kind of wisdom that it harvested and distilled down, some useful insights, right?
Some useful insights that can actually help you to do your job in a more effective fashion. And so when we talked to customers, when we talked to developers, we found that there were various areas where there were struggles in, you know, the development life. For instance, they might say: well, I'm coming at an unfamiliar API. I really don't quite understand how to use that API, but I could really use some assistance. Or actually, and this is a conversation we've been having more recently: I'm refamiliarizing myself with an API. I don't quite know what to do; I'm really trying to get myself back up to speed again whilst using this thing, and I want to be effective.

So that got us thinking about how we can do better in terms of not just the completion list but other places too, in terms of presenting the right stuff to you. So, helping you code with confidence, right? So that you can actually get right into something, and the machine can help you out by presenting the things that are your most likely pattern choices. The power of patterns is really a big theme across the whole of IntelliCode: machines can now delve into patterns in a way they were simply not able to a few years ago. And all of our code, and all of the things around it that I kind of call code metadata, all of that stuff is like a substrate (there's a big scientific word for you), but, you know, just a place where things can grow, or where we can grow that insight and bring it back to you, right?
So that's really the founding insight: we believe that machines can grab those insights and bring them to you at your point of need. And then we talk about how that can help you out when you're trying to find issues. So think about anti-patterns; think about things where your team has done a great deal of work, perhaps, to go ahead and fix some problem that's actually a common anti-pattern that keeps coming up in every single flipping code review, and it drives you nuts. What about if the machine could spot those patterns and put those fixes out there for you, so that you didn't have to keep reminding everyone in every single code review? That would be neat, wouldn't it? So we talk about that as how we can help you find issues faster.

And then we talk about focusing your reviews. You heard a little bit just there about how code reviews can become messed up with a whole bunch of repetitive tedium. We were looking at ways we can relieve that pain as well. But all of this comes down to the capacity of machines to learn huge things across giant substrates, things that would be really hard for a human to learn, and to bring those useful insights back to you right where you need them. Does that kind of make sense?

Yeah, I think that's really helpful.
So, you know, it reminds me: like 10 years ago, when I first started working on Visual Studio at Microsoft, I had this whiteboard in my office, and at the very top I had written in big letters "the thinking IDE". The idea was that an IDE should be sort of anticipating your next move and making it as seamless as possible. And it was sort of a pipe dream, right? It was more like a vision, a North Star to shoot for. But what you're talking about seems like that. It seems like a way that we don't have to deal with the trivialities or the mundane things of programming; we can go straight to solving the problem that we were hired to solve, and we don't have to worry about, I guess, formatting and other such things in a code review that seem trivial.

Right, exactly. And it doesn't have to be restricted to the trivial either, that's the thing. These can be reasonably complicated things. I mean, we've got some stuff to show you later that I think you'll find fun, that will help you find actually quite complicated things and repeated patterns that you maybe wouldn't even have had time to spot, and that might have bitten you on the posterior later on. You know, that's something we want to make sure we let the machine loose on as well. So it's not just the mundane and the repetitive, but also the stuff that is too big for you to wrap your head around, because you just don't have time. Right, and we'll talk a little bit about that maybe a bit later, when we talk about suggestions.

All right, so that makes good sense. So it's both the mundane and the complex. So when IntelliCode was first introduced, it was doing something in my IntelliSense.
It was augmenting IntelliSense, and in the beginning, I didn't really know what to make of it. Like, my cheese got moved; something changed. IntelliSense had looked a certain way for maybe two decades or something, and now all of a sudden it was different. So what happened there, and how did that solve a problem? I guess my big question is: how and why did you decide to do what you did, and can you explain what it was that you did, Katie?

Sure. I don't know if you're still hearing me. Okay, cool. So, yeah, what we did was we actually learned and trained on open-source repos, and we learned the common usage of various classes. What you're seeing, as you see those autocomplete pop-ups, is what the top-used or most commonly used types or functions or methods are for that class. So it really enables developers who may not be quite familiar, or might just forget or not recall, what the right function or the right method to use in a certain circumstance is. It provides you with contextual method recall as you're typing.

So this is very helpful for a number of scenarios. Imagine you're a developer who's just onboarding to a particular class or namespace; this really just assists you as you're typing. It's also super valuable for even the more seasoned developer who might not have used a particular class in a long time, or might just not be super familiar with, say, an internal library, for instance, with one of our other features, team completions. It gets you set
You just have everything at your fingertips Okay, so That's really cool the it seems like Being able to figure out how to use something based on how other people use it Is pretty much what we do when we go and Google something right and or we go to stag overflow to find out like how do I use? You know this method and this class from this library that I just installed But now that's just that's just right there in my intelligence list and that's and that's all coming Is that all from github or is there any other? Sources to that and how do how do and how do you determine what sources to choose from? Okay So We determined which sources to use so they're right now are Are our base completions model which you're seeing? You know as you start typing in Visual Studio Visual Studio code. That is based off of Open top open source repos and so those are the repos that hope have over a hundred stars So we've we've done a couple things so we've trained our completions model based off of This learning of these open source repos so like seeing these top repos and just learning from those But we've also sort of curated that learning so we've we've actually it's a supervised learning model where we're we're actually like Saying which ones are the top even within those top repos What what are the top most use? Types are classes within those repos to ensure that you're getting the contextual usage not just sort of like The methods or properties that are used Just broadly you're getting sort of like a more intelligent or more catered sort of experience or tailored experience So I guess to answer your question Mads This is What we're providing for the user is not just sort of a comprehensive Dictionary of methods, but sort of providing them with the right Methods the right properties to use at the right moment and when they need it So we don't want to provide you everything. 
We want to provide you with what we believe will actually be useful, in order to enable you to be productive at the moment when you're trying to use it.

Okay, that makes sense. So, I don't know, maybe it was like a year ago, I came to you, Mark, and I said: hey, you know, Visual Studio extensibility is kind of a complex set of APIs. If you've ever tried to write a Visual Studio extension, you know that all the APIs there are maybe not so well documented, and it can be hard to find out exactly how to use them. So I went to Mark and I said, hey, can we index a bunch of sample repositories on GitHub and other apps that use these, so that we can give better IntelliSense, using those machine learning models, for all these APIs? And Mark was so kind to say, of course, and, you know, what did it take? Two weeks or whatever, and boom, it was there in the product, and everyone benefited. And we actually made it so that IntelliCode was now an optional but recommended component when you installed the workload for Visual Studio extensibility.

So I think that was an example where we saw there was a niche problem. Granted, it's a niche; Visual Studio extensions are not the biggest space in the world, right? But we were able to tailor and optimize the machine learning for that. So, Mark, is that possible if people own their own NuGet packages? They have some users of those NuGet packages. Can they also ask that you index them, to improve their users' experience with their APIs? Is that a thing?

Well, there's a couple of answers to that question, Mads, and I want to start from the top. One of the first things we heard from people when they started to use these new starred IntelliSense suggestions was exactly what you've said. It's like, hey, you know, I use this library or that library, right? But it's not super common, right?
Just exactly like what you were talking about with VS extensibility: it might not reach the bar of being in enough of the open-source GitHub repos for us to pick it up and include it in what Katie was referring to as our base models, right? So how can we solve that problem?

For some libraries, absolutely, where there's usage out there, we're happy to hear from people asking us to put certain library sets into our base models, and that can be a very effective thing for us to do if we know where there are great samples to be had. We can train on pretty much any GitHub repo if it's open. And so if we know it's important, you can point us at it, and we can go at it and do exactly what we did for the VS extensibility space.

But I'd want to ask Katie to talk a little bit about some of the things that are going on for cases where it might not be something that's for everybody, but where it's more about your own internal code bases, and you've got internal classes and stuff like that, right? Because that's the other problem that comes up. Once people get hooked on this stuff, what we've heard is that they kind of like getting these stars, but they were upset that they weren't getting them for other classes they were commonly using. Sometimes those other common classes are actually in your own code. And Katie, I don't know, do you want to have a bit of a chat about what we're doing for that?

Sure, Mark. Yeah, so what Mark is referring to is a feature that we call team completions. As Mark was mentioning, you can currently get IntelliCode starred completion suggestions from the base model, just as you're typing. So it's predicting what's next, but that's based off of common usage. And that common usage, as Mads and I were talking about earlier, is based off of training on the top open-source repos.
But what if you're working with an internal library, or you're working on code or types that aren't commonly found in open-source repos? What do you do? So for that, we have now enabled team completions. Team completions not only augments that base completions model that Mark was referring to with your custom types, but it also automatically shares that with everyone who has access to your code base.

So that is the super cool part. Let's say that you're working in a Git repo, and I want to train a team completions model so I can get those starred completions for my own types. Well, once you do that training, you've also shared it with the rest of your team automatically. Anyone who has access to that Git repository automatically gets all of those starred completions for those unique types, which I think is super, super cool.

So, yeah, I think that, combined with the context of your code, you can just imagine: as you're typing and as you're developing, you're starting to not just code faster, but you're also being assisted along the way, so you can stay in that developer flow with these starred completions.

Okay, so if I have an open-source library, for instance, or a NuGet package that people are using, can I then do this team completions thing for my own set of APIs that I've developed, and then whoever has access to my GitHub repository will also get the benefit of those completions? Is that what I hear, Katie?

So it's somewhat like that. Let's be very specific: for right now, we've enabled this for just Git repositories. So if it's on GitHub, or if it's in Azure DevOps and it's a Git-based repository, then yes, you can run a manual training.
And then, once that manual training is done, that team completions model will be attached to that repository and shared with the rest of your team. However, there was a part that you talked about: a NuGet package. And that's something we have on our backlog, and we've actually been thinking about very recently. But we have not enabled attaching team completions models to NuGet packages, or those libraries, yet. We do have that in mind as a top scenario: enabling library owners to train a team completions model and then distribute it to the rest of their library users. And also vice versa: we have this idea, which we'd love, of a library user being able to train for a NuGet package and then, with the consent of the NuGet package owner, of course, being able to distribute it. So we are actually planning on lighting that up. But right now, team completions is just scoped to Git repositories. And, I don't know, I think Mark might actually have something to add here.

Yeah. So, I mean, Katie's really put it pretty well there. The game is a little bit different, because at the moment, Mads, you could come to my office, or someone could send me an email, right? Or send Katie an email and say, hey, I really, really want my package to be included, or my set of packages to be included. Those can get included today in our base model. So we have a way to do that for things that we think are important. But obviously that's not going to scale to the sheer number and variety of packages that are out there. So as a package author, as you were saying, we can't have every package author banging on our door and saying, hey, can you include me in the base model? Also, that kind of doesn't scale well in terms of the base model getting bigger and bigger.
So what we really want to do is make sure that if you're consuming a NuGet package, you get the right model for that package. But that requires all the things that Katie was talking about to be solved. To solve for that, we need your consent as a package owner to go train. We also need something that you may or may not have, which is usage. We need sample usage of your code, and that's because the model really only learns by spidering across usage: not so much looking at the API itself, but looking at how people use that API, in all the different contexts that it's used. And it's most useful when all of the contexts are expressed in that learning data. So it's not as easy as it sounds, and it's a problem we really desperately want to crack; as Katie says, it's on our backlog, but we haven't got there yet.

And I'm super interested to hear back if there's anybody who's got packages right now. Katie and I would love to be in touch with you about what that would feel like. What would your ideal workflow be? How would that fit in with you? Would it be that when you publish your NuGet package, the work of doing a training gets done automatically for you and somehow gets pushed up? How do you see that playing out for you? Would that work well? Would there be something else that you would want? Please, you know, all of this stuff is only going to work if we can get all of the ecosystem people, who are actually playing the game, in on how we're doing it.

Yeah. So I guess the whole scaling thing makes good sense. Of course it wouldn't scale if everybody's packages were in the base model; that makes total sense. And scaling is a hard problem to solve. I don't envy you having to come up with a good design for that one.

Yeah. Fortunately, there's an existing piece of scaling called NuGet, which basically lets us scale packages.
So we're keen to follow the patterns that package managers give us, and we just want to layer ourselves on top of that as much as we can. That's where our minds are on this at the moment. But if people want to tweak our minds on it, we're more than happy to have discussions about that.

Yeah. So if you're watching this live, please give us a comment in the Q&A panel on the right side, or if you're watching this on demand on YouTube, the comments are below. Please help us get some insights here for Mark and Katie.

So, just to change gears a little bit: you are doing IntelliCode, which originally was the augmentation of IntelliSense, but it's become more. And as I said in the beginning, every time I ask what you are working on, it's always something new and different from what I thought it was going to be. So beyond completions, or what we internally call IntelliSense completion, what else do you have going on in your team right now? What else are you tinkering with, Mark?

Wow. So, I mean, there's so much. So much. When I look at the horizon of what we're looking to do, we're really trying to help developers, you know, beyond completion. Some of these things are actually just extending completions. For instance, we're doing some work to make completions even better in places where they're not, to extend their scope. But we're also doing other stuff. And Katie was kind of alluding to this: how do developers learn? How do they understand? There are lots of steps. So there's a time, for instance, when you don't even know what API, or even what library, it is that you want to use. And typically, right now, you're going to be out there, be it on GitHub, on Stack Overflow, or Bing, or name your search engine of choice here; you're going to be out there searching for solutions to different problems. But if your goal is a code snippet, why wouldn't that be inside VS?
Another one that we're looking at is repeated patterns, and how we can help you with those. So when you're editing, I don't know if you've ever come across this scenario: you introduce something new into your code. You're doing some refactoring somewhere, and you then have that really tedious task, maybe you've introduced a helper function, of going ahead and introducing it in all the places that it applies in your project, or maybe even in other projects. And particularly if that helper function contains functionality that fixes a bug, then if you forget to introduce it somewhere, you might have hidden bugs parked in your code that you haven't fixed.

Or maybe your team has developed a new pattern for doing something that helps avoid some trouble. Maybe you've got a particular way of dealing with certain threading constructs, or you've got conventions around catch blocks and the way you throw exceptions, and blah, blah, blah. Teams have these things, and they have them for a reason. They learn these things by grazing their knees, by actually doing stuff and having trouble hit them. And then the team gets together and figures out some solutions. But the real trouble comes when you want to codify that and make sure you don't miss it, or miss opportunities for it, in other places. And that's where the power of the machine comes in to help you out, to help you figure out where those locations might be.

So if I can, I'd like to show a teeny tiny demo of how that might work.

Just start sharing your screen and I'll get it on.

Just bear with me for a moment and I will share my Visual Studio. Let me know when you're seeing my screen.

Yep.

Oh, hang on. There we go. I'm in Visual Studio right now. This is just a preview release, and this is a feature that we call IntelliCode suggestions.
And so what's actually going to happen here is: I've got this crafty code here, which does a Fahrenheit-to-Celsius conversion, and I'm just going to go ahead and replace it, because I've got myself a helper function that does that for me. And now I've got to go through my whole program and find all the places where that might apply. And that's not necessarily an easy task, because the variable names I use might be different, and there are some slight changes of formatting, so a straight-up find-and-replace might not do it.

But you might have noticed that as soon as I finished that second instance there, something happened. Over here I've got a new thing. It says "show IntelliCode suggestions based on repeated edits". It's been watching me as I go, and we'll talk a little bit more about how that works in a moment. When I click on that thing there, I see that it's saying, okay, there's another one of those at line 124. I double-click on it, and here's another light bulb, and it suggests the exact fix that I need, based on the pattern that I was typing. So if I take that, sure enough, it's fixed it for me. So you can see what's going on here. I'll just stop sharing my screen now, and we can talk a little bit about how that works.

So the magic there underneath the hood was a thing called PROSE. Now, PROSE is a little bit different from your standard machine learning and AI based things. What Katie was talking about with the base completions, and the team completions that we've gotten into, those things are all based on a machine learning algorithm that works across a relatively large corpus of code and learns a bunch of patterns from it, sort of after the fact, in frozen code, if you like; that code is learned across at one moment in time. What you saw there was dynamic learning.
And what PROSE was doing was tracking my AST deltas, my abstract syntax tree deltas. It looks at my code and it says: okay, when Mark was typing that thing, this kind of change occurred. It was a tree that looked like this, and then it turned into a tree that looked like that. So it was able to deal with the fact that, for instance, the variable names were different, and some of the constructs might be different because of that. It looks at those deltas, and once it finds some common ones, it tries to synthesize a transform program that lets the user get that fix I was showing you. So the thing that takes you from before to after, that's like a transformation program, a synthesized suggestion, that is made by example, by clustering those deltas in the ASTs as I go. And having learned that rule, it can then apply it anywhere in my code that I open up.

So that's pretty natty. I like it. I like it a lot, because it actually is learning. We've talked about how it learns from the kind of wisdom of the broad community, in terms of the work that we have in completions, and then about how we can learn from the wisdom of the narrower community on your team, when we're talking about team completions. But now we're talking about learning from what you type, what individuals type, and then eventually we're thinking maybe we get that knowledge out to the team as well. Wouldn't that be cool? If those rules that we found were actually a transferable asset, wouldn't that be a neat idea?

Anywho, this is something that's just a different kind of learning. So we're not hung up on the idea that everything we do has to be AI in the conventional sense of that word, that it has to be learning across large corpuses, that it has to be using big fat models, or even slimline models. We will do anything to have the machine find a pattern for you and help you apply it at the right point of need. That's the real key.
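The flow Mark walks through, normalize away the variable names, cluster identical before/after tree shapes, and turn a repeated shape into a reusable rewrite, can be illustrated with a toy sketch. To be clear about assumptions: this is not the actual PROSE engine, which synthesizes far richer transform programs over whole syntax trees; the `normalize`, `learn_rule`, and `apply_rule` helpers below are invented for illustration and only handle single expressions, using Mark's Fahrenheit-to-Celsius demo as the example edit (requires Python 3.9+ for `ast.unparse`).

```python
import ast

def normalize(expr, names=None):
    """Rename variables to positional placeholders so two structurally
    identical expressions compare equal. Callee names in function calls
    are kept fixed; only their arguments are renamed."""
    names = [] if names is None else names
    tree = ast.parse(expr, mode="eval")

    class Renamer(ast.NodeTransformer):
        def visit_Call(self, node):
            node.args = [self.visit(a) for a in node.args]
            return node  # leave node.func (the callee) untouched

        def visit_Name(self, node):
            if node.id not in names:
                names.append(node.id)
            return ast.Name(id=f"_v{names.index(node.id)}", ctx=node.ctx)

    return ast.unparse(Renamer().visit(tree)), names

def learn_rule(edits):
    """Cluster observed (before, after) edits by their normalized shape;
    any shape seen at least twice becomes a rewrite rule."""
    shapes = {}
    for before, after in edits:
        shared = []  # before and after share one placeholder table
        b, _ = normalize(before, shared)
        a, _ = normalize(after, shared)
        shapes[(b, a)] = shapes.get((b, a), 0) + 1
    for shape, count in shapes.items():
        if count >= 2:
            return shape
    return None

def apply_rule(rule, expr):
    """If expr matches the rule's 'before' shape, instantiate the
    'after' shape with expr's own variable names."""
    before_shape, after_shape = rule
    norm, names = normalize(expr)
    if norm != before_shape:
        return None  # the pattern does not apply here
    for i, name in enumerate(names):
        after_shape = after_shape.replace(f"_v{i}", name)
    return after_shape

# Two repeated edits, as the tool would observe them while typing:
rule = learn_rule([
    ("(tempF - 32) * 5 / 9", "to_celsius(tempF)"),
    ("(outside - 32) * 5 / 9", "to_celsius(outside)"),
])
# The learned rule now fires on a third, unseen occurrence,
# even though the variable name is different:
suggestion = apply_rule(rule, "(reading - 32) * 5 / 9")
# suggestion == "to_celsius(reading)"
```

The key property Mark highlights is visible even in this toy: because matching happens on the normalized tree shape rather than on raw text, the rule applies to occurrences a textual find-and-replace would miss.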
And so that second piece that you saw there is an example of how we're doing that, with PROSE, with IntelliCode suggestions. And folks, if you want to try this out, you can try it in any of our recent previews. So go download the 16.6 preview bits right now and you can try this out. Just go to the IntelliCode preview option and turn it on. It's really easy to try out, and we'd love to hear the feedback. But that's one of the things we're playing with as a team: trying to expand the kinds of learning that we're willing to take on.

All right. So you bring up a good point: go try the latest Visual Studio preview, which is 16.6 preview something. But IntelliCode also works in Visual Studio Code, right? So, Katie, how do people get the Visual Studio Code one, and is there a benefit of one over the other? Or are they the same?

Sure. So, the benefits of Visual Studio versus Visual Studio Code: honestly, I think it's a matter of preference. I think there are a number of developers who are accustomed to using Visual Studio. And in Visual Studio, right now, the feature that I mentioned previously, team completions, we've only actually enabled that for C# and C++ within Visual Studio. So if you're deciding whether or not you want to try team completions: right now, if you are on Visual Studio Code, we haven't actually expanded there yet. It is on our roadmap and in our backlog. But right now, if you're a C# or C++ developer and you'd like to have custom completions tailored to your code base, you would have to be on Visual Studio.

However, we do have base completions. So, getting those starred completion suggestions for JavaScript, TypeScript... and I'm probably blanking on a couple: XAML, SQL, Python, JavaScript, TypeScript, C++. We have those available in Visual Studio Code.
And the way to get that is you would go to the Visual Studio Code marketplace and install the IntelliCode extension; you can also do that directly in Visual Studio Code. The other thing I would say is that we also have preview features you can try out in the Visual Studio preview, as Mark suggested previously and as Mads sort of alluded to. You can try out all of our latest features; a lot of our previews have sort of started in Visual Studio. So you can go to the Visual Studio preview and access a number of our preview features there.

So, yeah, I don't know if there's a "which one is better". I think it is a matter of developer preference and what you are accustomed to working in. I do think that Visual Studio does offer a more comprehensive experience, and if you are familiar with setting up your developer toolset there, we just sort of augment that experience.

All right, very cool. So when I see, Mark, what you just showed, and other things that I've seen IntelliCode do, it's close to magic. And, you know, I realize I also don't understand the ML and AI concepts very deeply. And there's a saying by someone that sufficiently advanced technology is indistinguishable from magic, or something, right? But what kind of reactions do you get out there from people using it or seeing it for the first time? What do you hear, Mark?

Well, I mean, it's interesting. It varies, from shock and the "magic" response, to "how on earth is it doing that?", to suspicion sometimes. Like, I'm not sure how it could do that, or why it would do that. But one of the things I think we're always trying to be on the IntelliCode team is humble about the technology that we're using.
And, you know, it's not always perfect because these things are based on algorithms that do not always produce perfect results with 100% certainty. We always want to say that the things that we're producing are a suggestion or something of that sort. And the reason why is because we don't want people to think that this is like static analysis. And even that can get it wrong, of course; if you've been around static analysis long enough, you know there are false positives there. But if you think about the suggestions feature that I just showed you, the amazing sort of ability to spot patterns, does it get that right 100% of the time? No, it does not. But when it does get it right, when those suggestions are good, they essentially end up being almost like the analyzer you didn't have to write. Because not many people have taken the time to go and write a linter or an analyzer or something of that sort. But when we can actually do this right for you, we can really add value. And so we're on a delicate balance, trying to make sure that, you know, we surface the suggestions that are useful to you. So for instance, in that suggestions feature, one of the things going on under the hood that we learned quite early on is we really want to make sure that even the AST delta suggestions that we get, which may actually be a correct observation in terms of the transforms, but which would increase the number of errors in your code, those things we don't currently suggest. It may be that they're valid, and it may be that in future we actually have a bit more control over that and let you say, actually, I'm perfectly happy to have my code be in a broken state as an intermediate step.
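The filter Mark describes — suppress a learned edit if it would break the code — can be sketched roughly as follows. This is a toy illustration, not IntelliCode's actual implementation: `count_errors` here is a stand-in "analyzer" that only counts unbalanced parentheses, where the real feature would consult a proper compiler or language service.

```python
# Toy sketch of the "don't suggest edits that increase errors" heuristic.
# count_errors is a hypothetical stand-in for a real analyzer pass.

def count_errors(source: str) -> int:
    """Toy 'analyzer': counts unbalanced parentheses as errors."""
    depth, errors = 0, 0
    for ch in source:
        if ch == "(":
            depth += 1
        elif ch == ")":
            if depth == 0:
                errors += 1   # closing paren with no opener
            else:
                depth -= 1
    return errors + depth      # leftover depth = unclosed openers

def should_suggest(source: str, transformed: str) -> bool:
    """Offer a learned edit only if it does not increase the error count."""
    return count_errors(transformed) <= count_errors(source)

print(should_suggest("foo(bar)", "foo(bar.Baz())"))  # still balanced -> True
print(should_suggest("foo(bar)", "foo(bar.Baz()"))   # would break code -> False
```

The point of the sketch is the gate itself: a transform can be a perfectly valid learned pattern and still be withheld because applying it at this spot would leave the file in a worse state.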
And you know that when you're editing, sometimes that happens, right, that you have to go through broken to get to good. But initially, in terms of making this useful to people, we felt that the sweet spot was that we need to make sure we don't make suggestions that would break the code. Now, you know, that's going to change over time. But in terms of reactions from people, people generally don't react well to suggestions that make their code break, unless they know it's going to do that, unless they know the nature of the thing they're applying. So that's one of the things we've heard, and that's part of what's gone into the suggestions setup as it stands right now. But also that notion of humility in terms of the suggestions that we make, I think you'll always find that. So for instance, if you think back to the IntelliSense example, IntelliSense will show you everything that's type-valid for the location that you're typing. Okay, when you do a method or a property or an argument, those things, if they're valid code, then IntelliSense will show them to you. This is the pre-IntelliCode IntelliSense, right? It will show them to you. All we're doing is kind of sprinkling some recommendation sugar, some suggestion sugar, on top of that, right? And making it say, okay, but we think, given your context, you might want to do that. And notice I'm using think and might there. We're not saying you'll definitely want to do that. We're saying you might want to do that. So it's very important with these AI-assisted tools that we make sure we understand that and that we express that in the kind of experiences that we surface. So I would say that's been true across everything that we've done so far. And I anticipate it probably stays that way. Yeah. Yeah. So is it like a moment in time, like the AI technology has just not evolved enough yet that you can be more prescriptive, saying this is what you want to do?
So right now we're just saying, oh, maybe this is what you want to do, and we suggest something. But is that just because technology hasn't caught up, or? I don't know. I don't know that that's actually ever going to completely go away, Mads, because there's such a broad spectrum of things that are possible to express. You know, we get more precise and more confident, and we spend a lot of time obsessing about precision and coverage, which are two key metrics that we measure all the time. Really, precision is about how often we get it right, and coverage is about how many places we can get it right. And so those are things we are deeply bothered about. We want to drive those numbers higher so that we cover more stuff and we cover it at a higher precision. That's always goodness. But getting to 100% is not necessarily our goal. Being useful is our goal, right? Being useful to the developer and helping them to move forward faster. Those are the goals that we care most about. Right. That reminds me of sort of a mantra that we have here in my family: never let perfect get in the way of good. There's so much good to be had, and it doesn't have to be perfect to add a tremendous amount of value. Yeah. So I've heard some people be a little bit concerned about AI, right? Like, is it going to take my job? Where are we with the AI and machine learning inside Visual Studio? Are we getting closer to being able to automate development as a whole, as a discipline maybe? What are the sort of concerns that you hear from people, and what are your answers to them, Katie? So I'd love to talk with Mark about this too, because I know Mark and I talk about this a lot. But from my perspective, we are so far away from that as a discipline.
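To make the two metrics concrete, here is one plausible way to compute them over a log of completion sessions. This is my own framing for illustration, not IntelliCode's published definitions; the `CompletionSession` record and the exact denominators are assumptions:

```python
from dataclasses import dataclass

@dataclass
class CompletionSession:
    offered: bool    # did the model offer a starred suggestion here?
    accepted: bool   # did the developer's actual choice match it?

def precision(sessions):
    """Of the places where we made a suggestion, how often was it right?"""
    offered = [s for s in sessions if s.offered]
    if not offered:
        return 0.0
    return sum(s.accepted for s in offered) / len(offered)

def coverage(sessions):
    """Of all completion opportunities, in how many did we offer anything?"""
    if not sessions:
        return 0.0
    return sum(s.offered for s in sessions) / len(sessions)

log = [
    CompletionSession(offered=True,  accepted=True),
    CompletionSession(offered=True,  accepted=False),
    CompletionSession(offered=True,  accepted=True),
    CompletionSession(offered=False, accepted=False),
]
print(precision(log))  # 2 of 3 offered suggestions were right -> 0.666...
print(coverage(log))   # 3 of 4 opportunities were covered -> 0.75
```

The tension Mark describes falls out directly: offering suggestions in more places raises coverage but tends to pull precision down, which is why the team tracks both rather than chasing either one to 100%.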
And I think that, as Mark was highlighting in his previous comment, the goal of what we're trying to do in embedding or integrating AI and machine learning techniques into Visual Studio and Visual Studio Code, our main goal, is to promote common usage and sort of promote those common practices, best practices. And at this juncture, in the grand scheme of things, I think we're very, very nascent. We're at the point where what we're doing is providing just helpful contextual examples. So that's what completions do. That's what suggestions, as you saw Mark demo previously, are doing: providing suggestions that really just sort of help you remain in the dev environment and help you keep in that flow, that developer flow, and not get distracted by going to the web and searching this and then seeing those shoes that you've been wanting to purchase, or seeing that $10 USB speaker that you're like, I want to get that. It helps you keep focus on the task at hand. And I think we're really far away from automating people's dev jobs away, developer jobs away. And I don't even think that that is a goal that we ever actually intend to reach. Our goal has been mainly just, how do we enable developers, who already are sort of a very valuable resource? How do we keep them focused on whatever their task is? How do we enable them to be as productive as they possibly can, and not productive for, you know, the man or the business, but productive for their own selves? How do we actually provide them with the tools so that, as they're learning development and onboarding to coding, they're actually able to see this as, like, this is actually a fun task? There's some sort of ludic engagement aspect to just development.
So I think from our perspective, that's sort of our spin, but I'd love to have this conversation with Mark as well. So I know Mark is, you know, very much on the edge of his seat to answer. All right, Mark. Let's hear it. Oh, man. Yeah. I mean, this is the heart of where my head's been for the last couple years, you know. This is not at all about taking away people's jobs. And as Katie says, I mean, even technologically we are far from that, but it's just not what we're about. We are about enhancing the experience of development, about making development, yeah, I dare say, fun, right? To actually make it less painful and less difficult to stay in the zone. And I love that word ludic. One of my great heroes was Bill Hill, by the way, who sadly passed away. And that notion of ludic engagement in coding, that notion of being, you know, I'm locked into this space, I'm thinking about stuff, I don't have to be distracted away to some other place in order to go and figure out what I'm doing. I can stay in the zone and the tool just helpfully augments me, right? Helpfully augments me at just the right moment with the knowledge that I couldn't otherwise get. It's giving me superpowers, basically. It's giving me superpowers by saying, I can now as a developer know, for instance, all the patterns that my team has developed. So I don't have to worry about tripping over that banana skin whilst I'm in the process of figuring out how to implement this great piece of technology that I'm actually implementing. My head's on the technology. It's not worrying about those other things that I might be missing. Or, you know, let's imagine, and maybe I'm just getting a little bit old, but here's the thing: we know a lot of APIs, but we sure as heck forget a lot of APIs as well.
So I don't want to have to be going back to the documentation to reboot myself into the zone when I hit that piece of code that I haven't touched in three years, right? I want to be able to know what the common usage is right now so that I'm not, you know, losing my context, losing my vibe. Really, we want to make development more pleasurable. We want to give developers superpowers. We want to give them that capacity to get more done, not just because it's kind of good for business, but because it's good for the soul. Anyway, Katie, I'm sure you want to come back on that, right? Sure, Mark. Yeah, I really wanted to actually sort of segue this into something that IntelliCode provides. So an example of how we're actually trying to make development more helpful and also more pleasurable, as Mark was mentioning, is that we're trying to reduce the arguments that are had over very, you know, sometimes seemingly trivial things, like for example your code styles, that happens all the time, or formatting. So do we have a tab, or do we have two spaces or three spaces or four spaces? And things like that are often just fights, or not necessarily fights, but conversations that go on for way too long and actually detract from your ability to just code. So how do we actually solve that? So I'm actually going to share my screen and show you an example of how we're trying to aid developers and development teams in avoiding these sort of back-and-forth arguments about code styles and code formatting. Okay, so as you can see on my screen, I've loaded a solution, and the feature that I'm actually going to demo is EditorConfig inference, where IntelliCode infers an EditorConfig file for you.
So if you're familiar with EditorConfig files, an EditorConfig file is a cross-platform file with a common, defined syntax that defines the code styles and code formatting conventions that exist within a solution or project. So typically what developers would do is they would write an EditorConfig file by hand. But what happens if you're in a team and you don't actually know what styles already exist in the project, but some people have opinions? How about, instead of starting from opinions, why not start from an analysis of your current code base and the code conventions and code styles that exist there, and then from there determine what it should be? Bring the data to the conversation or the debate. So what you can do is you can actually add a new EditorConfig file, and what IntelliCode will do in the background is actually analyze that solution, and it has now generated an EditorConfig file. So instead of having to go and define all of these rules, IntelliCode has actually just defined this for you. So this is what IntelliCode has produced in the EditorConfig file. I've already opened up a file, it's called Paint Object Constructor, and something that you'll notice, I've already sort of loaded up the line, sorry, is that it's suggesting that I actually take an action and make a fix, and that is based off of the coding conventions that have been defined in the EditorConfig file that IntelliCode has already generated on my behalf. And because this lives in the solution, so you'll see the EditorConfig file here, once I commit this, it will automatically be shared with the rest of my team.
And so as you can tell, I could actually introduce this, but I can actually take action on this suggestion. But I also wanted to show you what the actual rule is that is providing me with this suggestion. And if you'll notice, if I go down here, there is a rule that says that you prefer methods to be prefaced with this. And so I guess what I'm trying to say is that the scenario this would be very, very impactful and useful for is a scenario where I'm in a team and we've had a lot of conversations back and forth about what should be the common style or the common coding practices for my team. And often people are bringing their opinions to the conversation, based off of their own history. But why don't we take the code as the main artifact? And we say, here is what I found after I've analyzed our code base: these are the styles that already exist within my code base, or within the solution, or within this project. And from there, bring this to the conversation and say, IntelliCode has generated this EditorConfig file based off of the styles that exist in our code base. Let's actually see: do we actually want our code base to look this way, or do we want to change it in a different way? So this not only promotes sort of the common or historical code styles, but also gives you that starter for the conversation, and reduces the time that you're going back and forth on debates. You can just say, this is what sort of exists commonly in our code. Do we want it to be different? And the cool part about it is, once you actually have this defined EditorConfig, if it's C# in this case, the Roslyn analyzers will start providing you with warnings and pushing suggestions for you to change. That is pretty cool.
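For readers who haven't seen one, an inferred EditorConfig might contain entries like these. This is an illustrative, hand-written fragment, not actual IntelliCode output; the first code-style rule below is the real .NET option behind the "prefer methods to be prefaced with this" suggestion Katie points at in the demo:

```ini
# Illustrative .editorconfig fragment (hand-written example, not actual
# IntelliCode output) showing the kinds of rules inference can generate.
root = true

[*.cs]
indent_style = space
indent_size = 4

# Prefer methods to be prefaced with "this." (the rule shown in the demo)
dotnet_style_qualification_for_method = true:suggestion

# Prefer var when the variable's type is apparent from the right-hand side
csharp_style_var_when_type_is_apparent = true:suggestion
```

Because the file is plain text checked in next to the solution, committing it is all it takes for teammates' editors and the Roslyn analyzers to start enforcing the same conventions.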
Okay, so IntelliCode does EditorConfig formatting as well. So it's really a broad set of features that your team is doing. So is it fair to say that IntelliCode is not a feature, it is more like a team that works on a certain type of problem, Mark? Yeah, that's absolutely right, Mads. We work on the set of problems where we believe we can bring this kind of pattern understanding to bear to give you good, useful productivity features that augment your developer workflow, right? So we're all about that. So in a sense, we're a set of AI-assisted development tools that we're trying to apply across a broad spectrum, whether that's in VS or in VS Code, whether it's to do with things that are happening at edit time, or whether, as Katie was just showing, that EditorConfig artifact can play out at CI build time as well. So we don't really mind where we add the value as long as it's helpful to developers and helping them to accelerate. But our core thing is to distill down the pattern wisdom and bring it to you at the point of need, right? So that's our motto, if you like. Yeah, so we're not just one thing. We're not just about IntelliSense. We're about any time we can provide you with helpful insights based on patterns. That's the deal. Okay. So you're mentioning this, but you said that you're basically doing a lot of work up front and then you present it to the Visual Studio user as they need it. Does that mean that the overhead of running the machine learning and AI engine doesn't happen in real time on the developer machine's CPU and take away performance from Visual Studio? Has it all happened beforehand, or how does that work? That is an awesome question. And in fact, the answer is complicated. So in some cases, the answer to your question is yes.
We offload, for instance, the creation of those machine learning models, and one really exciting development that we should talk about, actually, is that we've been doing some enhancement of the completions model with deep learning. Those things are high-horsepower activities, and we want to make sure we keep them on our service so it doesn't gum up your machine. We don't want you to be having to run learning locally if you don't have to. But sometimes we do a little bit of work on your local machine. Really, we're trying to optimize that as much as possible, and the training, where it's necessary to do training, happens on remote machines where the big horsepower lives. Or we keep the training process, the algorithm, to a point where it can learn very, very efficiently. So you saw when I was doing the suggestions work that when I was typing away there, there was analysis going on as I was typing; as it turns out, that algorithm is running locally on the user's box. And we've done a lot of work to make sure that that actually remains highly efficient in terms of memory usage and CPU usage, so that we don't swamp you and don't take away from your editing smoothness, right? We don't want to be doing that. We don't want to be flattening your battery. We don't want to be doing any of that kind of thing, particularly in these days when we're working remotely more. You've got to be laptop-friendly, right? And the reality is that, you know, it's horses for courses, as we say in England, right? Basically, you pick the right tool for the right job. And so when we are doing things that are extremely complex, like deep learning models and so forth, chances are you're going to find us doing that on a server somewhere on your behalf so we don't eat your machine, CPU and battery. Where it's appropriate, we will sometimes do local work as well.
But we will always work hard to make sure that that's not something that's going to impact you too badly. So kind of not a simple answer, but, you know, hopefully it makes sense. Yes, thank you so much. It totally does. And it's wonderful that we can get great AI capabilities without using up the battery on our laptops here. So, okay, we are almost at the end. And I want to end this with a quick question to both of you. So let's start with Katie. Katie, what is it that excites you the most about the future of IntelliCode, that you're going to work on in the foreseeable future? We've lost her. She's muted. That's such a great question, Mads. Yeah, that's such a large question. What most excites me? So I have mainly been focused on the completion space. And so for me, there are a number of goodies up our sleeves that I will wait to present or talk about in depth. But I will say that I'm very excited about working really closely with some early customers who are interested in dogfooding, and just learning from them about these upcoming experiences that we're working on. So I'm being very coy and not being very forthright about what we're working on. But I will tell you that it's really, really exciting, and it's in the completion space. And I'm looking forward to just learning and hearing from our customers about these new experiences. Because I think that it will really sort of take what we're, yeah. So I think that it will be great. Awesome. And Mark, same question for you? Yeah, absolutely. I mean, customer input is super vital to us. And I'm really interested to hear other places where people have tedious tasks that they would like us to take out of their way. But I'm personally, you know, very excited about the ways we're going to be able to spread out the knowledge that's in your team and get that out to everyone in your team so they can take advantage of it.
I think that's going to make a huge difference to people as they go forward. And we've definitely got some good things coming in that domain. And also new ways for developers to express their intent in a more compact way. So, you know, whether that's by typing something once and getting us to do something again, or whether there are just other ways to say what they want the machine to do for them. I think there's lots to be done in that space. And watch this space. We have lots to say at Build, so come along and watch our talk there as well. Awesome. Well, thank you so much, both Katie and Mark, for joining me here today. I hope the viewers out there had a good time and learned something new. I sure did. So thank you so much. We're at the end. It's been an hour. And I hope I'll see you again next week, Thursday morning at 9 a.m. Pacific time, where we will talk about the UX lab that we have here at Microsoft that we use in Visual Studio to interview customers, do eye tracking, and learn a bunch of cool things, and how we apply that. And what are some of the learnings we have from that? When does it work? And when does it work really well? And I think there's something in that that you can apply as well at home. So make sure to tune in next week. Thank you so much.