Hello, thank you for attending. Yeah, first DEF CON, everything is impressive, the size of the room is also impressive. So, we are Paul and Thomas, two vulnerability researchers from the Sonar R&D team. Paul? Hi, I'm Paul. I come from the CTF scene, and in general I like to break JavaScript things, so VS Code is a very good place for that. And on my side, I'm more like a PHP guy. It's been giving me a job on the offensive security side for like six years, and I hope it's going to keep on giving for the next decade. I also like to do some exploration in the memory safety world from time to time. Our company, Sonar, writes static analysis tools to help developers write clean code. It's like a spell checker, but for code in general. And we use zero-days that we find ourselves to fuel product innovation: we found more than 150 zero-days over the last two years, we were nominated for the Pwnie Awards and the PortSwigger Top 10 Web Hacking Techniques, and we also play CTFs from time to time. Because the journey of most developers starts in the code editor, we were wondering about the security of these code editors. Everybody usually has strong opinions on their favorite code editor. Paul likes Nano, which is weird, don't ask me, but something like 8% of all developers claim to use Nano for some reason. Sometimes, though, you need to get the job done and you start using bigger IDEs like Visual Studio, the IntelliJ suite, or VS Code, which is the most popular one with something like 75% of the market share. So let's start with something a bit interactive with VS Code. You've cloned some Git repo online, or somebody just sent you some source code, and you want to open it in VS Code, because it's your job as a developer to read and edit code. So please raise your hand if you think that nothing wrong is going to happen. Then next slide, conclusion, and the talk is over. Nobody, right? Okay.
And raise your hand if you think something bad may happen in this context. Okay. So some people are not sure. So I'm going to show you a quick demo to prove that it's real. We just received something. Oh, sorry, maybe I can zoom in a bit. Yeah, okay. So you just received something, or you just git-cloned something onto the desktop. It's just an "open me, please", some HTML file with a Git repo, nothing sensitive. I go into VS Code, I open my folder, I get a security prompt popping, but a calc popped before this prompt. I didn't do anything. So something is fishy, and we're going to get back to it later. But it's definitely unsafe to do some things with VS Code, and this is what we want to show you today: what's wrong. There's no standard threat model for developer tools; we don't know what's unsafe. And before doing this research, when we found this thing, we were kind of surprised. Is this a normal thing? Is it supposed to be unsafe to read somebody else's code? Because, I mean, it's my job, and every day I open random people's PHP code that I find online. So if I do it, does it put the security of my system at risk? Is there any risk in opening somebody else's code? Does my IDE run this code? It just happened with a decompiler that's used by malware analysts: in practice, in the background, it would run the code you were decompiling, which is not something anybody wants, I guess. And also, when you have fancy features like VS Code Remote Development, do you know if it's safe? Microsoft doesn't say anything in this regard. Some background, also, is that we see more and more threat actor campaigns against developers. If I were a threat actor, I would try to target developers; they would be an easy target. They do plenty of things all day with everybody else's code, with dependencies and stuff. And recently, we've seen threat actors using a Plex one-day in the LastPass campaign.
So they compromised a DevOps engineer through their Plex media server instance, and then I think they used a keylogger to get secrets to access production, and they compromised everybody's password vaults that way. There was also Google TAG, the Threat Analysis Group: they reported a campaign attributed to North Korea, where somebody would reach out to prominent infosec figures on Twitter and say, oh yeah, I have this exploit, I can build it, can you help me do it? And when you're a nice internet citizen, you're like, yeah, I'm going to help you. And if you built the project, it would compromise you and run some weird PowerShell things in the background. So it's definitely a thing. With this presentation, we want to show you around the different attack surfaces of these IDEs, including VS Code. Oh, it's going to be about VS Code anyway. First, Paul will show you the software's architecture. You will see it's a huge piece of software, so it needs some background understanding of how it works and how it's developed and designed. Then we can start looking at the most common sources of risk that we've seen in VS Code, in the core and in popular plugins. It's based on our research, so we will tell you when it's our own bugs, and also on research done by external researchers, and we'll credit them accordingly every time. Then we will look at the reporting process at Microsoft; we have a fun anecdote to share. Most of the bugs you will see require some degree of interaction, like you've seen: I just opened a folder on my desktop, and I think that's expected from VS Code. So we still call everything RCE, because the attacker is remote, but it would be more like arbitrary code execution that's triggered by somebody who's remote and carried out locally. So we call it RCE, ACE, it's sort of the same thing in this context. So, Paul? Thank you. Yeah, so let's look at how VS Code is built.
First of all, it's based on Electron, which is basically Node.js and Chromium meshed together. So it's written in TypeScript, with a little bit of HTML, CSS and all the other web technologies sprinkled on top, which means it can also easily work in the browser. There are two web-based versions of VS Code, probably even more, on github.dev and vscode.dev, and they are basically the same experience; some things are different, but that's the gist of it. VS Code is highly extensible: everybody can build extensions for it, and you can also publish them to the marketplace, where everybody can search through them and install them. One cool thing that came up during the development of Visual Studio Code was the Language Server Protocol. It's basically a way to write support for a language once, and then all editors that support this protocol can reuse it, instead of having to code it for every editor again. VS Code is mostly open source. It has about 800,000 lines of code in its GitHub repo. When you download the version from Microsoft directly, it has some small proprietary parts, but you can also download the full OSS version if you want. Let's look at the components that VS Code is made of. First of all, it isolates and splits different things into different components, and it uses processes for that. There are some privileged processes, you can see them in red. There's, first of all, the main process. This is the one that starts up first and orchestrates everything else. Then there's the shared process. This one hosts the PTYs, the terminals that you can open in VS Code; it hosts, for example, the file watcher daemon and other utilities. And then there's the extension host process. This one, as the name says, hosts the extensions' background parts, whatever extensions want to run that is not UI. And then there are the less privileged parts, which are the renderer processes. This is the Chromium part, and it's basically just like your normal website.
It cannot really access the file system directly, execute commands or whatever; it's just rendering the UI. And this is how it looks, I'm sure many of you have seen it. The big outer red part is called the workbench. This is basically the core UI of VS Code itself. It's served from the vscode-file protocol, which is a special protocol, we will see that later. And then, for example, an extension, here the Markdown preview extension, can also show some UI, and this will be in a web view, which is similar to an iframe, so it's kind of isolated. It also comes from its own protocol, vscode-webview. So you can already see that these parts come from different places; there are some isolation mechanisms going on. But if one of the parts, for example the extension, wants to directly access the workbench, that will not be allowed by the same-origin policy, because they come from different origins. This is something that comes from the web and also applies here. But, of course, sometimes the different parts have to communicate, and for that, there are different IPC interfaces. Between origins in the renderers, there's window.postMessage, which is the regular web message-passing interface. Then, if a UI part, a renderer process, wants to talk to one of the utility processes, they can use MessagePorts to talk to it directly. Or if they want to talk to the main process, then that's the good old Electron way of using preload scripts and the context bridge. So, of course, to hunt bugs and to find them, you probably want to debug VS Code, because it's just easier to step through instead of having to put printf everywhere and recompile. For the UI part, you can just use the Chromium developer tools like you know them from Chrome or any other Chromium-based browser, and with that, you can debug extension UI and also the workbench UI.
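To make the isolation concrete, here is a minimal sketch of the kind of origin check a postMessage handler needs before acting on a message. This is our own illustration, not VS Code's actual code, and the origin strings are invented:

```javascript
// Hypothetical allow-list of privileged origins (made up for this sketch).
const TRUSTED_ORIGINS = new Set([
  'vscode-file://vscode-app',
  'vscode-webview://webview-panel',
]);

// A message handler that only acts on events from known origins. A handler
// that forgets this check would accept postMessage() calls from any frame,
// including an attacker-controlled page loaded inside a web view.
function handleIncomingMessage(event) {
  if (!TRUSTED_ORIGINS.has(event.origin)) {
    return { accepted: false, reason: 'untrusted origin' };
  }
  return { accepted: true, payload: event.data };
}

console.log(handleIncomingMessage({ origin: 'https://evil.example', data: 'hi' }));
// { accepted: false, reason: 'untrusted origin' }
```

Several of the bugs later in this talk boil down to a handler on the privileged side trusting a message or a navigation it should have rejected.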
If you want to debug the backend part of extensions, you have to start VS Code with the --inspect-extensions flag, and then you can just attach a Chrome DevTools instance and debug it normally. Or, the easiest way, you just use VS Code to debug VS Code. It's easy because they have pre-made launch configurations: you just clone the repo, open it in VS Code, and then you will see in the launch configurations tab that you can start the different processes, or everything together, under debugging. All right, now that we roughly know how VS Code looks on the inside, Thomas will start with the first attack surface, which is exposed network services. I think it's a bit generic, it's something you would find in basically any desktop application, but in the case of VS Code, it happened quite a lot. You may have services that start listening on the network: you may want to expose a development server on localhost, you may want to expose a server to everybody, you may want to expose a debugger, or just to communicate with other components on the system. In theory, to be safe, you would rather use some OS IPC mechanisms, but it's still JavaScript and TypeScript code, and I think people developing VS Code extensions are more likely to be web developers than system-level developers, so they never really use OS IPC mechanisms. In practice, even if you want to expose a debugger or a local server, a Unix socket would be enough, and it would be safer to not expose anything. Because if you start exposing ports, even on localhost, a website you browse may start doing requests to these ports. Even if it's listening only on localhost, you could do CSRF attacks, or use WebSockets to connect to the service. So listening only on localhost is not a good solution either.
If you want some examples of CVEs that were in plugins, there is one called Rainbow Fart, which is supposed to play sounds when you type keywords in VS Code. It sounds silly, but still, it got 128,000 installs, so people are using this extension. And Kirill Efimov from Snyk found that it would expose an HTTP service on port 17777, and if you go on this page, you would be able to upload a ZIP file to change the sounds it plays when you type in VS Code, and there was a ZIP-based path traversal in there. So basically, you would browse some random website online, it would do a CSRF attack to force your browser to send a ZIP file exploiting the vulnerability on this page, and then you would be able to overwrite any file on the local file system. So you would overwrite the developer's .bashrc file, for example, and you would be able to execute arbitrary commands from there. There were also more serious examples directly in the core of VS Code. If you remember, Paul told you that you can use the --inspect option to expose a Node.js debugger to the outside to debug VS Code. And it was enabled by default in VS Code at that point, in 2019. It was reported independently by Tavis Ormandy and phra, around the same week or month. In this case, the debugger was started on localhost on a random port every time you started VS Code. But it's a debugger: it's meant to change variables, it's meant to run code on your behalf. So if you were able to reach this port, you would gain RCE quite easily. In practice, it's not so easy, because of the way the Node.js debugger works: you first have to do an HTTP request to find out kind of a magic value, the WebSocket URL, and then you use WebSockets to connect to this destination. So you cannot just make a request to the local service on a local port, because it's not the same origin.
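That two-step dance can be sketched like this: the inspector's HTTP endpoint (/json) returns metadata that includes the WebSocket URL you must learn before you can attach. The sample response below is illustrative, shaped like a real Node.js inspector reply but with made-up values:

```javascript
// Illustrative copy of what GET http://127.0.0.1:<port>/json can return.
const sampleResponse = JSON.stringify([
  {
    id: 'deadbeef-1234',
    type: 'node',
    title: 'code',
    webSocketDebuggerUrl: 'ws://127.0.0.1:9229/deadbeef-1234',
  },
]);

// Step 1: read the "magic" WebSocket URL from the HTTP metadata.
// Step 2 (not shown): connect to it over WebSocket, which is not subject
// to the same-origin policy, and drive the debugger to execute code.
function extractDebuggerUrl(body) {
  const targets = JSON.parse(body);
  return targets.length > 0 ? targets[0].webSocketDebuggerUrl : null;
}

console.log(extractDebuggerUrl(sampleResponse)); // ws://127.0.0.1:9229/deadbeef-1234
```

The hard part for an attacker is step 1, reading that HTTP response cross-origin, which is where the DNS rebinding mentioned next comes in.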
So you would first use some DNS rebinding to access the service and get the WebSocket URL, and then, since WebSockets are not subject to the same-origin policy, you would be able to connect to this thing. I don't remember seeing any complete exploit for it, but I think it could have been exploited in that case. Now we can start to spice things up and dig into things that are more specific to VS Code. Another very interesting feature of Electron is that it provides a native layer to work with the underlying operating system, for better integration. You need it because you need file access, network access, but you can also expose things to the outside. One pretty nice form of IPC that's not relying on the network is protocol handlers: VS Code registers the vscode:// protocol handler, and if you have VS Code Insiders, which is the nightly version of VS Code, it's vscode-insiders://. When you click on one of these links, the OS gets the event and checks whether anybody registered for this scheme; in this case it's going to be VS Code. So it wakes up VS Code and says, oh, there is something for you. And this is one of the bugs that we found, in the Git built-in extension. It's called an extension, but it's built in, enabled by default, and you cannot remove it; it's part of VS Code, but it's still called an extension. It exposes a feature that lets you clone repositories when you click on a link. When you use GitLab, for instance, and you want to clone something, it gives you the command line with git clone, but you can also have this "open in your IDE" button that will ultimately bring VS Code to the front and clone the repo for you. So under the hood, the way it works is that the Git extension implements a class called UriHandler, and it handles the requests that come from the operating system.
If the path ends with /clone, it calls a method called clone with the full URL that you clicked. It then parses the URL and calls an IDE command called git.clone, which is something you can also invoke yourself when you press Command-Shift-P: you can type git.clone and it's going to run the same thing. Internally, git.clone calls something called cloneRepository, which is only a wrapper around the Git command; there is no reason to reimplement Git in TypeScript just for this feature, so they simply call git clone for you. And as you can see in the red square, if there is no space, it puts the URL as-is right there. So there's something interesting here, because we kind of control the way an external command is being called. There is no risk of command injection here, because it's not being executed in a shell, it's directly calling Git, but you can still add arbitrary arguments to Git. So if you put git --help, it's going to display the help. In this case, it's a git clone: the green part is what's fixed, what's constant because of VS Code, and in red is what we control. And we found out that if you pass the option -u, for --upload-pack, it's going to tell Git: next time you have to clone something, use this command instead of the usual external command. So basically, it tricks Git into running a command of your choice instead. There are some restrictions for exploitation, which are not that interesting, it's kind of CTF-y: you need a colon to trigger this behavior, because otherwise Git thinks it's a local repo, not a remote one, and does not call the upload-pack command; and you need to avoid spaces, so they don't get URL-encoded. So this is what the final payload looks like: we have this vscode:// thing, vscode.git/clone, we pass the URL, -u, and then we do some tricks to avoid spaces. And the resulting invocation is this git clone -u thing.
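Here is a minimal sketch of the pattern, with function names of our own (not VS Code's): passing an argv array to the Git binary rules out shell injection, but an attacker-supplied URL that starts with a dash is still parsed by git as an option, so positional arguments need their own validation (or a `--` separator before them):

```javascript
// Roughly how a wrapper might assemble the git invocation (no shell involved,
// so no command injection -- but the URL lands in argv unchecked).
function buildCloneArgv(url, parentPath) {
  return ['clone', url, parentPath];
}

// A minimal mitigation: refuse values that git would parse as options.
function isSafeCloneUrl(url) {
  return !url.startsWith('-');
}

console.log(buildCloneArgv('--upload-pack=calc', '/tmp/dest'));
// [ 'clone', '--upload-pack=calc', '/tmp/dest' ]  (git sees an option!)
console.log(isSafeCloneUrl('--upload-pack=calc')); // false
console.log(isSafeCloneUrl('https://example.com/repo.git')); // true
```

The same pattern explains every argument injection bug in this talk: the array API is safe against the shell, not against the target program's own option parser.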
So I'm going to show you what it looks like in practice. Let's say you go online on some GitLab self-hosted instance and you want to use this feature. So you're going to clone: you click, I want to open it in my VS Code. It prompts you whether you want to clone, but there is no way for me to check what's in there, so I have to trust it. So I'm going to say yes, I put it on my desktop. All right. And once again, we have this calc that just popped up. So yeah, kind of unsafe in the end. There was a very similar finding, I think a year before, by smaury from Shielder: also an argument injection when you click on links, this time for the Remote Development extension. We're not showing the code because it's closed source and we didn't spend time digging into it to make it readable, but it's basically the same thing: the host is directly put into the command line and you can inject arbitrary arguments, and you would exploit it with -o ProxyCommand. A quick aside: I love argument injection bugs, and I think, Paul, you're getting used to it, because we find many, many argument injection bugs all the time. So we created a page that lists all these -u, -o ProxyCommand, all these argument injection vectors that will help you execute arbitrary commands when you find argument injection bugs. It's free, it's open source, and it welcomes contributions: if you go on this page and find out that we missed good tricks, feel free to add yours. And now we can dig into something that's even more specific to VS Code, and that's workspace settings. There is support for per-workspace settings that come along with your source code repo. So if you want to share settings with other developers, like linter configurations, things like that, you can create a file named .vscode/settings.json. It's a way to share the settings, and it's automatically loaded when you open the folder.
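For example, a perfectly benign shared settings file might look like this (the keys here are common built-in settings, chosen just for illustration):

```jsonc
// .vscode/settings.json -- committed with the repo and applied automatically
// when someone opens the folder in VS Code.
{
  "editor.tabSize": 2,
  "editor.formatOnSave": true,
  "files.trimTrailingWhitespace": true
}
```

The convenience is exactly the problem: whoever authored the repo also authored these settings, and they are applied before you have read a single line of the code.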
So the question is, if you put your security researcher hat on: can I override sensitive settings? Are there even sensitive settings? And in 2017, so a long time ago, Justin Steven found out that there is an option called git.path, also from the Git built-in extension, that you can change to say: no, Git is not at /usr/bin/git, it's somewhere else. So if you ship a folder, a project, and in its settings you override this git.path setting, you can force it to invoke another command instead. As soon as somebody opens the folder, the setting is loaded, Git runs, and it calls this other command instead. There are some interesting tricks for the exploitation, because the payload has to survive a first `git --version` invocation from the VS Code install folder, and only then is it called from the project folder. So what he did was to set git.path to bash, so that the first call works, and plant a file called rev-parse in the project folder; bash would run that file as a script instead, executing his payload. There was also another example of a vulnerability abusing local data directly, found by David Dworken in 2020. When you do Node.js development, you use npm to store all your dependencies, and to this end, you write everything in a file named package.json. If you hover a dependency in this file, the npm plugin kicks in and tries to fetch and show you information, like the description of the package, how many installs, everything. And it would call a command and unsafely concatenate the name of the dependency into that command, so there was a command injection in there. It was later bypassed by Justin Steven again, after they tried to fix it: they started validating the names of dependencies, but it was a fail-open check. Basically, if a name didn't match what they expected, it would still say, yeah, it's safe.
It's a block-list approach instead of an allow-list approach, and it was once again proven to not be the best way to fix vulnerabilities. And now we get to Workspace Trust, which is also part of the name of the talk, so I have to explain it to you. It's a feature introduced in VS Code in 2021, and its goal is to reduce the impact of malicious folders. Everything you've seen before, where we would run arbitrary commands or abuse local data, could be blocked by this new trust-based system. And you get a new security assumption: all untrusted folders are safe to open in restricted mode, but as soon as you start trusting something, it's not a security bug anymore if somebody executes commands. So trusting a folder is always unsafe, and untrusted folders are supposed to be safe. This is what it looks like: this is the prompt that asks whether you trust the authors of the files in this folder, and it pops every time you open a folder. So you may be annoyed and just click yes every single time, but that's maybe not the best idea. There is some documentation that says what you risk if you press yes, but you would not expect "no" to be unsafe. The way it works under the hood is that all extensions, including built-in ones, declare their own capabilities in their own package.json file, and the default value is false, which means the extension won't run in untrusted workspaces. So you miss features of VS Code just because you clicked "no, I don't trust": you're not living the full VS Code life with all the fancy extensions, and you have a very limited IDE, which is maybe not what you were looking for in the first place. So they also introduced a value called limited, that lets an extension say: okay, if the workspace is untrusted, I'm not going to run this command, I'm going to remove a few of my features that could be unsafe, but everything else will run.
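In an extension's manifest, this declaration looks roughly like the following. The extension itself is hypothetical; the `capabilities.untrustedWorkspaces` field is the real mechanism, and the values here are just one example:

```jsonc
// package.json of a hypothetical extension
{
  "name": "example-extension",
  "capabilities": {
    "untrustedWorkspaces": {
      "supported": "limited",
      "description": "Features that execute workspace-provided programs are disabled in untrusted workspaces."
    }
  }
}
```

So each extension decides for itself which of its features are safe in restricted mode, and the platform only enforces what the manifest declares.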
And once again, the Git built-in extension, I think, is really a gift for us: it declares "supported", which means it's going to run in trusted workspaces and in untrusted workspaces too. When we were trying to see if there were security issues with this extension, we found out that Git, the command-line tool, supports several levels of configuration: there is a system-wide configuration in /etc/gitconfig, there is a global one in your home folder in ~/.gitconfig, and there is a local one in .git/config of the current repository. Now, cloning does not retrieve the local configuration, nor the other kinds of configuration; they have to be files you created yourself on the system. But you can put Git repos in archives, so a repo can come with its own local configuration that may be unsafe. And Justin Steven and us collided on some research where we tried to see if we would be able to hide Git repositories in subfolders of Git repositories, so that these files would be fetched when you clone the main repository. In a Git config, you can have interesting directives, like the one we have here, which is called core.fsmonitor. It's a way to tell Git: please run this command to see if there were changes since the last time you ran it. The default mechanism is really good, but if you have a huge monorepo with thousands of files, it's going to take some time to find the differences, so sometimes you may want something more specific for better performance. And this runs not only in the IDE but also when you use Git in your developer shell, so it's not specific to VS Code. We found out that by planting malicious configuration in .git folders, it's pretty easy to trick people into opening these folders and to have them execute arbitrary commands. And in fact, the first demonstration we showed in the intro was based on a malicious Git folder just like this, which could also have been hidden as a sub-repository.
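As a concrete illustration (with a harmless stand-in payload, not our actual exploit), a booby-trapped local configuration could look like this:

```ini
# .git/config shipped inside an archive or a nested "sub-repository".
# core.fsmonitor is supposed to name a fast change-detection helper,
# but git executes whatever command is configured here.
[core]
	fsmonitor = "touch /tmp/pwned"
```

Any tool that runs a plain git command inside this directory, an IDE, a shell prompt, a status widget, triggers the configured command.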
So the command here was a calc, and we bypass the Workspace Trust thing, because the Git extension runs in untrusted workspaces, and it runs before the prompt is even displayed to the user. So you have no real way to prevent it: you cannot review every single Git repo that you clone or download from the internet. And to show you a side thing that we found: we told you it works in VS Code, but it also works in developer shells. In this case I have zsh, with an Oh My Zsh theme that includes a Git integration, to show in my shell which branch I'm on and give me some contextualized information about my Git repo. It runs every time I cd into a new directory. So if I go to the desktop and then into my malicious Git repo, the calc is going to pop again. So even cd-ing into a directory, with something that runs Git on my behalf, is unsafe. In the end, all the developer tools don't really acknowledge these security risks; they say, oh yeah, it's up to you not to work with malicious Git repos, but you can't review everything by yourself every single time. There are a lot of things out there that are still affected by this, like Oh My Zsh and other shell prompts. We also reported it to many IDEs and many shell integrations; some of them tried to fix it, but in the end, it should be on Git to fix it, and they said no, it's an accepted risk, you should not work with untrusted Git repos. So now Paul's gonna show you a few things about XSS in VS Code. Yes, so as I told you earlier, VS Code has a built-in browser, part of it is Chromium, so of course there can be cross-site scripting, and in an IDE like VS Code, XSS bugs can be very nasty, because some parts of the UI always have to be kind of privileged.
I showed you the red part, the workbench: you will have buttons like "save file" or "run my build command", and if you press that button in the UI, that should happen, so the UI has to have a way to make it happen. The other thing is that extensions can have their own UI parts in web views, as I showed you with the Markdown preview extension, and even if we said, okay, all the code of VS Code itself is clean, no issues, no security vulnerabilities, all the extensions that you install from the marketplace could still have something, and that's then the first foot in the door to start your XSS attack. So let's look at the first example. This is a cool one; it was found by TheGrandPew and s1r1us of Electrovolt. They presented it here at DEF CON last year, so we will go over it quickly. Basically, they found a way to control the content of an extension's web view, in this case the Markdown preview extension, and then they used a meta HTML tag to redirect away. So now their attacker page is running in this web view, in this frame, and from there they can start their attack. They already took control over the frame, over the web view, and then they used the postMessage message-passing interface to talk to the outer part, to the workbench, and there was an unsafe handler that allowed them to load any file in that privileged frame. But there was a catch: they couldn't just load their own file from the attacker's server and run any code on the system. They had to find a way to actually load the file via the correct origin, via the correct protocol, and they found a bug in VS Code itself that let them do that. After that, they could use node integration, which is an Electron setting that basically says: okay, this origin can just use the Node.js APIs to run commands and so on. Then there was a pretty similar finding, this time by Justin Steven.
He found that in the Jupyter Notebook extension, you can also render certain things as markdown, and then it's the same thing all over again: you can put in some HTML. At first, he only found how to leak arbitrary files, because there's this fun CDN origin, a special one, and yes, the plus in the domain name is not a mistake: you can basically put in an absolute path and read any file by just loading it. Then Luca Carettoni of Doyensec improved it into an RCE, basically by using the same last stage as the other guys to compromise the privileged origin. And then there was a pretty different finding, this one by zemnmez of Google. They also used the Jupyter notebook markdown trick to render something as markdown, but they didn't have to go up this chain of first compromising the privileged origin and then using node integration; they found a way around that. They just created a link tag with a command link, from the command: protocol. This is a special one in VS Code that allows you to execute internal VS Code commands, so not an OS command, but basically something like: okay, focus the task bar, or open the terminal. In this case, they auto-clicked this link with JavaScript so the command would be executed, and they used the command that opens a new terminal, because with that, you can specify which shell executable and which arguments to execute, and that gives you direct arbitrary code execution.
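For reference, a command link in VS Code markdown looks roughly like this: the target uses the `command:` scheme, optionally followed by URL-encoded JSON arguments after the `?`. The command ID and argument below are hypothetical, and such links are only honored when the markdown string is marked as trusted:

```markdown
[Innocent-looking link](command:some.extension.command?%5B%22arg%22%5D)
```

From a renderer's point of view it is just a link, which is what makes it such a convenient escalation primitive once you can inject markdown.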
And from this we noticed that a lot of the UI of VS Code is actually markdown: the tooltips, the status bar at the bottom, the menus, a lot of things are just markdown strings being rendered, because it's easier than writing HTML, I guess. And if there's user- or attacker-controlled data being interpolated into this markdown, then markdown injection can happen. If that happens, and in that place of the code the markdown string is set to trusted, then you can use these command links in markdown as well. So if you have markdown injection, you can just put in a link that again uses a command to open a terminal with your own command. But it always requires user interaction, because the user still has to click on the link, and it can't be auto-clicked unless you have JavaScript execution in the first place. So with that in mind, we looked at some third-party extensions, for example GitLens. This is an extension with 25 million installs, it's pretty common and pretty popular. And there we found that when they rendered hover tooltips for commits, we could inject markdown: in this case, the data came from a commit message, and it was then unsafely interpolated into a markdown string, where we could add a link. In this case, we didn't use the terminal command, because we also wanted it to work in untrusted workspaces, to again bypass this Workspace Trust prompt, so we used the install-extension command. And because the marketplace is open, anybody can publish an extension there: you could publish a malicious extension and then use this command to install it into the victim's VS Code instance. This was fixed pretty quickly in version 14; the CVE is still pending. But it's a first good example, and this is basically how it looks: you open some repo, and you can see the blue bar on top, so it's still in the untrusted mode, but if you hover a line and then click on this very innocent-looking message, a calc pops, showing that we could have executed any command. And we found a
similar thing in the GitHub Pull Requests and Issues extension, which has 13 million installs, so it's also pretty popular, and it's by GitHub themselves. Again, in a hover tooltip, we found a Markdown injection. In this case the data didn't come from the local repository or the local workspace folder; it can be exploited remotely. Which means: let's say a maintainer is working on a project and uses this extension to look at issues and PRs. I, as an attacker, create a malicious PR with the exploit payload in its body. If they then use the extension to look at it, again we have the Markdown injection, and if they click the link, it's GG. So this one was also fixed pretty soon; the CVE is also pending. And this is what it looks like: first we create this, again, very important-looking issue on GitHub on some maintainer's project. Then, if they use the extension to look at it, there's a very innocent link, and if they click on it, a calculator opens. So yeah, this was basically the last of our attack surfaces. As we've seen, there are a lot of different things that can go wrong, so let's see how things went when we reported the bugs we found to Microsoft. We used the official MSRC platform for that. It's pretty cool compared to bug bounty platforms because it allows you to speak about the bugs after they were fixed, which we would like to do. For some people it has a little bit of a bad reputation because of delays, but we personally had a good experience, not many delays. And it's a nice thing because everything's centralized there, even the attribution, and you can always see whether it's in development or being released soon, so you don't have to wait without a response from the vendor for months. So let's look at the bounties we got. For the first one, which was the Git local configuration bug, we got a 30K bounty, which was pretty cool; we donated it to charity. And we even wondered a little bit why it was so much, because yeah, we didn't know, and it fell into the Azure
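The root cause in both extensions is the same: untrusted text interpolated verbatim into a Markdown string that is later treated as trusted. A minimal mitigation sketch, with an illustrative escape set (real extensions should also simply avoid marking strings built from untrusted data as trusted):

```javascript
// Escape Markdown metacharacters before interpolating untrusted data
// (commit messages, PR bodies) into a Markdown string. The escape set
// here is illustrative, not an exhaustive or official list.
function escapeMarkdown(text) {
  return text.replace(/[\\`*_{}\[\]()#+\-.!<>]/g, "\\$&");
}

// A PR body smuggling in a command link...
const payload = "LGTM [merge now](command:workbench.action.terminal.new)";
// ...is neutralized: the bracket/paren syntax no longer forms a live link.
const safe = escapeMarkdown(payload);
```

Escaping at the interpolation point keeps the tooltip readable while stripping the data of any Markdown meaning.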
bug bounty, and we even asked them why it was so much, whether they have a hosted VS Code version somewhere, but we never got a response. So the year after, we found something again, which was this time the protocol handler argument injection, and we thought: okay, easy, we get another 30K which we can give to charity, pretty cool. But the response was: oh, wait a minute, last time we awarded you the bounty in error, and actually all extensions, including the built-in ones that are shipped with VS Code, are out of scope; sorry, this time you're not getting any bounty. And yeah, we kind of disagree: we think extensions that are built in and shipped with the product should be in scope for the bug bounty. But at least they still fixed it, right? And this even brought us onto the MSRC leaderboard in Q2 last year, and the things we showed in GitLens and the GitHub extension brought us onto the leaderboard this year, which was pretty cool because we could go to the Microsoft researcher invite-only party yesterday. All right, so now let's wrap things up; we've seen a lot. The first thing we learned is that CVEs hint at buggy components: a lot of the fixed vulnerabilities were bypassed by Justin Steven because he just looked at the patch and saw that it was not complete, and that will probably be the same case in the future. So if you want to find some easy bugs, just look at what they fix and check whether the patch is complete. The next thing we want you to remember is: if you click trust, you lose. For example, the Git local configuration exploit still works if you trust the workspace, because of the way Git works, and it's basically not really possible for VS Code to fix it; they also say, if you trust it, it's your problem now. So please don't click on trust just to get rid of this dialog that's in your way; think about it, and if you don't trust it, press "I don't trust it". And then we found that attacking desktop applications is fun, and it's a lot different compared to attacking server applications that you
can reach over the internet, because it comes with a whole new set of assumptions and attack surfaces. We've also seen that bug bounty scope doesn't always reflect the real threat model; again, we think that built-in modules should not be excluded from the bug bounty scope, but we can change that. And finally, it's still unclear for many developer tools what their security threat model is. They're not built with security in mind, and it's not super clear what is the responsibility of the user and what is the responsibility of the tool to make sure everything is safe and no random commands are executed. And if you don't know that it's your job as a user, then you can't really protect yourself. We personally like the principle of least surprise, so we think a tool should not do some magic under the hood that you don't know will happen; it should warn you. And that's why we think the Workspace Trust feature that was added is a very welcome addition and a good start, and we hope other developer tools will include it as well. Thank you!